Everything posted by Rolf Kalbermatter
-
Can you tell me why you want to call PeekMessage() instead of simply doing the looping on the diagram and letting LabVIEW do the proper abort handling and everything else?
-
It might be, however I'm not aware of a MessagePump() exported function in the LabVIEW kernel. It may exist, but without the corresponding header file to be able to call it correctly it's a pretty hopeless endeavor. It definitely wasn't exported in LabVIEW versions until around 2009; I stopped trying to analyze what secret goodies LabVIEW might export around that time. Besides, this is not leaving the work to the user. This loop is somewhere inside a subVI of your library. Can the user change it like that? Yes of course, but there are funnier ways to shoot yourself in the foot! 😝 If someone thinks he knows better than me and wants to go and mess with such a subVI, it's his business, but don't come to me and whine if the PC then blows up into pieces! 😀

I'm not quite sure which statement you refer to with the NI reference. Technically every Win32 executable contains pretty much this code somewhere, and it should be called from the main thread of the process, the same thread that is created by the OS when launching the process and that is used to execute WinMain():

int PASCAL WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpszCmdLine, int nCmdShow)
{
    MSG msg;
    BOOL bRet;
    WNDCLASS wc;

    UNREFERENCED_PARAMETER(lpszCmdLine);

    // Register the window class for the main window.
    if (!hPrevInstance)
    {
        wc.style = 0;
        wc.lpfnWndProc = (WNDPROC)WndProc;
        wc.cbClsExtra = 0;
        wc.cbWndExtra = 0;
        wc.hInstance = hInstance;
        wc.hIcon = LoadIcon((HINSTANCE)NULL, IDI_APPLICATION);
        wc.hCursor = LoadCursor((HINSTANCE)NULL, IDC_ARROW);
        wc.hbrBackground = GetStockObject(WHITE_BRUSH);
        wc.lpszMenuName = "MainMenu";
        wc.lpszClassName = "MainWndClass";
        if (!RegisterClass(&wc))
            return FALSE;
    }

    hinst = hInstance;  // save instance handle

    // Create the main window.
    hwndMain = CreateWindow("MainWndClass", "Sample", WS_OVERLAPPEDWINDOW,
                            CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
                            (HWND)NULL, (HMENU)NULL, hinst, (LPVOID)NULL);

    // If the main window cannot be created, terminate the application.
    if (!hwndMain)
        return FALSE;

    // Show the window and paint its contents.
    ShowWindow(hwndMain, nCmdShow);
    UpdateWindow(hwndMain);

    // Start the message loop.
    while ((bRet = GetMessage(&msg, NULL, 0, 0)) != 0)
    {
        if (bRet == -1)
        {
            // handle the error and possibly exit
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }

    // Return the exit code to the system.
    return msg.wParam;
}

The message loop is in the while() statement, and while LabVIEW's message loop is a little more complex than this, it is still principally the same. This is also called the root loop in LabVIEW terms, because the Mac message loop works similarly, just a little differently, and was often referred to as the root loop. MacOS was not inherently multithreaded until MacOS X; in OS 7 and later an application could make use of extensions to implement some sort of multithreading, but it was not multithreading like the pthread model or Win32 threading. This while loop is also often referred to as the message pump, and even Win32 applications need to keep this loop running or Windows will consider the application not responding, which will eventually make most people open Task Manager to kill the process. This message pump is also where the (D)COM marshalling hooks in, and it makes that marshalling fail if a process doesn't "pump" the messages anymore. And the window created here is the root window in LabVIEW. It is always hidden, but its WndProc is the root message dispatcher that receives all the Windows messages that are sent to the process rather than to individual windows.
-
Well, I do understand all that, and that is what I have tried to explain to you in my previous two posts: how to work around it. If you for some reason need to execute the CLFN in the UI thread you can NOT use the Abort() callback anymore. Instead you need to do something like this: basically you move the polling loop from inside the C function onto the LabVIEW diagram and just call the function to do its work and check whether it needs to be polled again or is now finished. Whether you put a small wait in the diagram or not depends on the nature of what your C function does. If it is mostly polling some (external) resource you should put a delay on the diagram; if you do really beefy computation in the function you may rather want to spend as much time as possible in the function itself (but regularly return to give LabVIEW a chance to abort your VI). The C code might then look something like this:

// MgErr, noErr, DSNewPClr() and DSDisposePtr() come from extcode.h in the LabVIEW cintools directory
typedef enum
{
    Init,
    Execute,
    Finished,
} StateEnum;

typedef struct
{
    int state;
    int num;
} MyInstanceDataRec, *MyInstanceDataPtr;

MgErr MyReserve(MyInstanceDataPtr *data)
{
    // Don't specifically allocate a memory buffer here. If it is already allocated just initialize it.
    if (*data)
    {
        (*data)->state = Init;
        (*data)->num = 0;
    }
    return noErr;
}

MgErr MyUnreserve(MyInstanceDataPtr *data)
{
    if (*data)
    {
        DSDisposePtr(*data);
        *data = NULL;
    }
    return noErr;
}

MgErr MyPollingFunc(int32_t someNum, uint8_t *finished, MyInstanceDataPtr *data)
{
    if (!*data)
    {
        *data = (MyInstanceDataPtr)DSNewPClr(sizeof(MyInstanceDataRec));
        (*data)->state = Execute;
    }
    else if ((*data)->state != Execute)
    {
        (*data)->state = Execute;
        (*data)->num = 0;
    }

    // No looping inside our C function, and neither should we call functions in here that can block for
    // long periods. The idea is to do what is necessary in small chunks, check if we need to be executed
    // again to do the next chunk or if we are finished, and return the according status.
    (*data)->num++;
    *finished = (*data)->num >= someNum;
    if (*finished)
        (*data)->state = Finished;
    else
        usleep(10000);
    return noErr;
}

You could almost achieve the same thing by passing a pointer-sized integer into the CLFN instead of an InstanceDataPtr and maintaining that integer in a shift register of the loop. However, if the user aborts your VI hierarchy, this pointer is left lingering in the shift register and might never get deallocated. Not a big issue for a small buffer like this, but still not neat. And yes, this works equally well for CLFNs that can run in any thread, but it isn't necessary there. And of course: no reentrant VIs for this! You cannot have a reentrant VI execute a CLFN set to run in the UI thread!
-
I somehow missed the fact that you now work on HDCs. HDCs do not explicitly need to be used in the root loop, BUT they need to be used in the same thread that created/retrieved the HDC. And since in LabVIEW only the UI thread is guaranteed to always be the same thread during the lifetime of the process, you might indeed have to put the CLFN into the UI thread. Also, I'm pretty sure that Reserve(), Unreserve() and Abort() are all called in the context of the UI thread too. But what I'm not getting is where your problem is with this. I believe that the Unreserve() function is always called even if Abort() was called too, but that would have to be tested. In essence it changes nothing though. If you need to call the CLFN in the UI thread, you need to make sure that it does not block inside the C function for a long time, or Windows will consider your LabVIEW app to be unresponsive. And maybe even more important, the LabVIEW UI won't be serviced either, so you can't even press the Abort button in the toolbar.
-
You completely got that wrong! What I meant was: instead of entering the Call Library Node and staying locked in the DLL function until your functionality is finished, periodically polling the abort condition in the InstanceDataPtr from the C function, you do the looping on the VI diagram and re-enter the CLFN until it returns a "done" status value, after which you exit the loop. Now you do not need to actually configure the Abort() function, and you could even skip configuring the Reserve() function but still pass the InstanceDataPtr to your CLFN function. On entry you check that the InstanceDataPtr is non-null and allocate a new one if it is null, and then you store state information for your call in there. Then you start your operation and periodically check for its completion. If it is not completed you still terminate the function, but return a "not completed" status to the diagram, which will cause the diagram loop to keep looping. When the user now aborts the VI hierarchy, LabVIEW will be able to abort your VI whenever your CLFN returns with the complete or not-complete status. So you don't need the InstanceDataPtr to be able to abort your CLFN function asynchronously. But you still get the benefit of the Unreserve() function, which LabVIEW will call with the InstanceDataPtr. In there you check that the pointer is not null, deallocate all the resources in it and then the pointer itself. It's almost equivalent to using a shift register in your diagram loop to store a pointer that you pass into the function and put back into the shift register after the CLFN call on each iteration, except that when the user aborts the VI hierarchy you do not get a chance to call another VI to deallocate that pointer. With the InstanceDataPtr the Unreserve() function can do that cleanup and avoid lingering resources, aka memory leaks. You could do that for both UI-threaded CLFNs and any-threaded CLFNs; for the first it is mandatory to avoid your function blocking the LabVIEW UI thread, for the second it is optional but still works.
-
The Callbacks are executed during initialization of the environment (the instant you hit the run button) or when the hierarchy ends its execution. So returning those errors as part of the Call Library Node may not be very intuitive. And since Reserve() seems to be called at start and not at load time, it's also not easy to turn such errors into a broken diagram. So yes, I can see that these errors are kind of difficult to fit into a consistent model to be reported to the user.
-
The scope is supposedly still CLFN local, but I never tested that. The InstanceDataPtr is tied to the CLFN instance, and that is dependent on the VI instance. As long as a VI is not reentrant it exists exactly once in the whole LabVIEW hierarchy, no matter if it runs in the UI thread or another thread. And each CLFN in the VI has its own InstanceDataPtr. If a VI is reentrant things get more hairy, as each CLFN gets its own InstanceDataPtr for each VI instance. And if you have reentrant VIs inside reentrant VIs, that instantiation can quickly spiral out of reach of human grasp. 🙂 Think of the InstanceDataPtr as an implicit Feedback Node or Shift Register on the diagram, one of them for every Call Library Node. That's basically exactly how it works in reality.

Obviously if you run a blocking VI in the UI thread you run into a problem, as LabVIEW doesn't get any chance to run its message loop anymore, which executes in the same thread. And Windows considers an application that doesn't poll the message queue with GetMessage() for a certain time as being unresponsive. But calling GetMessage() yourself is certainly going to mess up things, as you now steal events from LabVIEW, and PeekMessage() is only a solution for a short time. So if you absolutely have to call the CLFN in the UI thread (why?) you will have to program it differently. You must let the CLFN return to the LabVIEW diagram periodically and do the looping on the diagram instead of inside the C function. You can still use the InstanceDataPtr to maintain local state information for the looping, but the Abort mechanism won't be very important, as LabVIEW gets a chance to abort your VI every time the CLFN returns to the diagram.

The nice thing about using an InstanceDataPtr for this, instead of simply a pointer-sized integer maintained yourself in a shift register in the loop, is that LabVIEW will still call Unreserve() (if configured, of course) when terminating the hierarchy, so you get a chance to deallocate anything you allocated in there. With your own pointer in a shift register it gets much more complicated to do the deallocation properly when the program is aborted.
-
Custom subarrays
Rolf Kalbermatter replied to HugoChrist's topic in Application Design & Architecture
It's really unclear to me what you are trying to do. There is simply no way to create subarrays in external code at the moment, and there is no LabVIEW node that lets you do that either. LabVIEW nodes decide themselves whether they accept subarrays and whether they create subarrays; there is simply no user control over that. Also, subarrays support a few more options than what ArrayMemInfo returns. Aside from the stride, the subarray descriptor also contains various flags, such as whether the array is considered reversed (its internal pointer points to the end of the array data), transposed (rows and columns are swapped, meaning that the sizes and strides are swapped), etc. Theoretically Array Subset should be able to allocate subarrays, and quite likely does so, but once you display them in a front panel control, that control will always make a real copy for its internal buffer, since it can't rely on subarrays. Subarrays are like pointers or references, and you do not want your front panel data element to change its values at any time except when dataflow dictates that you pass new data to the terminal. And the other problem is that once you start to autoindex subarrays, things get extremely hairy very quickly. You would need subarrays containing subarrays containing subarrays to represent your data structure, and that is, aside from being very difficult to make generic, also quickly going to consume even more memory than your 8 * 8 * 3 * 3 element array would require. Even if you extend your data to huge outer dimensions, a subarray takes pretty much as much memory to store as your 3 * 3 window, so you win very little. Basically, LabVIEW nodes can generate subarrays; autoindexing tunnels on loops could only do so with a LOT of effort to figure out the right transformations, and with very little benefit in most situations.
-
Your problem is that the correct definition for those functions in terms of basic datatypes would be:

MgErr (*FunctionName)(void* *data);

This is a reference to a pointer, which makes all of the difference. A little more clearly written as:

MgErr (*FunctionName)(void **data);
-
I don't quite have a working example, but the logic for allocation and deallocation is pretty much as explained by JKSH already. But that is not where it would be really useful. What he does is simply calculating the time difference between when the VI hierarchy containing the CLFN was started and when it was terminated. Not that useful really. 😀 The usefulness is in the third function, the Abort() callback, and the actual CLFN function itself.

// Our data structure to manage the asynchronous operation
// (requires extcode.h from the LabVIEW cintools directory, plus <stdlib.h>, <string.h> and <time.h>)
typedef enum
{
    Idle,
    Running,
    Abort,
} StateEnum;

typedef struct
{
    time_t time;
    int state;
    LStrHandle buff;
} MyManagedStruct;

// These are the CLFN Callback functions. You could either have multiple sets of Callback functions, each
// operating on their own data structure as InstanceDataPtr for one or more functions, or one set for an
// entire library, using the same data structure for all. In the latter case these functions will need to be
// a bit smarter to determine differences for different functions or function sets based on extra info in the
// data structure, but it is a lot easier to manage, since you don't have different Callback functions for
// different CLFNs.
MgErr LibXYZReserve(InstanceDataPtr *data)
{
    // LabVIEW wants us to initialize our instance data pointer. If everything fits into a pointer we could
    // just use it directly, otherwise we allocate a memory buffer and assign its pointer to the InstanceDataPtr.
    MyManagedStruct *myData;
    if (!*data)
    {
        // We got a NULL pointer, allocate our struct. This should be the standard unless the VI was run before
        // and we forgot to assign the Unreserve function or didn't deallocate or clear the InstanceDataPtr in there.
        *data = (InstanceDataPtr)malloc(sizeof(MyManagedStruct));
        if (!*data)
            return mFullErr;
        memset(*data, 0, sizeof(MyManagedStruct));
    }
    myData = (MyManagedStruct*)*data;
    myData->time = time(NULL);
    myData->state = Idle;
    return noErr;
}

MgErr LibXYZUnreserve(InstanceDataPtr *data)
{
    // LabVIEW wants us to deallocate an instance data pointer
    if (*data)
    {
        MyManagedStruct *myData = (MyManagedStruct*)*data;
        // We could check if there is still something active, signal it to abort and wait for it
        // to have aborted, but it's better to do that in the Abort callback.
        // .......

        // Deallocate all resources
        if (myData->buff)
            DSDisposeHandle(myData->buff);
        // Deallocate our memory buffer and assign NULL to the InstanceDataPtr
        free(*data);
        *data = NULL;
    }
    return noErr;
}

MgErr LibXYZAbort(InstanceDataPtr *data)
{
    // LabVIEW wants us to abort a pending operation
    if (*data)
    {
        MyManagedStruct *myData = (MyManagedStruct*)*data;
        // In a real application we probably want to first check that there is actually something to abort and,
        // if so, signal an abort and then wait for the function to actually have aborted.
        // This here is very simple and not fully thread safe. Better would be to use an Event or Notifier
        // or whatever, or at least use atomic memory access functions, semaphores or similar.
        myData->state = Abort;
    }
    return noErr;
}

// This is the actual function that is called by the Call Library Node
MgErr LibXYZBlockingFunc1(........, InstanceDataPtr *data)
{
    if (*data)
    {
        MyManagedStruct *myData = (MyManagedStruct*)*data;
        myData->state = Running;
        // keep looping until abort is requested or the operation is done
        while (myData->state != Abort)
        {
            // LongOperationFinished() stands for doing one chunk of the actual lengthy work
            // and reporting whether it has completed
            if (LongOperationFinished(myData))
                break;
        }
        myData->state = Idle;
    }
    else
    {
        // Shouldn't happen, but maybe we can operate synchronously and risk locking the
        // machine when the user tries to abort us.
    }
    return noErr;
}

When you now configure a CLFN, you can assign an extra parameter as InstanceDataPtr. This terminal will be greyed out, as you cannot connect anything to it on the diagram. But LabVIEW will pass it the InstanceDataPtr that you have created in the ReserveCallback() function configured for that CLFN. And each CLFN on a diagram has its own InstanceDataPtr that is only valid for that specific CLFN. And if your VI is set reentrant, LabVIEW will maintain an InstanceDataPtr per CLFN per reentrant instance!
-
Custom subarrays
Rolf Kalbermatter replied to HugoChrist's topic in Application Design & Architecture
Actually, arrays (of scalars) are normally allocated as one block. And while LabVIEW internally does indeed use subarrays, there is also a function that will convert subarrays to normal arrays whenever a function doesn't like subarrays. Basically, functions need to tell LabVIEW whether they can deal with subarrays, and unless they explicitly say that they can for an array parameter, LabVIEW will simply convert the subarray to a full array for them before passing it to the function. And the Call Library Node is a function that explicitly does not want subarray parameters. Theoretically it may be possible, but the subarray data structure is more complex than the one you show in your post. The interface to subarrays is not documented for external tools in LabVIEW and is never passed to any external function, interface or data client. It is not trivial to work with, and if LabVIEW allowed it at the Call Library Node interface, EVERY piece of external code would need to be prepared for a subarray to arrive, or there would have to be some involved scheme for letting a DLL tell LabVIEW that it can accept subarrays for parameters x, z and s, but not for a, b and c. Totally unmanageable!!! 🤮 So no, a Call Library Node will always receive a full array. If necessary LabVIEW will create one!
-
Unprintable characters on LavaG
Rolf Kalbermatter replied to LogMAN's topic in Site Feedback & Support
I reported all of them last week. I did not notice at first either, but in the last post the link somehow jumped out at me, and I was first thinking it was a special name for quote marks, wondering what that word would mean. 😀 Google quickly taught me that it is the name of some drug, and from there it was obvious. Then I looked at the other three before that and saw the same pattern, together with a pretty meaningless message.
-
You really should learn a little C programming, because that is what is required when trying to call DLLs. Or hire someone to make the LabVIEW bindings for you! Currently you are poking around with a pole in a heap of hay to find the needles hidden in there, but you have chosen not only to blindfold yourself to make it more "interesting" but also to tie your hands behind your back. DLL_START is the function pointer declaration and basically documents the parameters and return value the function takes. This is almost what you need for the Import Library Wizard, but not quite. A function pointer declaration is only similar to a function declaration, not the same thing. The Import Library Wizard needs a function declaration, and it needs to use the same name as what the DLL exports, otherwise the wizard can't match the declaration to a particular function. In your example you need to find out which function pointer declaration is used for which function. Then you need to translate it into a function declaration. So, you have determined that the DLL_START declaration is used for the function pointer for StartGenericDevice():

typedef int (*DLL_START) ( DWORD *dwSamplerate );

This will then have to be turned into the following function declaration:

int StartGenericDevice( DWORD *dwSamplerate );

With this the Import Library Wizard has a function prototype to use for the function exported from the DLL. Now you need to do the same for the other functions in the DLL.
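To make this concrete, below is a minimal sketch of what such a hand-written header for the Import Library Wizard could look like. Only StartGenericDevice() is confirmed by the discussion above; the second declaration and its DLL_STOP typedef are purely hypothetical illustrations of the same typedef-to-declaration translation.

// GenericDeviceImport.h - hypothetical header written only to feed the Import Library Wizard
#include <windows.h>

#ifdef __cplusplus
extern "C" {
#endif

// translated from: typedef int (*DLL_START)(DWORD *dwSamplerate);
int StartGenericDevice(DWORD *dwSamplerate);

// hypothetical example: translated from a typedef like: typedef int (*DLL_STOP)(void);
int StopGenericDevice(void);

#ifdef __cplusplus
}
#endif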
-
Well, if you have the source code for the GenericDevice_DLL_DEMODlg program, you may be able to verify which function pointer is assigned to which DLL function. Without that you are simply assuming things, and there is "ass" in the word assuming, which is where assumptions usually bite you! 😀
-
That's because GenericDeviceInterface.h doesn't declare the functions. And the other two DEMO header files don't really either; they are rather header files for an application that uses this DLL (and they declare C++ classes, which the Import Library Wizard can't do anything with). There are some function pointer declarations in GenericDevice_DLL_DEMODlg.h that the according sample code most likely imports dynamically from the DLL on initialization, but the naming is only partly similar to the function names the DLL seems to export, so it is a bit tricky. And there is no function pointer declaration for the GetRequestKey() export, but there are two function pointers for DLL_TEST and DLL_ShowData functions for which the DLL doesn't seem to export anything similar.
-
Sometimes you may be forced to develop in 64-bit (image acquisition, large data processing or similar requirements) but also need to interface to a driver whose manufacturer never made the move to 64-bit and possibly never will. The opposite is also possible: you develop in 32-bit because the majority of your drivers are only available in 32-bit, but one specific driver is only available in 64-bit. If the device protocol is documented and goes over a standard bus like GPIB, serial or TCP/IP, I would always recommend implementing the driver for at least the oddball device in LabVIEW instead of trying to mix and match bitnesses. If that is not an option, the only feasible solution is to create a separate executable and communicate with it through some IPC (RPC) mechanism.
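As a rough sketch of the separate-executable approach (all names here are placeholders, not an actual driver): a small 32-bit helper loads the legacy DLL and reports back over the simplest possible IPC, its stdout, which the 64-bit caller can launch and read with something like System Exec.vi.

// helper32.c - hypothetical 32-bit helper process wrapping a 32-bit-only DLL
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

typedef int (*DLL_START)(DWORD *dwSamplerate);

int main(int argc, char *argv[])
{
    DWORD rate = (argc > 1) ? (DWORD)strtoul(argv[1], NULL, 10) : 0;

    HMODULE lib = LoadLibraryA("LegacyDevice.dll");    // placeholder name for the 32-bit-only DLL
    if (!lib)
        return 1;

    DLL_START start = (DLL_START)GetProcAddress(lib, "StartDevice");  // placeholder export name
    int result = start ? start(&rate) : -1;

    // The 64-bit caller parses this line from stdout as the "RPC" reply
    printf("%d %lu\n", result, (unsigned long)rate);

    FreeLibrary(lib);
    return 0;
}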
-
Sometimes you don't really have a choice. But I agree, if at all possible, don't try to do it! In my case it is usually about my own DLLs/shared libraries, so this particular problem doesn't really present itself for me. I just recompile the DLL/shared library in whatever bitness is needed.

Tidbit: While there is indeed thunking, and Windows internally uses it in the SysWOW64 layer that makes the 64-bit kernel API available to 32-bit applications, this mechanism was very carefully shielded by Microsoft so that it is not available to anything outside of the SysWOW64 layer, and it therefore provides no thunking facilities for user code between 32-bit and 64-bit code. It also only works for 32-bit code calling into 64-bit code, not the other way around. I suppose Microsoft wanted to avoid a repeat of the situation when they went from the segmented 16-bit Windows memory model to the 32-bit flat memory model: back then they documented how the thunking could be done, and everybody started developing all kinds of mechanisms in weird to horrible assembly code to do just that. There was a lot of low-level assembly involved, it had many restrictions and difficulties, and once almost everybody had moved to 32-bit, really everybody tried to forget that episode as quickly as possible. So when going to the 64-bit model they carefully avoided this mistake and simply stated from the start that there was no in-process 32-bit to 64-bit translation layer at all (which is technically incorrect, since SysWOW64 is just that, but you can't use its services from application code other than indirectly, through calling the official Windows APIs).

The method used here, executing the different-bitness code in a separate process and communicating with it through network communication (or possibly some other inter-process communication method), is not really thunking but rather out-of-process invocation. There is no officially sanctioned way of thunking between 32-bit and 64-bit code, although I'm pretty sure that with enough determination, time and grey matter, some people have developed their own thunking solutions in assembly. But it would require deep study of the Intel processor documentation about how 32-bit and 64-bit code execution can interact, and it would probably result in individual assembly thunking wrappers for every single function that you want to call. Definitely not something most people could or would want to do. And to make matters worse, you could never be sure that there isn't some CPU model that does something just a little bit differently than what you interpreted the specification to say and catastrophically fails on your assembly thunk.
-
Error handling is always a heated discussion topic. You could argue the same about timeout errors on network and VISA nodes. And some people get their frillies in a twist about VISA Read returning a warning when it reads exactly as many characters as you asked it to read. A warning wouldn't be better, as you would still have to check both status == FALSE and code == 4 to detect it. Also, I never really work with the EOF error status, since I don't read a file until it errors out but rather until I have read its size. And if you do want to work with the EOF status there is a very easy approach: using Clear Errors.vi for error 4, you actually get a boolean status telling you whether this error was removed from the error cluster, if you need that. Otherwise just terminate the loop on the error cluster anyway, clear error 4 in all cases and move on.
-
That does take some time, as LabVIEW has to enumerate the directory contents for all files to get the size, which is the number of files in the directory.
-
Most likely because the original code originates from before LabVIEW 8.0. Back then, all LabVIEW File Read and File Write nodes had an explicit file offset input and output. When you upgrade such VIs, LabVIEW mutates them by adding explicit file offset calls before and after the File Read and File Write. It's the only safe approach, as LabVIEW can't easily know whether the original file offset handling was unnecessary because the access is fully sequential. Obviously for trivial cases like this the upgrade analyzer could be made smart enough to decide that it is not needed, but there are corner cases where this is not easily decided. Rather than trying to think up all such corner cases and making sure the analyzer never decides wrong by removing one file offset call too many, the easier thing is to simply maintain the original functionality and risk some performance loss (which is minimal in comparison to the old situation, where this offset handling was always done anyway). The "example scrubber" for that code probably cleaned it up but didn't dare to remove the file offset calls, obviously not too familiar with LabVIEW internals.
-
You can remove the Set and Get File Offset inside the loop. The LabVIEW file I/O nodes maintain a file offset internally (actually it's the underlying OS file I/O functions that do, and they advance that pointer along as you read). As long as you do purely sequential access there is no need to set the file offset explicitly. It's even so that when you open a file in anything but append mode, the file offset is automatically set to 0. Only when you do random access do you need to set the file offset explicitly. I don't expect this to save a lot of time, but why do it if it is not necessary? That would seem very strange. The Get File Size directly translates to a Windows API call on the underlying file handle. Why that would be so slow is a miracle to me.
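As a side illustration (plain C, not LabVIEW, and not the original poster's code): the OS-level behavior described above is simply that every read advances the file position, so sequential access needs no explicit seek between reads.

#include <stdio.h>

void ReadSequentially(const char *path)
{
    unsigned char chunk[4096];
    size_t n;
    FILE *fp = fopen(path, "rb");
    if (!fp)
        return;
    // No fseek() in the loop: each fread() continues where the previous one ended,
    // which is the same reason the explicit Set/Get File Offset calls are redundant above.
    while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0)
    {
        // process n bytes ...
    }
    fclose(fp);
}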
-
One obvious discrepancy: create uses a pointer-sized integer and destroy uses Adapt to Type. This results in the pointer being passed as a u64 by reference (Adapt to Type parameters are always passed by reference if they are not handles, arrays or ActiveX references). What you want to configure is Numeric, Pointer-sized Integer, Pass by Value. Yes, you want to pass it by value: the value returned from the create function is already a pointer, and destroy expects exactly this pointer.
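For illustration (hypothetical function names, not the actual library discussed here), this is the kind of create/destroy pair the advice above applies to:

// The caller never looks inside the structure; it only shuttles the pointer around.
typedef struct Session Session;

Session *CreateSession(void);           // CLFN return value: Numeric, Pointer-sized Integer
void     DestroySession(Session *ptr);  // CLFN parameter: Numeric, Pointer-sized Integer, Pass by Value

Configured this way, the same pointer-sized integer that CreateSession() returned is handed unchanged to DestroySession(); configuring the destroy parameter as Adapt to Type would instead pass the address of the LabVIEW u64 holding that value.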
-
Since you don't access the internal elements of the struct at all from LabVIEW, you can just treat it as a pointer-sized integer. In fact, since OpenSSL 1.0.0 all those structs are considered opaque to external users of the API and should never be referenced in any way other than through the published OpenSSL functions. To an external API user these contexts are meant to be simply a handle (a pointer to private data whose contents are unknown). EVP_MD_CTX_create() creates the context, so just configure it to return a pointer-sized integer. Then pass this to all other EVP functions, again as a pointer-sized integer. And of course don't forget to call the EVP_MD_CTX_free() function at the end to avoid memory leaks.
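A minimal sketch of that usage in plain C (assuming OpenSSL 1.1.0 or later, where EVP_MD_CTX_create() is an alias of EVP_MD_CTX_new()); the context is only ever handled as an opaque pointer, which is exactly how a CLFN would carry it around as a pointer-sized integer:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdLen = 0;
    const char *msg = "hello world";

    EVP_MD_CTX *ctx = EVP_MD_CTX_create();   // opaque handle, never dereferenced by the caller
    if (!ctx)
        return 1;
    if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) &&
        EVP_DigestUpdate(ctx, msg, strlen(msg)) &&
        EVP_DigestFinal_ex(ctx, md, &mdLen))
    {
        for (unsigned int i = 0; i < mdLen; i++)
            printf("%02x", md[i]);
        printf("\n");
    }
    EVP_MD_CTX_free(ctx);                    // release the context to avoid a memory leak
    return 0;
}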
-
It's essentially the same as what QueueYueue posted. And it has the same problem: it won't work at runtime. "LVClass.Open" is not available in the Runtime and Realtime environments (Library: Get Ref by Qualified Name is available but not remotely executable, and typecasting to LVClass won't work since Runtime and Realtime do not support that VI Server class). "ChildrenInMemory" is not available in Runtime and Realtime. All LVClass properties and methods are unavailable in Runtime and Realtime.
-
Actually, the Widechar functions have supported it since at least Windows 2000, but only with the special \\?\ prefix. That registry hack and application manifest is only needed so you don't have to use this prefix, so yes, porting to the Widechar functions is needed in either case to support long file paths. My library adds the special prefix itself and didn't have to go through manifests and registry settings to use the feature.
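A small sketch (my own illustration, not the library's actual code) of what adding that prefix before calling a Widechar API amounts to; the hypothetical helper assumes it receives an absolute path using backslashes:

#include <windows.h>
#include <wchar.h>

// Hypothetical helper: prepend \\?\ so CreateFileW accepts paths longer than MAX_PATH.
HANDLE OpenLongPath(const wchar_t *absolutePath)
{
    static const wchar_t prefix[] = L"\\\\?\\";
    wchar_t buf[32768];   // \\?\ paths may be up to roughly 32767 characters

    if (wcsncmp(absolutePath, prefix, 4) != 0)
    {
        wcscpy_s(buf, 32768, prefix);
        wcscat_s(buf, 32768, absolutePath);
        absolutePath = buf;
    }
    return CreateFileW(absolutePath, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}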