
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I don't quite have a working example, but the logic for allocation and deallocation is pretty much as explained by JKSH already. That is not where it would be very useful, though. What he does is simply calculate the time difference between when the VI hierarchy containing the CLFN was started and when it was terminated. Not that useful really. 😀 The usefulness is in the third function, AbortCallback(), and the actual CLFN function itself.

     // headers assumed for this example: extcode.h comes from the LabVIEW cintools directory
     #include <stdlib.h>
     #include <string.h>
     #include <time.h>
     #include "extcode.h"

     // state values used by the callbacks (assumed enum, not spelled out in the original post)
     enum { Idle, Running, Abort };

     // Our data structure to manage the asynchronous operation
     typedef struct
     {
         time_t time;
         int state;
         LStrHandle buff;
     } MyManagedStruct;

     // These are the CLFN callback functions. You could either have multiple sets of callback functions, each operating on
     // their own data structure as InstanceDataPtr for one or more functions, or one set for an entire library, using the
     // same data structure for all. In the latter case these functions will need to be a bit smarter to distinguish between
     // different functions or function sets based on extra info in the data structure, but it is a lot easier to manage,
     // since you don't have different callback functions for different CLFNs.
     MgErr LibXYZReserve(InstanceDataPtr *data)
     {
         // LabVIEW wants us to initialize our instance data pointer. If everything fits into a pointer
         // we could just use it directly; otherwise we allocate a memory buffer and assign its pointer
         // to the InstanceDataPtr.
         MyManagedStruct *myData;
         if (!*data)
         {
             // We got a NULL pointer, so allocate our struct. This should be the standard case unless the VI was run
             // before and we forgot to assign the Unreserve function, or didn't deallocate or clear the InstanceDataPtr
             // in there.
             *data = (InstanceDataPtr)malloc(sizeof(MyManagedStruct));
             if (!*data)
                 return mFullErr;
             memset(*data, 0, sizeof(MyManagedStruct));
         }
         myData = (MyManagedStruct*)*data;
         myData->time = time(NULL);
         myData->state = Idle;
         return noErr;
     }

     MgErr LibXYZUnreserve(InstanceDataPtr *data)
     {
         // LabVIEW wants us to deallocate an instance data pointer
         if (*data)
         {
             MyManagedStruct *myData = (MyManagedStruct*)*data;
             // We could check if there is still something active, signal it to abort and wait for it
             // to have aborted, but it's better to do that in the Abort callback.
             // Deallocate all resources.
             if (myData->buff)
                 DSDisposeHandle(myData->buff);
             // Deallocate our memory buffer and assign NULL to the InstanceDataPtr
             free(*data);
             *data = NULL;
         }
         return noErr;
     }

     MgErr LibXYZAbort(InstanceDataPtr *data)
     {
         // LabVIEW wants us to abort a pending operation
         if (*data)
         {
             MyManagedStruct *myData = (MyManagedStruct*)*data;
             // In a real application we probably want to first check that there is actually something to abort and,
             // if so, signal an abort and then wait for the function to actually have aborted.
             // This here is very simple and not fully thread safe. Better would be to use an Event or Notifier,
             // or at least atomic memory access functions, semaphores or similar.
             myData->state = Abort;
         }
         return noErr;
     }

     // This is the actual function that is called by the Call Library Node
     MgErr LibXYZBlockingFunc1(........, InstanceDataPtr *data)
     {
         if (*data)
         {
             MyManagedStruct *myData = (MyManagedStruct*)*data;
             myData->state = Running;
             // keep looping until the operation finishes or the state is set to Abort
             while (myData->state != Abort)
             {
                 if (LongOperationFinished(myData))
                     break;
             }
             myData->state = Idle;
         }
         else
         {
             // Shouldn't happen, but maybe we can operate synchronously and risk locking up the
             // machine when the user tries to abort us.
         }
         return noErr;
     }

     When you now configure a CLFN, you can assign an extra parameter as InstanceDataPtr. This terminal will be greyed out as you cannot connect anything to it on the diagram, but LabVIEW will pass it the InstanceDataPtr that you have created in the ReserveCallback() function configured for that CLFN. Each CLFN on a diagram has its own InstanceDataPtr that is only valid for that specific CLFN. And if your VI is set to reentrant, LabVIEW will maintain an InstanceDataPtr per CLFN per reentrant instance!
  2. Actually, arrays (of scalars) are normally allocated as one block. And while LabVIEW internally does indeed use subarrays, there is also a function that will convert subarrays to normal arrays whenever a function doesn't like subarrays. Basically, functions need to tell LabVIEW whether they can deal with subarrays, and unless they explicitly say that they can for an array parameter, LabVIEW will simply convert it to a full array for them before passing it to the function. And the Call Library Node is a function that explicitly does not want subarray parameters. Theoretically it might be possible, but the subarray data structure is more complex than the one you show in your post. The interface to subarrays is not documented for external tools in LabVIEW and is never passed to any external function, interface or data client. It is not trivial to work with, and if LabVIEW allowed it at the Call Library Node interface, EVERY piece of external code would need to be prepared for a subarray to show up, or there would have to be some involved scheme for letting a DLL tell LabVIEW that it can accept subarrays for parameters x, z and s, but not for a, b and c. Totally unmanageable!!! 🤮 So no, a Call Library Node will always receive a full array. If necessary, LabVIEW will create one!
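     To illustrate what that guarantee means on the C side, here is a minimal sketch of how a full 1D array of DBL arrives in a DLL when the CLFN parameter is configured as an array handle. The struct layout follows the usual cintools conventions; the function name sum_array is just an example, not anything from the post above.

         /* LabVIEW cintools header: defines int32, MgErr and friends */
         #include "extcode.h"

         /* A 1D array of DBL as seen by the DLL: one contiguous block, never a subarray view */
         typedef struct
         {
             int32 dimSize;      /* number of elements */
             double elm[1];      /* element data follows directly in the same block */
         } DblArrayRec, **DblArrayHdl;

         double sum_array(DblArrayHdl arr)
         {
             double total = 0.0;
             if (arr && *arr)
             {
                 int32 i;
                 for (i = 0; i < (*arr)->dimSize; i++)
                     total += (*arr)->elm[i];
             }
             return total;
         }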
  3. I reported all of them last week. I did not notice at first either, but in the last post the link somehow caught my eye, and at first I thought it was a special name for quote marks, wondering what that word would mean. 😀 Google quickly taught me that it is the name of some drug, and from there it was obvious. Then I looked at the other three before that and saw the same pattern, together with a pretty meaningless message.
  4. You really should learn a little C programming, because that is what is required when trying to call DLLs. Or hire someone to make the LabVIEW bindings for you! Currently you are poking around with a pole in a heap of hay to find the needles hidden in there, but you have chosen not only to blindfold yourself to make it more "interesting" but also to tie your hands behind your back. DLL_START is the function pointer declaration and basically documents the parameters and return value the function takes. This is almost what you need for the Import Library Wizard, but not quite. A function pointer declaration is only similar to a function declaration, not the same. The Import Library Wizard needs a function declaration, and it needs to use the same name as what the DLL exports, otherwise the wizard can't match the declaration to a particular function. In your example you need to find out which function pointer declaration is used for which function, and then translate it to a function declaration. So, you have determined that the DLL_START declaration is used for the function pointer for StartGenericDevice():

         typedef int (*DLL_START) ( DWORD *dwSamplerate );

     will then have to be turned into the following function declaration:

         int StartGenericDevice( DWORD *dwSamplerate );

     With this the Import Library Wizard has a function prototype to use for the function exported from the DLL. Now you need to do the same for the other functions in the DLL (see the sketch below for a second example).
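     For example, assuming the header also contains a DLL_STOP function pointer type that the demo program uses for a StopGenericDevice() export (hypothetical names, check your own header), the same translation would look like this:

         /* function pointer type as found in the header; it only documents the signature */
         typedef int (*DLL_STOP) ( void );

         /* function declaration the Import Library Wizard needs, using the exported name */
         int StopGenericDevice( void );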
  5. Well, if you have the source code for the GenericDevice_DLL_DEMODlg program you may be able to verify which function pointer is assigned to which DLL function. Without that you are simply assuming things, and there is "ass" in the word assuming, which is where assumptions usually bite you! 😀
  6. That's because GenericDeviceInterface.h doesn't declare the functions. And the other two DEMO header files don't really do so either; they are rather header files for an application that uses this DLL (and declare C++ classes, which the Import Library Wizard can't do anything with). There are some function pointer declarations in GenericDevice_DLL_DEMODlg.h that the accompanying sample code most likely uses to dynamically import the functions from the DLL on initialization, but the naming is only partly similar to the function names the DLL seems to export, so it is a bit tricky. There is no function pointer declaration for the GetRequestKey() export, but there are two function pointers for DLL_TEST and DLL_ShowData functions for which the DLL doesn't seem to export anything similar.
  7. Sometimes you may be forced to develop in 64-bit (image acquisition, large data processing or similar requirements) but also need to interface to a driver whose manufacturer never made the move to 64-bit and possibly never will. The opposite may also happen: you develop in 32-bit because the majority of your drivers are only available in 32-bit, but one specific driver is only available in 64-bit. If the device protocol is documented and goes over a standard bus like GPIB, serial or TCP/IP, I would always recommend implementing the driver for at least the oddball device in LabVIEW instead of trying to mix and match bitnesses. If that is not an option, the only feasible solution is to create a separate executable and communicate with it through some IPC (RPC) mechanism; a minimal sketch of that pattern follows below.
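     As a minimal sketch of that pattern (names like legacy32.dll and LegacyMeasure are hypothetical), the helper is compiled as a 32-bit executable so it can load the legacy 32-bit DLL; the 64-bit LabVIEW application launches it and talks to it over stdin/stdout (for example through System Exec.vi) or any other IPC channel:

         #include <stdio.h>
         #include <string.h>
         #include <windows.h>

         /* signature of the hypothetical legacy export */
         typedef int (__cdecl *LegacyMeasureFunc)(double *value);

         int main(void)
         {
             HMODULE lib = LoadLibraryA("legacy32.dll");
             LegacyMeasureFunc measure;
             char line[64];

             if (!lib)
                 return 1;
             measure = (LegacyMeasureFunc)GetProcAddress(lib, "LegacyMeasure");
             if (!measure)
                 return 2;

             /* very small text protocol: each MEASURE request returns "status;value" */
             while (fgets(line, sizeof(line), stdin))
             {
                 if (!strncmp(line, "MEASURE", 7))
                 {
                     double value = 0.0;
                     int status = measure(&value);
                     printf("%d;%.15g\n", status, value);
                     fflush(stdout);
                 }
                 else if (!strncmp(line, "QUIT", 4))
                     break;
             }
             FreeLibrary(lib);
             return 0;
         }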
  8. Sometimes you don't really have a choice. But I agree: if at all possible, don't try to do it! In my case it is usually about my own DLLs/shared libraries, so this particular problem doesn't really present itself for me; I just recompile the DLL/shared library in whatever bitness is needed. Tidbit: While there is indeed thunking, and Windows internally uses it in the SysWOW64 layer that makes the 64-bit kernel API available to 32-bit applications, this mechanism was very carefully shielded by Microsoft so that it is not available to anything outside of that layer and therefore provides no thunking facilities for user code between 32-bit and 64-bit code. It generally also only works for 32-bit code calling into 64-bit code, not the other way around. I suppose Microsoft wanted to avoid repeating the situation from the transition from the segmented 16-bit Windows memory model to the 32-bit flat memory model: back then they documented how thunking could be done, and everybody started developing all kinds of mechanisms in weird to horrible assembly code to do just that. There was a lot of low-level assembly involved, it had many restrictions and difficulties, and once almost everybody had moved to 32-bit, really everybody tried to forget this episode as quickly as possible. So when going to the 64-bit model they carefully avoided this mistake and simply stated from the start that there was no in-process 32-bit to 64-bit translation layer at all (which is technically incorrect, since SysWOW64 is just that, but you can't use its services from application code other than indirectly through the official Windows APIs). The method used here, executing the different-bitness code in a separate process and communicating with it through network communication (or possibly some other inter-process communication method), is not really thunking but rather out-of-process invocation. There is no officially sanctioned way of thunking between 32-bit and 64-bit code, although I'm pretty sure that with enough determination, time and grey matter, people have developed their own thunking solutions in assembly. But it would require deep study of the Intel processor documentation about how 32-bit and 64-bit code execution can interact, and it would probably result in individual assembly thunking wrappers for every single function that you want to call. Definitely not something most people could or would want to do. And to make matters worse, you could never be sure that there isn't some CPU model that does something just a little bit differently from how you interpreted the specification and fails catastrophically on your assembly thunk.
  9. Error handling is always a heated discussion topic. You could argue the same about timeout errors on network and VISA nodes. And some people get their frillies in a twist about the VISA Read returning a warning when it reads exactly as many characters as you told it to read. A warning wouldn't be better, as you would still have to check both status==FALSE and code==4 to detect it. Also, I never really work with the EOF error status, as I don't read a file until it errors out but until I reach its size. And if you do want to work with the EOF status there is a very easy option: using Clear Errors.vi for error 4, you actually get a boolean status telling you whether that error was removed from the error cluster, if you need it. Otherwise just terminate the loop on the error cluster anyway, clear error 4 in all cases and move on.
  10. That does take some time, as LabVIEW has to enumerate the entire directory contents to get the size, which is the number of files in the directory.
  11. Most likely because the original code originates from before LabVIEW 8.0. Back then, all LabVIEW Read and Write nodes had explicit file offset inputs and outputs. When you upgrade such VIs, LabVIEW mutates them by adding explicit file offset calls before and after the File Read and File Write. It's the only safe way, as LabVIEW can't easily know whether the original file offset handling was unnecessary because the access was fully sequential. Obviously, for trivial cases like this the analyzer could be made smart enough to decide that it is not needed, but there are corner cases where this is not easily decided. Rather than trying to think up all such corner cases and making sure the analyzer won't decide wrongly by removing one file offset call too many, the easier thing is to simply maintain the original functionality and risk some performance loss (which is minimal in comparison to the old situation, where this offset handling was always done anyway). The "example scrubber" for that code probably cleaned it up but didn't dare to remove the file offset calls, obviously not being too familiar with LabVIEW internals.
  12. You can remove the Set and Get File Offset inside the loop. The LabVIEW file I/O nodes maintain a file offset internally (actually it's the underlying OS file I/O functions that do, and they advance that pointer as you read). As long as you do purely sequential access there is no need to set the file offset explicitly. It's even so that when you open a file in anything but append mode, the file offset will automatically be set to 0. Only when you do random access do you need to set the file offset explicitly. I don't expect this to save a lot of time, but why do it if it is not necessary? The slow Get File Size does seem very strange, though: it translates directly to a Windows API call on the underlying file handle, and why that would be so slow is a mystery to me.
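     The same point in C, for reference: the OS advances the file position with every read, so purely sequential access never needs an explicit seek. This sketch just copies a file block by block and is not related to any code in the thread.

         #include <stdio.h>

         /* copy a file sequentially; no fseek()/ftell() needed, each fread() continues
            where the previous one ended */
         long copy_sequential(FILE *in, FILE *out)
         {
             char buffer[65536];
             size_t n;
             long total = 0;
             while ((n = fread(buffer, 1, sizeof(buffer), in)) > 0)
             {
                 fwrite(buffer, 1, n, out);
                 total += (long)n;
             }
             return total;
         }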
  13. One obvious discrepancy: create uses a pointer-sized integer and destroy uses Adapt to Type. This will result in the pointer being passed as a u64 by reference (Adapt to Type parameters are always passed by reference unless they are handles, arrays or ActiveX references). What you want to configure is Numeric, Pointer-sized Integer, Pass by Value. Yes, you want to pass it by value: the value returned from the create function is already a pointer, and destroy expects exactly this pointer.
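     The underlying C prototypes typically look something like this (hypothetical names, the actual library will differ), which is why the pointer travels through LabVIEW as a plain pointer-sized value:

         typedef struct InternalState InternalState;   /* opaque to the caller */

         /* CLFN return value: Numeric, Pointer-sized Integer */
         InternalState *lib_create(void);

         /* CLFN parameter: Numeric, Pointer-sized Integer, Pass by Value */
         void lib_destroy(InternalState *state);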
  14. Since you don't access the internal elements of the struct at all from LabVIEW, you can just treat it as a pointer-sized integer. In fact, since OpenSSL 1.0.0 all those structs are considered opaque to external users of the API and should never be accessed in any way other than through published OpenSSL functions. To an external API user these contexts are meant to be simply a handle (a pointer to private data whose contents are unknown). EVP_MD_CTX_create() creates the context, so just configure it to return a pointer-sized integer. Then pass this value to all other EVP functions, again as a pointer-sized integer. And of course don't forget to call the EVP_MD_CTX_free() function at the end to avoid memory leaks.
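     For reference, this is roughly how the same lifecycle looks in C; every EVP_MD_CTX* below is what you would configure as a pointer-sized integer in the CLFN. Note that OpenSSL 1.0.x calls the pair EVP_MD_CTX_create()/EVP_MD_CTX_destroy(), while 1.1.0 and later renamed them to EVP_MD_CTX_new()/EVP_MD_CTX_free(); this sketch uses the newer names.

         #include <openssl/evp.h>

         int sha256_digest(const unsigned char *data, size_t len,
                           unsigned char *md, unsigned int *mdLen)
         {
             EVP_MD_CTX *ctx = EVP_MD_CTX_new();   /* opaque context, treat as a handle */
             int ok = 0;
             if (ctx)
             {
                 ok = EVP_DigestInit_ex(ctx, EVP_sha256(), NULL)
                   && EVP_DigestUpdate(ctx, data, len)
                   && EVP_DigestFinal_ex(ctx, md, mdLen);
                 EVP_MD_CTX_free(ctx);             /* always release the handle */
             }
             return ok;
         }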
  15. It's essentially the same as what QueueYueue posted, and it has the same problem: it won't work at runtime. "LVClass.Open" is not available in the Runtime and Realtime environments ("Library: Get Ref by Qualified Name" is available but not remotely executable, and typecasting to LVClass won't work since the Runtime and Realtime do not support that VI Server class). "ChildrenInMemory" is not available in Runtime and Realtime either. In fact, all LVClass properties and methods are unavailable in Runtime and Realtime.
  16. Actually, the Widechar functions have supported it since at least Windows 2000, but only with the special prefix. The registry hack and application manifest are needed to avoid having to use this prefix, so yes, porting to the Widechar functions is needed in either case to support long file paths. My library adds the special prefix itself and didn't have to go through manifests and registry settings to use the feature.
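     The "special prefix" is the \\?\ sequence for the Unicode (W) file APIs. A rough sketch of what such a wrapper does (the helper name and fixed buffer size are just for illustration, real code would size the buffer dynamically):

         #include <windows.h>
         #include <wchar.h>

         /* absolutePath is assumed to be an absolute local path such as
            L"C:\\some\\very\\long\\path\\file.dat"; UNC paths need the \\?\UNC\ form */
         HANDLE open_long_path(const wchar_t *absolutePath)
         {
             wchar_t buffer[4096];
             /* the \\?\ prefix disables path normalization and lifts the MAX_PATH limit */
             swprintf(buffer, 4096, L"\\\\?\\%ls", absolutePath);
             return CreateFileW(buffer, GENERIC_READ, FILE_SHARE_READ, NULL,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
         }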
  17. As Shaun already more or less explained, it is a multilayered problem.
     1) The LabVIEW path control internally has the following limitations: a path element (a single directory level or file name) can be at most 255 characters, and the path can have at most 65535 levels. The only practical limit that is even remotely reachable is the 255-character limit per path level, but I think we all agree that if you get path level names that long you probably have other problems to tackle first 😀 (such as getting out of the straitjacket they surely have put you in already).
     2) Traditionally, Windows only supported long path names when you used the Widechar file I/O functions, and only when you prepended the path string with a special character sequence. LabVIEW's lack of native support for Unicode made that basically impossible. Long path names are limited to roughly 32,000 characters.
     3) Somewhere along the line of Windows versions (7, 8?) the requirement to prepend the special character sequence seems to have been relaxed.
     4) Since Windows 10 you can enable a registry setting that also allows the ANSI functions to support long path names.
     So while there is now theoretically a way to support long path names in LabVIEW on Windows 10, it is hampered by a tiny little snag. The path conversion routines between LabVIEW paths and native paths never had to deal with such names, since Windows until recently didn't support them for the ANSI functions, and there are assumptions in there that paths can't exceed MAX_PATH characters. This is simply for performance: with a fixed maximum size you don't need to preflight the path to determine how large a dynamic buffer to allocate, which you would then have to deallocate properly afterwards. Instead you simply declare a buffer on the stack, which is basically nothing more than a constant offset added to the stack pointer, and all is well. Very fast and very limiting! This is where it currently still goes wrong. Reviewing the path manager code paths to make them all use dynamically allocated buffers would be possible but quite tedious. And it doesn't really solve the problem fully, since you still need to change an obscure registry setting to make it work on a specific computer. It also doesn't solve another big problem, that of localized path names: any character outside the standard 7-bit ASCII range will NOT transfer well between systems with different locales. To solve this, LabVIEW would need some more involved path manager changes. First, the path needs to support Unicode. That is actually doable, since the Path data type is a private data type: how LabVIEW stores path data inside the handle is completely private, and it could easily change that format to use whatever the preferred native Unicode character type is for the current platform. On Windows this would be a 16-bit WCHAR; on other platforms it would be either a wchar or a UTF-8 char. It wouldn't really matter, since the only other relevant platforms are all Linux or Mac BSD based and use UTF-8 for file names by default. When the path needs to be externalized (flattened, in LabVIEW speak), it would always be converted to and from UTF-8. LabVIEW could then convert its Path to whatever the native path type is (a WCHAR string on Windows, a UTF-8 string on other platforms), and it would support long path names and international paths in one go.
     The UTF-8 format of externalized paths wouldn't be strictly compatible with current paths, but for all practical purposes it would not really be worse than it is now. The only special case would be saving VIs for previous versions, where paths would have to change from UTF-8 to ASCII at a certain version. I did attempt something like that for the OpenG ZIP library, but it is hacky and error prone since I can't really go and change the LabVIEW internal data types, so I had to define my own data type representing a Unicode-capable path and then create a function for every single file function that I wanted to use with this new path, basically rewriting a large part of the LabVIEW Path and File Manager components. It's ugly and really took away most of my motivation to keep working on that package. I have another private library that I used in a grey past to create the LLB Viewer File Explorer extension before NI made one themselves, and I have modified that to support this type of file path. It works quite well in fact, but it is all very legacy by now. Still, it already had long file name and locale-independent file name support some 15 years ago, with an API that looked almost exactly like the LabVIEW File and Path Managers.
  18. We usually use discrete ones and just use a few digital I/O ports in our E-cabinet for them. Which digital I/O to use depends on the hardware in the E-cabinet. That could be cRIO digital I/O, Beckhoff PLC I/O or Beckhoff bus coupler I/O, usually accessed through the ADS protocol over Ethernet. USB-controlled devices don't work well for non-Windows controllers at all, since you always run into trouble getting drivers.
  19. My real-life experience definitely does not support this statement. I have seen handles being returned that are bigger than 0xFFFFFFFF in value and that crash the process when treated as 32-bit values. So while this may be true for some handles, it certainly isn't for all Windows handles. And yes, that was about Windows handles, not some third-party library declaring void* pointers as handles that are in reality pointers to a struct (in which case not treating them as pointer-sized integers certainly and positively will cause problems). I do believe that some Windows handles are similar to LabVIEW magic cookies, basically an index into an object-manager and object-class-specific private data list, but there are certainly various different approaches, and some handles seem to be pointers in nature. For instance, HINSTANCE or HMODULE is basically the virtual address at which the module was loaded into memory, and it is sometimes used to directly access resource lists and other things in a loaded PE module (EXE or DLL) through so-called RVA (Relative Virtual Address) offsets into the module image data. It's not a neat way of doing things and one should rather use functions from the debug library, but sometimes that is not practical (and if you want to program not-so-official things it might sometimes be impossible). Of course, doing it all by hand makes it easy to miss some of the complications, so that it will break with non-standard linked module files or with extensions of the PE specification in new Windows versions. Similar things apply to some COM objects like HIMAGELIST and others. They seem to be basically the actual COM object pointer containing the virtual table of COM methods directly, not some magic cookie implementation that references the COM object pointer. All the ImageList_xxxxx functions in the CommCtrl library are basically just compiled C wrappers that call the corresponding virtual table method on the COM object. And while COM is object oriented, its ABI is defined in such a strict way that it can easily be called from standard C code too, if you have the correct headers for the COM object class (see the sketch below). It's even possible to implement COM classes purely in C, as has been done for a long time by the Wine project, which had a policy that all code needed to be standard C in order to be compilable on as many different platforms as possible. They relaxed that requirement in recent years, as some of the macOS APIs can't easily be called in ways other than Objective-C, Apple's flavor of object-oriented C programming, which originally was an Objective-C preprocessor that put everything through a standard C compiler anyway.
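     A tiny sketch of what such a C wrapper boils down to, using only IUnknown, which every COM object implements: compiled as C rather than C++, each COM interface exposes an lpVtbl pointer to its function table and you call through it explicitly.

         #include <windows.h>
         #include <unknwn.h>

         static ULONG release_object(IUnknown *obj)
         {
             /* explicit call through the virtual table; in C++ this would simply be obj->Release() */
             return obj->lpVtbl->Release(obj);
         }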
  20. It definitely needs some love to work in LabVIEW 64-bit. This library was developed around LabVIEW 6i, and that is loooooong before the Call Library Node could support pointer-sized integers (LabVIEW 2009), which all the handles in there need to be in order for it to work in 64-bit LabVIEW.
  21. No, it's not. A .Net DLL is not supposed to change location in built applications. .Net itself really knows only two default locations where it will search for assemblies:
     - the directory in which the current executable file is located
     - the GAC
     Anything else is extra, such as a non-default AppDomain with its custom ApplicationBase. LabVIEW adds to this the option to reference assemblies by full path (which the application builder adjusts to the location you configured the assembly to be installed to), but that path is embedded in the compiled VI and not accessible, just as you can't change the path of subVIs in a compiled executable either.
  22. Not directly. But I solved that in the past by creating VIs that contain the .Net (or ActiveX) nodes and then calling those VIs dynamically through VI Server: a sort of plugin system, with the dynamically called VIs containing the .Net or ActiveX nodes.
  23. Debugging pictures is unfortunately not possible. And without the hardware I couldn't really do much either. I cannot comment on the tests your IT department did, but they likely understand even less of the problem than you do and can only run some standard tests that may or may not point out the problem. There is a lot of information in this thread about things to check; I can't really give more ideas. You will have to read through everything and test it on your system to see if you get any results. Debugging network problems is a highly specialized skill that requires understanding a lot of different things, often a lot of time, and the hardware at hand, to go through the many tests and trials that will hopefully end up with some indication of where the problem could be, and then to find the solution for it. And yes, it is hard. Networking has become ubiquitous to the point that everybody simply expects it to be present and working. In reality the techniques involved are highly complex, and even simple misconfigurations can make it fail. TCP/IP, with its many fallback and fail-safe mechanisms, sometimes makes this even more complex, since it doesn't just fail flat out but still sort of works, only with much degraded performance.
  24. Buaaah, not fair. I'm still only a Rookie! 😂
  25. Very obviously, and the start date seems to be July 31 somehow (more likely whenever you first logged in after the forum update). That's at least what most of my ranks have as the granting date. Funny to see that I managed to earn the badges for my 10th and my 500th LAVA post, as well as the one-month and one-year anniversaries of joining LAVA, all on that same day! Apparently the system doesn't know about a 10-year anniversary, but who knows, maybe 8 years from now it will grant me the 25-year anniversary. 😃