
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I did, using a variant of the factory pattern. It was an instrument driver for a USB serial device that could come as two different device types. I implemented the low level driver for each device as a class derived from the main class, which was the actual interface used by the user of the driver. Depending on a selection by the user, one of the two low level drivers was instantiated and used for the actual implementation of the device communication. It worked fine, except that you need to do some path magic to allow execution on the RT target. That is mostly the same as what you need to do for execution in a built application, but there is a difference between the paths when you deploy the program directly from the project (during debugging, for instance) and when you build an rtexe application and execute that.
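
     As a purely illustrative sketch of the pattern described above (this is not the actual LabVIEW code, and all class and function names are made up), the same idea expressed in C++ could look roughly like this:

     #include <memory>
     #include <stdexcept>
     #include <string>

     // Abstract interface that the user of the driver works against.
     class DeviceDriver {
     public:
         virtual ~DeviceDriver() = default;
         virtual std::string Query(const std::string &command) = 0;
     };

     // One concrete low level implementation per device type.
     class DeviceTypeA : public DeviceDriver {
     public:
         std::string Query(const std::string &command) override {
             return "A:" + command;   // real code would talk to the USB serial port here
         }
     };

     class DeviceTypeB : public DeviceDriver {
     public:
         std::string Query(const std::string &command) override {
             return "B:" + command;
         }
     };

     // Factory: the user selects a device type, the matching low level driver is instantiated.
     std::unique_ptr<DeviceDriver> CreateDriver(const std::string &type) {
         if (type == "A") return std::make_unique<DeviceTypeA>();
         if (type == "B") return std::make_unique<DeviceTypeB>();
         throw std::invalid_argument("unknown device type: " + type);
     }
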
  2. That's not a good idea!! The new 64 bit shared library has various changes that will not work without an updated ZIP VI library and support files. The VIs as they are in the SourceForge repository are the updated ones. A new package needs to be built, but I have delayed that since there are still some issues with additional functionality I wanted to include. Here is an early beta version of a new package which adds support for 64 bit Windows. The MacOSX support hasn't been added yet, so that part won't work at all. What it does contain is an installer for support of the NI realtime targets. This RT installer will, however, only get installed if you install the package into 32 bit LabVIEW for Windows, since that is the only version which supports realtime development so far. Once it is installed you can go into MAX and select to install extra software for your RT target. Then select custom install, and in there should be an OpenG ZIP Toolkit entry which will make sure the necessary shared library gets installed on your target. For deflate and inflate alone, replacing the shared library may indeed be enough, but trying to run any of the other ZIP library functions has a very big chance of crashing your system if you mix the new shared library with the old VIs. That package was released in 2011; 64 bit LabVIEW already existed then (since LabVIEW 2009), but VIPM didn't know about 64 bit LabVIEW yet and one could not even convince VIPM to make a distinction there. Also, the updated package was mainly just a repackaging of the older version from 2009 to support the new VI palette organization, and nothing else. oglib_lvzip-4.1.0-b3.ogp
  3. Windows 7, but even explicitly changing the security settings for the WMI root\\CIMV2 tree wouldn't allow me to do a WMIService->ExecQuery(), no matter what I try to query, so there might be something else going wrong despite the error code 80041003 indicating some access rights issue. I'll try to debug it a bit more, but in the meantime I have found code that uses various Win32 APIs to query this information more quickly and more reliably, as WMI can sometimes misreport this.
  4. And I tried to implement the WMI calls through COM in a DLL to be called from LabVIEW. No such luck. It turns out you can't CoInitialize() the COM system in the DLL since LabVIEW has done that already. And you can't initialize the COM system with extra security privileges either, since LabVIEW has done this initialization already (most likely implicitly on startup, with the lowest possible privileges) and COM does not support readjusting the security privileges later on. Without those security privileges, querying any WMI database info then fails.
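
     As a rough C++ sketch of where this goes wrong (standard WMI boilerplate, not code from this post): a DLL loaded into LabVIEW will typically find that CoInitializeSecurity() fails with RPC_E_TOO_LATE because the hosting process has already set up COM security, and the subsequent query can then fail with an access-denied style error such as 0x80041003 (WBEM_E_ACCESS_DENIED):

     #include <windows.h>
     #include <wbemidl.h>
     #include <comdef.h>
     #pragma comment(lib, "wbemuuid.lib")

     HRESULT QueryOsCaption()
     {
         // Inside a DLL called from LabVIEW this typically returns S_FALSE or
         // RPC_E_CHANGED_MODE because the hosting process already initialized COM.
         HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);

         // This is the call that cannot be fixed from within the DLL: once the host
         // process has set up COM security, it fails with RPC_E_TOO_LATE.
         hr = CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                                   RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

         IWbemLocator *locator = NULL;
         hr = CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER,
                               IID_IWbemLocator, (void **)&locator);
         if (FAILED(hr))
             return hr;

         IWbemServices *services = NULL;
         hr = locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, NULL, 0, NULL, NULL, &services);
         if (SUCCEEDED(hr))
         {
             IEnumWbemClassObject *results = NULL;
             // Without the proper security setup this is where 0x80041003
             // (WBEM_E_ACCESS_DENIED) tends to show up.
             hr = services->ExecQuery(_bstr_t(L"WQL"),
                                      _bstr_t(L"SELECT Caption FROM Win32_OperatingSystem"),
                                      WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                                      NULL, &results);
             if (results) results->Release();
             services->Release();
         }
         locator->Release();
         CoUninitialize();
         return hr;
     }
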
  5. Since that is an abstract class you can't instantiate it. Abstract classes are similar to interfaces. They describe an object interface and implement part of the object, but leave out at least one method to be implemented by the end user. Even in a real .Net language you would have to create an actual implementation of this class in your code that derives from this abstract class and implements all its abstract methods. LabVIEW can interface with .Net but cannot derive from .Net classes itself. As such you can't pull this off without some external help. You will have to create a .Net assembly in VB, C# or any other .Net development platform that you are familiar with. This assembly has to implement your specific MyConditionExpression class that derives from ConditionExpression; then you can instantiate that from your assembly and pass it to this method.
  6. Your code should ALWAYS call the DLL which has the same bitness your application was compiled for. So if you used LabVIEW 32 bit to create your application (or run your VI in LabVIEW 32 bit) you MUST reference the 32 bit DLL, independent of whether you run on 32 bit or 64 bit Windows. A 32 bit process CANNOT load a 64 bit DLL (for code execution, that is, but that is all you care about!), nor can a 64 bit process load 32 bit DLLs. The problem is not a 32 bit DLL living somewhere in a random subdirectory, but a 32 bit DLL living in a directory that a 64 bit system considers reserved for 64 bit code, such as "C:\Program Files\..." or "C:\Windows\System32", or a 64 bit DLL living in a location that the 64 bit system considers reserved for 32 bit code, such as "C:\Program Files (x86)\.." or "C:\Windows\SysWOW64". If you have a 32 bit system those file system redirections do not apply, but any 64 bit code file anywhere will be considered an invalid executable image file.
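
     If you are unsure about the bitness of a DLL, you can check it before attempting to load it by reading its PE header. This is a generic sketch, not something LabVIEW does for you:

     #include <windows.h>

     // Returns IMAGE_FILE_MACHINE_I386 (0x014c) for a 32 bit DLL,
     // IMAGE_FILE_MACHINE_AMD64 (0x8664) for a 64 bit DLL, or
     // IMAGE_FILE_MACHINE_UNKNOWN if the file can't be parsed.
     WORD DllMachineType(const wchar_t *path)
     {
         WORD machine = IMAGE_FILE_MACHINE_UNKNOWN;
         HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
         if (file != INVALID_HANDLE_VALUE)
         {
             IMAGE_DOS_HEADER dos;
             DWORD sig, read = 0;
             IMAGE_FILE_HEADER hdr;
             if (ReadFile(file, &dos, sizeof(dos), &read, NULL) && dos.e_magic == IMAGE_DOS_SIGNATURE &&
                 SetFilePointer(file, dos.e_lfanew, NULL, FILE_BEGIN) != INVALID_SET_FILE_POINTER &&
                 ReadFile(file, &sig, sizeof(sig), &read, NULL) && sig == IMAGE_NT_SIGNATURE &&
                 ReadFile(file, &hdr, sizeof(hdr), &read, NULL))
             {
                 machine = hdr.Machine;
             }
             CloseHandle(file);
         }
         return machine;
     }
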
  7. Shaun already told you that this is a 32 bit path. A 64 bit application will attempt to load "C:\program files\signal hound\spike\api\..." no matter what you do. As to error codes: LabVIEW does sometimes second-guess Windows error codes and attempts to determine a more suitable code, also when loading shared libraries, but that has to end at some point. Otherwise they could start reimplementing whole parts of Windows, and that is actually easier than trying to second-guess system API error codes, which can also vary between OS versions and even depend on which system extensions are or are not installed.

     There is a possibility to disable file system redirection temporarily through Wow64DisableWow64FsRedirection(), which LabVIEW might do when loading an explicit DLL defined in the Library Path of the CLN configuration (at least as a secondary attempt if the normal load fails). However, this is a thread-global setting. Loading of DLLs that are explicitly named in the configuration dialog happens at load time of the VI, which is a highly serialized operation with all kinds of protection locks in place inside LabVIEW to make sure there won't be any race conditions as LabVIEW updates its internal bookkeeping tables about what resources it has loaded, etc. The loading of DLLs that are passed as a path to the CLN has to happen at runtime and executes in the context of the thread that executes the CLN call and other parts of the diagram if the CLN is set to execute reentrantly; otherwise it has to compete with the GUI update, which also does all kinds of things that could get badly upset with file system redirection disabled. So while it can be safe to temporarily disable file system redirection during VI load time, this is a much more complicated issue at runtime, and the safe thing here is simply not to do it. It is even less safe to wrap your CLN with additional CLN calls to Wow64DisableWow64FsRedirection()/Wow64RevertWow64FsRedirection(), since you would either have to execute all 3 CLNs in the UI thread to make sure they operate as intended, and that could influence other things in LabVIEW that happen in the UI thread between calls to your CLNs, or set the CLN calls to run reentrantly, in which case they will most likely not run in the same thread at all, as LabVIEW executes diagram clumps in arbitrary threads within an execution system thread pool.

     There are only two ways to make this still work if you need to disable file system redirection. One is to write a DLL that your CLN calls, which first calls Wow64DisableWow64FsRedirection(), then attempts to load the DLL with LoadLibrary(), then calls Wow64RevertWow64FsRedirection(), and then calls the function. The other is to create a subVI with subroutine execution priority and pack the Wow64DisableWow64FsRedirection() CLN, the actual CLN call and the Wow64RevertWow64FsRedirection() CLN into it. Subroutine VIs are guaranteed to be executed in one go, without any thread switching during the execution of the entire subroutine diagram.

     More precisely though, your API installation is broken. Installing 64 bit DLLs inside "Program Files (x86)" is very BAD, as is installing 32 bit DLLs inside "Program Files" or "Windows\System32" on a 64 bit system.
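
     A minimal sketch of the first approach mentioned above, the small wrapper DLL that disables the redirection only around the LoadLibrary() call (the exported function, DLL path handling and target function name are placeholders):

     #include <windows.h>

     // Hypothetical signature of the function we ultimately want to call.
     typedef int (__cdecl *SomeApiFunc)(int);

     extern "C" __declspec(dllexport) int CallSomeApi(const wchar_t *dllPath, int value)
     {
         PVOID oldValue = NULL;
         // Disable the WOW64 file system redirection for this thread only around
         // the LoadLibraryW() call, then restore it immediately afterwards.
         BOOL disabled = Wow64DisableWow64FsRedirection(&oldValue);
         HMODULE lib = LoadLibraryW(dllPath);
         if (disabled)
             Wow64RevertWow64FsRedirection(oldValue);
         if (!lib)
             return -1;

         int result = -1;
         SomeApiFunc func = (SomeApiFunc)GetProcAddress(lib, "SomeApiFunc");
         if (func)
             result = func(value);
         FreeLibrary(lib);
         return result;
     }
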
  8. Where would the paths be located? There exists something like file system redirection in Windows Vista and higher, which will redirect certain paths like C:\Windows\System32 or C:\Program Files and C:\Program Files (x86) to whatever the kernel considers the appropriate location for the current application based on its bitness. So even though you ask for C:\Windows\System32 in a 32 bit process, it will be redirected to C:\Windows\SysWOW64 when run on a 64 bit OS! It's possible that LabVIEW attempts to be smart when trying to reference a DLL that is defined in the Call Library Node, but it won't second-guess Windows' decision when you define the dynamic path.
  9. Well, lots of questions and some assumptions. I created the cdf files for the OpenG ZIP library by hand from looking at other cdf files. Basically, if you want to do something similar you could take the files from the OpenG ZIP library and change the GUID in there to some self-generated GUID. This is the identifier for your package and needs to be unique, so you can not use that of another package or you mess up your software installation for your toolkit. Also change the package name in each of the files and the actual name of your .so file. When you interactively deploy a VI to the target that references your shared library through a Call Library Node and the shared library is not present or not properly installed, you will get an according message in the deployment error dialog with the name of the missing shared library and/or symbol. If you have some component that references a shared library through dlopen()/dlsym() yourself, then LabVIEW can not know that this won't work, as the dlopen() call will fail at runtime and not at deployment time, and therefore you will only get informed if you implement your own error handling around dlopen() (see the sketch after this post). But generally, why use dlopen(), since the Call Library Node basically uses dlopen()/dlsym() itself to load the shared library? Basically, if you reference other shared libraries explicitly by using dlopen()/dlsym() in a shared library, you will have to implement your own error handling around that. If you implicitly let the shared library reference symbols that should be provided by other shared libraries, then the loading of your shared library will fail when those references can't be resolved. The error message in the deployment dialog will tell you that the shared library referenced by the Call Library Node failed to load, but not that it failed because some secondary dependency couldn't be resolved. This is not really different from Windows, where you can either reference other DLLs by linking your shared library with an import library or do the referencing yourself by explicitly calling LoadLibrary()/GetProcAddress(). The only difference between Windows and ELF is that on Windows you can not create a shared library that has unresolved symbols. If you want the shared library to implicitly link to another shared library, you have to link your shared library with an import library that resolves all symbols. On ELF the linker simply assumes that any missing symbols will be resolved at load time somehow. That's why on Windows you need to link with labviewv.lib if you reference LabVIEW manager functions, with labviewv.lib actually being a specially crafted import library, as it uses delay load rather than normal load. That means a symbol will only be resolved to the actual LabVIEW runtime library function when first used, not when your shared library is loaded, but delay-load import libraries are a pretty special thing under Windows and there are no simple point-and-click tools in Visual Studio to create them. Please note that while I have repeatedly said here that ELF shared libraries are similar to Windows DLLs in these aspects, there are indeed quite some semantic differences, so please don't go around quoting me as having said they are the same. Versioning of ELF shared libraries is theoretically a useful feature, but in practice not trivial, since there are many library developers who have their own ideas about versioning of their shared libraries.
     Also, it is not an inherent feature of ELF shared libraries, but rather based on naming conventions for the resulting shared library, which are then resolved through extra symlinks that create file references for the .so name alone and for the .so name with the major version number. The theory is that the shared library itself uses a .so.major.minor version suffix and applications generally link to the .so.major symlink name. And any time there is a binary-incompatible interface change, the major version should be incremented. But while this is a nice theory, quite a few developers only follow that idea somewhat or not at all. In addition, I did have trouble getting the shared library recognized by ldconfig on the NI Linux RT targets unless I also created a .so name without any version information. I'm not sure why that doesn't seem to be an issue on normal Linux systems, but that could also be a difference caused by different kernel versions. I tend to use an older Ubuntu version for desktop Linux development, which also has an older kernel than what NI Linux RT is using.
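
     For the case where you call dlopen()/dlsym() yourself, the error handling mentioned above essentially boils down to checking the return values and reporting dlerror(); a minimal sketch with placeholder library and symbol names:

     #include <dlfcn.h>
     #include <stdio.h>

     typedef int (*some_func_t)(int);

     int call_some_func(int value)
     {
         // RTLD_NOW forces all symbols to be resolved right here instead of lazily
         // at first use, so unresolved secondary dependencies show up immediately.
         void *lib = dlopen("libsomething.so.1", RTLD_NOW);
         if (!lib)
         {
             fprintf(stderr, "dlopen failed: %s\n", dlerror());
             return -1;
         }

         some_func_t func = (some_func_t)dlsym(lib, "some_func");
         if (!func)
         {
             fprintf(stderr, "dlsym failed: %s\n", dlerror());
             dlclose(lib);
             return -1;
         }

         int result = func(value);
         dlclose(lib);
         return result;
     }
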
  10. There is no direct way to install binary modules for NI RT targets from the package manager interface. Basically those binary modules currently need to be installed through the Add Software option in MAX for the respective target. One way I found that does work, and which I have used for the OpenG ZIP Toolkit, is to install the actual .cdf files and binary modules into the "Program Files (x86)\National Instruments\RT Images" directory. Unfortunately this directory is protected and only accessible with elevated rights, which the package manager does not have by default. Instead of requiring VIPM to be started with administrative rights to allow copying the files to that directory, I created a setup program using InnoSetup that requests the administrative access rights from the user on launch. This setup program is then included in the VI package and launched during package installation through a post-install VI hook. You can have a look at the OpenG ZIP Toolkit sources on the OpenG Toolkit page on SourceForge to see how this all could be done. It's not trivial and not easy, but it is a workable option.
  11. .dylib is basically the actual shared library file that contains the compiled object code. It is similar to the .so file on Linux but in a different object format. .framework is a package, very much like the .app for an application. It is a directory structure containing several symlink files pointing, over a few redirections, to the actual binary object file that is the shared library. In addition it can contain string and other resource files for localization and version information. The low-level shared module loader works with the .dylib file or the binary object file inside the .framework, but the MacOSX shared library support works on the .framework level, although it does currently still support loading .dylibs directly too. But Apple tries to move everyone to the package format and removes more and more references in the documentation to the low-level access, and there is a good chance that support for that will eventually be deprecated. This is all in an attempt to remove unportable interfaces from the application level, in order to make an application more and more likely to work on any iOS-compatible device.
  12. You use hardware-based timing for your tasks. The only way to get this working as you describe is to buy two separate DAQ boards and run each of the two AI tasks on one of them. There is only one timing circuit on each board for analog input timing, and you can't have two tasks trying to use that circuitry at the same time. There is no software trick to make this work the way you want. You could modify your requirements and start the two AI channels together and do the trigger detection afterwards in the read data, though. You could even configure the AI task to start together with the AO task by making it a slave of the AO clock (see the sketch below).
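
     A hedged sketch of that last suggestion using the NI-DAQmx C API (device and channel names, rates and buffer sizes are just example values), where the AI task is clocked from the AO sample clock so both start together:

     #include <NIDAQmx.h>

     int ConfigureSharedClock(void)
     {
         TaskHandle aoTask = 0, aiTask = 0;
         float64 aoData[1000] = {0};
         int32 written = 0;

         DAQmxCreateTask("", &aoTask);
         DAQmxCreateAOVoltageChan(aoTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);
         DAQmxCfgSampClkTiming(aoTask, "", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

         DAQmxCreateTask("", &aiTask);
         DAQmxCreateAIVoltageChan(aiTask, "Dev1/ai0", "", DAQmx_Val_Cfg_Default, -10.0, 10.0, DAQmx_Val_Volts, NULL);
         // Use the AO sample clock as the AI timing source, so the AI task only
         // acquires while the AO task runs and both are sample synchronous.
         DAQmxCfgSampClkTiming(aiTask, "/Dev1/ao/SampleClock", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);

         // Write an output buffer, start the AI task first (it waits for its
         // AO provided clock), then start the AO task. Real code would check
         // every return value with DAQmxGetExtendedErrorInfo().
         DAQmxWriteAnalogF64(aoTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel, aoData, &written, NULL);
         DAQmxStartTask(aiTask);
         DAQmxStartTask(aoTask);
         return 0;
     }
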
  13. I use LabVIEW for Mac on an iMac regularly. What do you expect to not work? You don't have many hardware IO interfaces on the Mac, but running LabVIEW for Windows in a virtual machine won't give you more options for sure. And Boot Camp, or whatever that is called nowadays, likely won't be a full solution either, since Windows doesn't come with drivers for every hardware component in a MacBook Pro.
  14. Well, I guess two modulo/remainder operations were a big deal back in the early eighties. Apple smartly sidestepped that issue by choosing 1904 as their epoch for MacOS, and yes, I'm sure that was not only to be different from Lotus. Whether that was a deliberate decision back then or more negligence by the gals and guys at Lotus will probably never be found out for sure. It may also just have been something they "inherited" from VisiCalc.
  15. While it's a breaking change to modify this now, the original method of writing seconds since January 1, 1904 as a timestamp to SQLite is truly broken too. So I would investigate whether you can demote the current method as deprecated, remove it from the palettes, and add a new one that does the right thing. Existing applications using that function will still work as they used to, and still use a broken timestamp, while new developments would use the right method. Also document that difference somewhere, so anyone wanting to read databases written with the old method knows to use the deprecated method for reading, with a strong reminder not to use it for new development.
  16. LabVIEW's timestamp format is explicitly defined as the number of seconds since midnight January 1, 1904 GMT. There is no reason LabVIEW needs to adhere to any specific system epoch. On Windows the system time epoch is the number of 100 ns intervals since January 1, 1601, and Unix indeed uses January 1, 1970, while MacOS used January 1, 1904 (yes, LabVIEW was originally a MacOS-only application!). And as a curiosity, Excel has a default time definition of number of days since January 1, 1900, although due to a mishap when defining that epoch and forgetting that 1900 wasn't a leap year, the real epoch starts on midnight December 31, 1899. But there is a configuration option in Excel to shift this to the MacOS time epoch! It's definitely a good thing that they stick to the same time epoch on all LabVIEW platforms, and since the MacOS one was the first to be used by LabVIEW, it's a good choice to stick with that. If your API has a different epoch you have to provide functions to convert between the LabVIEW epoch and your API epoch and vice versa.
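
     A minimal sketch of such a conversion for the Unix epoch case, treating the timestamp as a plain count of seconds (the offset between January 1, 1904 and January 1, 1970 is 24107 days, i.e. 2082844800 seconds):

     #include <stdint.h>

     // Seconds between the LabVIEW epoch (1904-01-01 00:00:00 UTC) and the
     // Unix epoch (1970-01-01 00:00:00 UTC): 24107 days * 86400 s/day.
     static const int64_t kLabVIEWToUnixOffset = 2082844800LL;

     int64_t LabVIEWToUnixTime(double lvSeconds)
     {
         return (int64_t)lvSeconds - kLabVIEWToUnixOffset;
     }

     double UnixToLabVIEWTime(int64_t unixSeconds)
     {
         return (double)(unixSeconds + kLabVIEWToUnixOffset);
     }
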
  17. What JKSH writes is generally correct when the handle containing the handles is allocated by yourself. However, if that array of CodecInfo structs comes from LabVIEW there are strict rules that must be observed and followed when returning such data to LabVIEW. Anything beyond the number of elements indicated in the array is uninitialized, although the actual array handle space may be bigger, but never smaller, than needed for the number of valid elements. The only exception to this are empty array handles, which can be either a valid handle with a number of elements equal to 0 OR a null handle itself. So when receiving handles from LabVIEW you should always check for null and, depending on that, do DSNewHandle/DSNewHClr or DSSetHandleSize. When resizing a handle, existing elements need to be resized and appended elements always created. Elements removed when making an array smaller must always be recursively deallocated, as they don't exist for LabVIEW anymore once the array length has been readjusted to a smaller size, and would therefore create a memory leak. If you allocate both the array handle as well as the contained handles inside the array in the same code section, without returning to the LabVIEW diagram in between, it is entirely up to you whether you use DSNewHandle or DSNewHClr, as the values inside the newly allocated array need to be explicitly initialized anyhow. The latter requires more execution time for the allocation but may end up faster if you need to initialize most elements in there to 0 or an empty handle anyway. Also it may be a little safer when someone later makes modifications to the code, as null pointer dereferencing has a higher chance of crashing than accessing uninitialized pointers, so debugging is easier.
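
     Purely as a hedged illustration of those rules (the element layout is made up; a real CodecInfo cluster will look different), a resize routine following the pattern described above could look like this:

     #include <stddef.h>
     #include "extcode.h"   // LabVIEW manager API: DSSetHandleSize, DSNewHClr, DSDisposeHandle, ...

     typedef struct {
         LStrHandle name;   // hypothetical element: one string handle per codec
     } CodecInfo;

     typedef struct {
         int32 dimSize;
         CodecInfo elm[1];
     } CodecInfoArr, **CodecInfoArrHdl;

     MgErr ResizeCodecArray(CodecInfoArrHdl *arr, int32 newLen)
     {
         MgErr err = noErr;
         int32 i, oldLen = 0;
         size_t size = offsetof(CodecInfoArr, elm) + newLen * sizeof(CodecInfo);

         if (*arr)
         {
             oldLen = (**arr)->dimSize;
             // Elements removed by shrinking must be deallocated before the resize,
             // otherwise their handles leak once dimSize is lowered.
             for (i = newLen; i < oldLen; i++)
             {
                 DSDisposeHandle((UHandle)(**arr)->elm[i].name);
                 (**arr)->elm[i].name = NULL;
             }
             err = DSSetHandleSize((UHandle)*arr, size);
         }
         else
         {
             // An empty LabVIEW array can arrive as a NULL handle.
             *arr = (CodecInfoArrHdl)DSNewHClr(size);
             if (!*arr)
                 err = mFullErr;
         }
         if (!err)
         {
             // Newly appended elements start out as empty (NULL) handles.
             for (i = oldLen; i < newLen; i++)
                 (**arr)->elm[i].name = NULL;
             (**arr)->dimSize = newLen;
         }
         return err;
     }
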
  18. I thought it would be this, but that has many shortcomings. The only way to destroy an occurrence is by calling the DestroyOccur() C API in LabVIEW. However, if you do that with an occurrence that was created with the Create Occurrence node, that occurrence is gone and the only way to get it back is by unloading and reloading the VI that contains the Create Occurrence node. Without reloading the VI, this occurrence will be invalid and immediately run into the timeout case again if you restart the VI. Not exactly a convenient thing when you are developing and starting and stopping your app repeatedly. Of course, if you create the occurrence by calling the AllocOccur() API this is not an issue, but then you are already calling two undocumented C APIs.
  19. You forgot a smiley there after the last sentence!
  20. The Create Occurrence node only executes at VI load time, and that is by design. It has been that way since the inception of occurrences in LabVIEW, back around LabVIEW 2.0. There is the undocumented LabVIEW manager function AllocOccur() that returns a unique occurrence refnum every time it is executed. However, since around LabVIEW 6 you have notifiers, queues, and semaphores, which internally do use occurrences for the signaling aspect but have the extra advantage of allowing you to transport some data along with the signal event, which occurrences don't. Occurrences did have a nasty feature in the past, and may still have it, that could bite you if you are not careful: their state would remain triggered once they were set if the Wait on Occurrence wasn't executed before the program was terminated, and that would immediately trigger the Wait on Occurrence on a new start of the application, even if in this new run the occurrence hadn't been set yet. As to destroying the occurrence to let the Wait on Occurrence terminate, that sounds like a pretty brute-force approach. Why resort to that instead of just setting the occurrence anyway?
  21. You launch the VI inside your RT app and want its front panel to show on your host computer? That is not possible to the best of my knowledge. Your host application can have a front panel that it shows and remotely call your VI through VI Server to execute some code on your RT target, though.
  22. I'm not sure where you see hostility in those remarks. Yes, it may not be sugar-coated sweet talk, but hostile? I think you should reconsider this.
  23. I take issue with that n**i word. If someone comes to you and tells you your application doesn't run, you would also try to educate him that a lot more information is needed in order for you to be able to do anything for him, wouldn't you? In this case it is not even software I wrote, so how could we have even guessed from the first two posts of the OP that GPIB might be involved? Nor did he provide any information about his application other than that it caused a specific error number. Posting in OpenG actually made me guess at first that it might be related to some OpenG toolkit function, otherwise I would have left the post entirely alone in the first place.
  24. Well, it is already pretty helpful to know that the function uses GPIB, but it would be even more helpful to see what is actually inside that VI. Most likely it uses the old GPIB function interface, and there error 6 actually means that the GPIB IO operation was aborted. That is a somewhat generic error for GPIB operations: the GPIB controller detected some kind of error and aborted the transfer because of it. Check out this document for a list of possible GPIB errors and what they could mean. As to posting a photograph of a screenshot: posting the actual VI AND subVI would have taken less work for you and would be about 100 times more useful, as we could then see how the VI that causes the error is built up. We still can only guess that it is probably the GPIB function causing that error.