Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Nope. Binary shared library dependencies only get copied over to Pharlap targets during deployment. All other targets need the binary dependencies to be copied by hand (VxWorks targets only) or explicitly installed with a software install script from within MAX (the latest ZIP Toolkit beta does install these scripts onto the development machine when you install the Toolkit into a 32-bit LabVIEW for Windows version; other LabVIEW versions don't support realtime development anyway).
  2. Not really. PostLVUserEvent() isn't documented in any way that deserves the word documentation. Yes, it is mentioned in the External Code Reference manual, just like all the other public LabVIEW manager functions. But for most of them that documentation seldom goes further than the function prototype, some more or less meaningful parameter names, and a short description of each parameter that mostly repeats what can be derived from the parameter name anyway. The fact that you can register callback VIs for events other than .Net events is not only undocumented, it has been described by LabVIEW developers as a byproduct of the LabVIEW event architecture and shouldn't really be relied upon to always work exactly the same. I never heard anyone mention that callback VIs could be used with PostLVUserEvent(), but it is a logical step considering that they work for other LabVIEW user events, and I woke up this morning with the idea that this might be something to try out here. Nice that Jack confirmed this already, and the extra tidbit about the calls being synchronous for callback VIs is interesting information too, although logical if you think about it. Of course it also allows the callback VI developer to easily block the whole system if he ends up accessing the same API again that invoked the callback VI!
  3. One possible solution might be what I did in the OpenG ZIP Library. There the open function is polymorphic and lets you select whether the subsequent operations should be performed on a file on disk or on a byte array stream. For the Unzip-on-stream operation, the "stream" to read from is passed to the Open function as a LabVIEW string (it really should be a byte array, but traditionally LabVIEW uses strings for APIs that work as byte streams, like TCP/UDP, VISA, etc.). For the ZIP operation on a stream, the Close function returns a LabVIEW string containing the byte stream data. This isn't pluggable with user-provided VIs that provide direct stream access as you intend, and it has the drawback that the entire data has to be in memory during the entire operation, but it is at least possible to implement reasonably.
  4. That's not going to work like that in this case, I'm afraid, without some means of synchronization. The FFmpeg library does not call the callback to send data to the client but to request the next chunk of data from the "stream". As such it is also not a classic callback interface but rather a stream API with a device-driver-style handle that contains specific method function pointers for the various stream operations (open/read/write/seek/close). The library calls these functions and expects them to return AFTER the function has done the necessary work on the "stream". While not exactly impossible, it is a rather complicated and cumbersome interface. You could write a VI that can act as the "callback"; it's not really the normal callback type but rather, as explained above, a driver API with a "handle" containing specific access methods as function pointers. It's a very popular C technique for pluggable APIs, but really only easily accessible from C too. You could also see it as the C implementation of a C++ object pointer. Then compile those VIs into a DLL whose entry points you LoadLibrary() into your LabVIEW cluster mimicking that API structure with the function pointers. But there are many problems with that:
1) The DLL will run in the LabVIEW runtime version that corresponds to the LabVIEW version used to create it. If your users are going to run this in a different LabVIEW version, there will be a lot of data marshalling between the user's LabVIEW system and the LabVIEW runtime in which the DLL runs.
2) If you want to make this pluggable with user VIs, your DLL will somehow have to have a way of registering the user's LabVIEW VI server instance with it so that your VI can proxy to the user VI through VI server, which adds even more marshalling overhead to the whole picture.
3) Every little change will require you to recreate the LabVIEW DLL, distribute it somehow, and hope it doesn't break with some user's setup.
4) The whole story about loading the DLL function pointers into a LabVIEW cluster to serve as your FFmpeg library handle is utterly ugly and error prone, and supporting both 32- and 64-bit LabVIEW versions will also require entirely different clusters that your code needs to use conditionally depending on the bitness of LabVIEW.
5) The chance of getting this working reliably is pretty small; it will require lots and lots of work that will need to be rechecked every time you modify anything anywhere in that code, and it allows a user to mess up the whole thing if he is careless enough to go into those parts of the VIs and modify anything.
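The driver-style interface described above can be sketched in plain C. All the names here (stream_ops, mem_stream, consume_stream) are invented for this illustration and do not come from FFmpeg or LabVIEW; the point is only the pattern: a "handle" struct carrying function pointers plus a context pointer, which the library calls and expects to return only after the work is done.

```c
#include <stddef.h>
#include <string.h>

/* The "handle": a context pointer plus method function pointers,
   the C equivalent of a C++ object pointer. */
typedef struct stream_ops {
    void  *ctx;                                      /* user state       */
    size_t (*read)(void *ctx, void *buf, size_t n);  /* pull next chunk  */
    long   (*seek)(void *ctx, long offset);          /* reposition       */
} stream_ops;

/* One concrete implementation: a stream backed by a memory buffer. */
typedef struct mem_stream {
    const char *data;
    size_t      len;
    size_t      pos;
} mem_stream;

static size_t mem_read(void *ctx, void *buf, size_t n) {
    mem_stream *s = (mem_stream *)ctx;
    size_t avail = s->len - s->pos;
    if (n > avail) n = avail;
    memcpy(buf, s->data + s->pos, n);
    s->pos += n;            /* returns only AFTER the work is done */
    return n;
}

static long mem_seek(void *ctx, long offset) {
    mem_stream *s = (mem_stream *)ctx;
    if (offset < 0 || (size_t)offset > s->len) return -1;
    s->pos = (size_t)offset;
    return offset;
}

/* The "library" side only ever sees the ops table, never the
   concrete implementation behind it. */
static size_t consume_stream(stream_ops *ops, char *out, size_t n) {
    return ops->read(ops->ctx, out, n);
}
```

A file-backed implementation would fill the same stream_ops struct with fread/fseek-style wrappers, which is exactly why the library never needs to know what kind of "stream" it is reading from.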
  5. That's more or less how you need to do it with the published LabVIEW APIs. Nothing else will really work easier or better without going the route of undocumented LabVIEW manager calls. NI does have the possibility to call VIs from within C code directly, but that is only used within LabVIEW itself, not in external code AFAIK, and that functionality may hit bad issues if used from external code. Lua for LabVIEW does something similar, but without using PostLVUserEvent(), as that wasn't really available when Lua for LabVIEW was developed. The Lua for LabVIEW API allows registering VIs in the C interface under a Lua function name. When the Lua bytecode interpreter encounters such a name, the call is passed back to a VI daemon (a background VI process that Lua for LabVIEW starts behind the scenes); that daemon then pulls the parameters from the Lua stack, calls the VI, and pushes any return values back on the Lua stack before handing control back to the Lua engine. Quite involved and tricky to handle correctly, but the only feasible way to deal with this problem. There is also a lot of sanity checking of parameters and their types necessary to avoid invalid execution and crashes, as you do want to avoid the situation where a user can crash the whole thing by making an error in the VI interface.
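The registration idea behind that daemon can be reduced to a toy sketch in C: the C side keeps a table mapping names to handler functions, and when the interpreter encounters a registered name, the call is dispatched through the table, with a sanity check for names that were never registered. Everything here (register_handler, dispatch, the fixed-size table) is invented for illustration; this is not the Lua for LabVIEW API.

```c
#include <string.h>

#define MAX_FUNCS 16

typedef int (*handler_fn)(int arg);

/* Name-to-handler table, the heart of the registration scheme. */
static struct {
    const char *name;
    handler_fn  fn;
} registry[MAX_FUNCS];
static int registry_count = 0;

/* Register a handler under a name; returns -1 if the table is full. */
static int register_handler(const char *name, handler_fn fn) {
    if (registry_count >= MAX_FUNCS) return -1;
    registry[registry_count].name = name;
    registry[registry_count].fn   = fn;
    registry_count++;
    return 0;
}

/* Dispatch by name; the -1 on an unknown name is the kind of sanity
   check needed to avoid invalid execution. */
static int dispatch(const char *name, int arg) {
    for (int i = 0; i < registry_count; i++)
        if (strcmp(registry[i].name, name) == 0)
            return registry[i].fn(arg);
    return -1;
}

static int double_it(int x) { return 2 * x; }
```

In the real system the handler would not be a C function but a proxy that hands the call to the VI daemon and waits for the result; the lookup-then-dispatch shape stays the same.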
  6. You have a misunderstanding here. LabVIEW really knows two variant types: the ActiveX variant, which is presumably what you can also interface to with the cviauto.h file, and the native variant. When you pass a native variant to a Call Library Node configured to accept an ActiveX variant, LabVIEW converts from one to the other, which in fact means creating a complete copy of all the data inside. However, ActiveX variants do not know attributes in the sense that the LabVIEW variant does. So those attributes are not only not converted, but cannot be passed along with the ActiveX variant in any meaningful way, and are therefore simply not present on the ActiveX side. While you can pass a native variant to C code with the Adapt to Type configuration in the CLN, this doesn't buy you anything, since the C API to access the native variant data is not officially documented by NI.
  7. Which LabVIEW version? According to my own tests some time ago, there was no way to get lvlib, lvclass, or similar LabVIEW 8.x-and-later file types into an LLB.
  8. The project directory is fine for DLLs that you declare by name only inside the Call Library Node. Beyond that, however, the project directory has absolutely no meaning to Windows. Say LabVIEW tries to load "a.dll" in your project directory, which depends on "b.dll" and "c.dll". After LabVIEW attempts LoadLibrary() with the correct path for "a.dll", everything is out of LabVIEW's hands and only Windows search path rules apply. That means Windows will not search the directory you loaded your project file from, but the directory where the current executable is located. For a built application this is the directory where your myapp.exe is located, but when you work in the LabVIEW IDE it is the install directory of LabVIEW itself, where labview.exe is located.
  9. This is probably one of your problems. DLLs that are directly called from LabVIEW VIs can be configured to be moved into any folder by the Application Builder (the default is the "data" folder), as the Application Builder will adjust the library path in the Call Library Node to point to that location when building the application. However, secondary dependencies are not resolved by LabVIEW but either by Windows or, very seldom, by the DLL explicitly. Windows knows absolutely nothing about a "data" folder and will NOT search in there for DLLs unless you happen to add that directory to the PATH environment variable, which is not a good solution anyhow. Instead you need to move these secondary DLLs into the same directory as your executable. This is always the first directory Windows will search when asked to load a DLL. I usually modify the Application Builder script to install all DLLs into the main directory instead of into a separate data subdirectory.
  10. Well, I never worked with ATM directly, but early on I did work at a company that made telecommunication products, and one of the products developed there used ATM. As it would seem, the fact that whatever you must implement uses ATM isn't really the main issue here. There is nothing like a standard API for ATM on modern computer platforms. So the question really boils down to: how is your computer even connected to the ATM network, and do you have documentation about the API for the driver of that card?
  11. I did, using a variant of the factory pattern. It was an instrument driver for a USB serial device that could be one of two different device types. I implemented the low-level driver for each device as a class derived from the main class, which was the actual interface used by a user of the driver. Depending on a selection by the user, either of the two low-level drivers was instantiated and used for the actual implementation of the device communication. It worked fine, except that you need to do some path magic to allow for execution on the RT target. It's mostly the same as what you need to do for execution in a built application, but there is a difference between the paths when you deploy the program directly from the project (during debugging, for instance) and when you build an rtexe application and execute that.
  12. That's not a good idea!! The new 64-bit shared library has various changes that will not work at all without an updated ZIP VI library and support files. The VIs as they are in the sourceforge repository are the updated ones. A new package needs to be built, but I have delayed that since there are still some issues with additional functionality I wanted to include. This here is an early beta version of a new package which adds support for 64-bit Windows. The MacOSX package support hasn't been added yet, so that part won't work at all. What it does contain is an installer for support for the NI realtime targets. This RT installer will however only get installed if you install the package into 32-bit LabVIEW for Windows, since that is the only version which supports realtime development so far. Once it is installed, you can go into MAX and select to install extra software for your RT target. Then select custom install, and in there should be an OpenG ZIP Toolkit entry which will make sure to install the necessary shared library to your target. For deflate and inflate alone, replacing the shared library may indeed be enough, but trying to run any of the other ZIP library functions has a very big chance of crashing your system if you mix the new shared library with the old VIs. That package was released in 2011; 64-bit LabVIEW already existed then (since LabVIEW 2009), but VIPM didn't know about 64-bit LabVIEW yet, and one could not even attempt to convince VIPM to make a difference there. Also, the updated package was mainly just a repackaging of the older version from 2009 to support the new VI palette organization, and nothing else. oglib_lvzip-4.1.0-b3.ogp
  13. Windows 7, but even explicitly changing the security settings for the WMI root\\CIMV2 tree wouldn't allow me to do a WMIService->ExecQuery(), no matter what I try to query, so there might be something else going wrong despite error code 80041003 indicating some access rights issue. I'll try to debug it a bit more, but in the meantime I have found code that uses various Win32 APIs to query this information more quickly and more reliably, as WMI can sometimes misreport this.
  14. And I tried to implement the WMI calls through COM in a DLL to be called from LabVIEW. No such luck. It turns out you can't CoInitialize() the COM system in the DLL, since LabVIEW has done that already. And you also can't initialize the COM system with extra security privileges, since LabVIEW has done this initialization already (most likely implicitly on startup, with the lowest possible privileges), and COM does not support readjusting the security privileges later on. Without those security privileges, querying any WMI database info then fails.
  15. Since that is an abstract class, you can't instantiate it. Abstract classes are similar to interfaces: they describe an object interface and implement part of the object, but leave out at least one method to be implemented by the end user. Even in a real .Net language you would have to create an actual implementation of this class in your code that derives from the abstract class and implements all of its abstract methods. LabVIEW can interface with .Net but cannot derive from .Net classes itself. As such, you can't pull this off without some external help. You will have to create a .Net assembly in VB, C#, or any other .Net development platform you are familiar with. This assembly has to implement your specific MyConditionExpression class that derives from ConditionExpression; then you can instantiate that from your assembly and pass it to this method.
  16. Your code should ALWAYS call the DLL that has the same bitness your application was compiled for. So if you used LabVIEW 32-bit to create your application (or run your VI in LabVIEW 32-bit), you MUST reference the 32-bit DLL, regardless of whether you run on 32-bit or 64-bit Windows. A 32-bit process CANNOT load a 64-bit DLL (for code execution, that is, but that is all you care about!), nor can a 64-bit process load 32-bit DLLs. The problem is not about a 32-bit DLL living somewhere in a random subdirectory, but about a 32-bit DLL living in a directory that a 64-bit system considers reserved for 64-bit code, such as "C:\Program Files\..." or "C:\Windows\System32", and 64-bit DLLs living in a location that the 64-bit system considers reserved for 32-bit code, such as "C:\Program Files (x86)\.." or "C:\Windows\SysWOW64". If you have a 32-bit system those file system redirections do not apply, but any 64-bit code file anywhere will be considered an invalid executable image file.
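When a wrapper has to pick the right library name for the process it runs in, the bitness of the current process can be determined portably from the pointer size. A minimal sketch (the function name and the two DLL names are invented for illustration):

```c
#include <stddef.h>

/* Pointer size is 4 bytes in a 32-bit process and 8 bytes in a
   64-bit process, independent of the bitness of the OS underneath. */
static int process_bits(void) {
    return (int)(sizeof(void *) * 8);   /* 32 or 64 */
}

/* Hypothetical helper: choose the library matching the process,
   never the one matching the OS. */
static const char *matching_dll_name(void) {
    return process_bits() == 64 ? "mylib64.dll" : "mylib32.dll";
}
```

This is exactly why a 32-bit LabVIEW on 64-bit Windows still needs the 32-bit DLL: the check depends on the process, not on the installed OS.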
  17. Shaun already told you that this is a 32-bit path. A 64-bit application will attempt to load "C:\program files\signal hound\spike\api\..." no matter what you do. As to error codes: LabVIEW does sometimes second-guess Windows error codes and attempts to determine a more suitable code, also when loading shared libraries, but that has to end at some point. Otherwise they could start to reimplement whole parts of Windows; that would actually be easier than trying to second-guess system API error codes, which can also vary between OS versions and even depend on system extensions that may or may not be installed. There is a possibility to disable file system redirection temporarily through Wow64DisableWow64FsRedirection(), which LabVIEW might do when loading an explicit DLL defined in the Library Path of the CLN configuration (at least as a secondary attempt if the normal load fails). However, this is a thread-global setting. Loading of DLLs that are explicitly named in the configuration dialog happens at load time of the VI, which is a highly serialized operation with all kinds of protection locks in place inside LabVIEW to make sure there won't be any race conditions as LabVIEW updates its internal bookkeeping tables about what resources it has loaded, etc. The loading of DLLs that are passed as a path to the CLN has to happen at runtime and executes in the context of the thread that executes the CLN call, and other parts of the diagram if the CLN is set to execute reentrantly; otherwise it has to compete with the GUI update, which also does all kinds of things that could get badly upset with file system redirection disabled. So while it can be safe to temporarily disable file system redirection during VI load time, this is a much more complicated issue at runtime, and the safe thing is to simply not do it.
It is even less safe to wrap your CLN with additional CLN calls to Wow64DisableWow64FsRedirection()/Wow64RevertWow64FsRedirection(), since you would have to execute all 3 CLNs in the UI thread to make sure they operate as intended, and that could influence other things in LabVIEW that happen in the UI thread between calls to your CLN. If you set the CLN calls to run as reentrant, they will most likely not run in the same thread at all, as LabVIEW executes diagram clumps in arbitrary threads within an execution system thread pool. There are only two ways to make this still work if you need to disable file system redirection. One is to write a DLL that your CLN calls, which first calls Wow64DisableWow64FsRedirection(), then attempts to load the DLL with LoadLibrary(), then calls Wow64RevertWow64FsRedirection(), and then calls the function. The other is to create a subVI with subroutine execution and pack the Wow64DisableWow64FsRedirection() call, the CLN call, and the Wow64RevertWow64FsRedirection() call into it. Subroutine VIs are guaranteed to be executed in one go, without any thread switching during the execution of the entire subroutine diagram. More precisely though, your API installation is broken. Installing 64-bit DLLs inside "Program Files (x86)" is very BAD, as is installing 32-bit DLLs inside "Program Files" or "Windows\System32" on a 64-bit system.
  18. Where would the paths be located? There is something called file system redirection in Windows Vista and higher, which redirects certain paths like C:\Windows\System32 or C:\Program Files and C:\Program Files (x86) to whatever the kernel considers the appropriate location for the current application based on its bitness. So even though you ask for C:\Windows\System32 in a 32-bit process, you will be redirected to C:\Windows\SysWOW64 when running on a 64-bit OS! It's possible that LabVIEW attempts to be smart when trying to reference a DLL that is defined in the Call Library Node, but it won't second-guess Windows' decision when you define the dynamic path.
  19. Well, lots of questions and some assumptions. I created the cdf files for the OpenG ZIP library by hand, from looking at other cdf files. Basically, if you want to do something similar, you could take the files from the OpenG ZIP library and change the GUID in there to a self-generated GUID. This is the identifier for your package and needs to be unique, so you cannot use that of another package or you will mess up the software installation for your toolkit. Also change the package name in each of the files and the actual name of your .so file. When you interactively deploy a VI to the target that references your shared library through a Call Library Node and the shared library is not present or properly installed, you will get an according message in the deployment error dialog with the name of the missing shared library and/or symbol. If you have some component that references a shared library through dlopen()/dlsym() yourself, then LabVIEW cannot know that this won't work, as the dlopen() call will fail at runtime and not at deployment time, and therefore you will only get informed if you implement your own error handling around dlopen(). But why use dlopen() at all, since the Call Library Node basically uses dlopen()/dlsym() itself to load the shared library? Basically, if you reference other shared libraries explicitly by using dlopen()/dlsym() in a shared library, you will have to implement your own error handling around that. If you implicitly let the shared library reference symbols that should be provided by other shared libraries, then the loading of your shared library will fail when those references can't be resolved. The error message in the deployment dialog will tell you that the shared library referenced by the Call Library Node failed to load, but not that it failed because some secondary dependency couldn't be resolved.
This is not really different from Windows, where you can either reference other DLLs by linking your shared library with an import library, or do the referencing yourself by explicitly calling LoadLibrary()/GetProcAddress(). The only difference between Windows and ELF is that on Windows you cannot create a shared library that has unresolved symbols. If you want the shared library to implicitly link to another shared library, you have to link your shared library with an import library that resolves all symbols. With ELF, the linker simply assumes that any missing symbols will be resolved at load time somehow. That's why on Windows you need to link with labviewv.lib if you reference LabVIEW manager functions, with labviewv.lib actually being a specially crafted import library, as it uses delay load rather than normal load. That means a symbol will only be resolved to the actual LabVIEW runtime library function when first used, not when your shared library is loaded; but delay-load import libraries are a pretty special thing under Windows, and there are no simple point-and-click tools in Visual Studio to create them. Please note that while I have repeatedly said here that ELF shared libraries are similar to Windows DLLs in these aspects, there are indeed quite some semantic differences, so please don't go around quoting me as having said they are the same. Versioning of ELF shared libraries is theoretically a useful feature, but in practice it is not trivial, since many library developers have their own ideas about versioning of their shared libraries. Also, it is not an inherent feature of ELF shared libraries but rather based on naming conventions for the resulting shared library, which is then resolved through extra symlinks that create file references for the so name alone and for the so name with the major version number. The theory is that the shared library itself uses a .so.major.minor version suffix and applications generally link to the .so.major symlink name.
And any time there is a binary-incompatible interface change, the major version should be incremented. But while this is a nice theory, quite a few developers follow that idea only somewhat or not at all. In addition, I did have trouble getting the shared library recognized by ldconfig on the NI Linux RT targets unless I created the .so name without any version information. Not sure why that doesn't seem to be an issue on normal Linux systems, but that could also be a difference caused by different kernel versions. I tend to use an older Ubuntu version for desktop Linux development, which also has an older kernel than what NI Linux RT is using.
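The error handling you have to do yourself around dlopen()/dlsym() looks roughly like this sketch (the helper name load_symbol is invented; the dlerror() discipline of clearing before dlsym() and checking after is the standard POSIX pattern, since a symbol's value may legitimately be NULL):

```c
#include <dlfcn.h>
#include <stdio.h>

/* Explicitly load a library and resolve one symbol, reporting why it
   failed - this is the error handling LabVIEW cannot do for you when
   your own code calls dlopen()/dlsym() at runtime.
   Passing libname == NULL opens the main program's own symbol scope. */
static void *load_symbol(const char *libname, const char *symname) {
    void *lib = dlopen(libname, RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return NULL;
    }
    dlerror();                           /* clear any stale error */
    void *sym = dlsym(lib, symname);
    const char *err = dlerror();
    if (err) {
        fprintf(stderr, "dlsym failed: %s\n", err);
        return NULL;
    }
    return sym;
}
```

With RTLD_NOW the load fails immediately if any symbol the library references cannot be resolved, which mirrors the deployment-dialog failure described above, only now the message from dlerror() actually tells you what was missing.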
  20. There is no direct way to install binary modules for NI RT targets from the package manager interface. Currently those binary modules basically need to be installed through the Add Software option in MAX for the respective target. One way I found that does work, and which I have used for the OpenG ZIP Toolkit, is to install the actual .cdf files and binary modules in the "Program Files (x86)\National Instruments\RT Images" directory. Unfortunately this directory is protected and only accessible with elevated rights, which the package manager does not have by default. Instead of requiring VIPM to be started with administrative rights to allow copying the files to that directory, I created a setup program using InnoSetup that requests administrative access rights from the user on launch. This setup program is then included in the VI package and launched during package installation through a post-install VI hook. You can have a look at the OpenG ZIP Toolkit sources on the OpenG Toolkit page on sourceforge to see how this all could be done. It's not trivial and not easy, but it is a workable option.
  21. .dylib is basically the actual shared library file that contains the compiled object code. It is similar to the .so file on Linux, but in a different object format. .framework is a package, very much like the .app for an application. It is a directory structure containing several symlink files pointing, over a few redirections, to the actual binary object file that is the shared library. In addition it can contain string and other resource files for localization and version information. The low-level shared module loader works with the .dylib file or the binary object file inside the .framework, but the MacOSX shared library support works on the .framework level, although it does currently still support loading .dylibs directly too. But Apple tries to move everyone to the package format and keeps removing references in the documentation to the low-level access, so there is a good chance that support for it will eventually be deprecated. This is all in an attempt to remove unportable interfaces from the application level in order to make an application more and more likely to work on any iOS-compatible device.
  22. You use hardware-based timing for your tasks. The only way to get this working as you describe is to buy two separate DAQ boards and run each of the two AI tasks on one of them. There is one timer circuit on each board for analog input timing, and you can't have two tasks trying to use that circuit at the same time. There is no software trick to make this work as you want. You could modify your requirements and start the two AI channels together, doing the trigger detection afterwards on the read data, though. You could even configure the AI task to start together with the AO task by making it a slave of the AO clock.
  23. I use LabVIEW for Mac on an iMac regularly. What do you expect to not work? You don't have many hardware IO interfaces on the Mac, but running LabVIEW for Windows in a virtual machine won't give you more options for sure. And Boot Camp, or whatever that is called nowadays, will likely not be a full solution either, since Windows doesn't come with drivers for every hardware component in a MacBook Pro.
  24. Well, I guess two modulo-remainder operations were a big deal back in the early eighties. Apple smartly sidestepped that issue by choosing 1904 as their epoch for MacOS, and yes, I'm sure that was not only to be different from Lotus. Whether that was a deliberate decision back then or more negligence by the gals and guys at Lotus will probably never be found out for sure. It may also just have been something they "inherited" from VisiCalc.
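The arithmetic behind the 1904 trick: the full Gregorian rule needs the century and 400-year checks (the "two extra modulo-remainder operations"), but because 1904 is a leap year and no century exception occurs before 2100, an epoch of 1904 lets a bare year % 4 test give correct answers for the entire 1904..2099 range. A sketch of both rules:

```c
/* Full Gregorian leap-year rule: divisible by 4, except centuries,
   except centuries divisible by 400 (so 1900 is not a leap year,
   2000 is). */
static int is_leap_full(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* The cheap rule enabled by a 1904 epoch; it agrees with the full
   rule for every year from 1904 through 2099, failing first at 2100. */
static int is_leap_simple(int year) {
    return year % 4 == 0;
}
```

This is also exactly where Lotus 1-2-3's famous 1900 epoch goes wrong: with the simple rule, 1900 is wrongly treated as a leap year, a bug the 1904 epoch avoids by construction.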