Package manager and different target platforms



Hi,

 

I have a library that uses unmanaged compiled binary code; so far I can target Windows 32-bit and Windows 64-bit. I am now also attempting to compile with the ARMv7 and x64 NI compilers for the latest NI Linux RT cRIOs.

 

What is the correct way to install it for different platforms using VIPM?

 

1. So far I just put all the data that might be necessary into the package and let my own VIs take care of the installation process (copying the correct files to the correct locations etc.).

 

2. The alternative would be to produce multiple libraries, like LIBRARY_WIN32, LIBRARY_WIN64, LIBRARY_LINRTx64 etc., and let VIPM deploy them using the standard file deployment options.


There is no direct way to install binary modules for NI RT targets from the package manager interface. Basically, those binary modules currently need to be installed through the Add Software option in MAX for the respective target.

 

One way I found that does work, and which I have used for the OpenG ZIP Toolkit, is to install the actual .cdf files and binary modules into the "Program Files (x86)\National Instruments\RT Images" directory. Unfortunately, this directory is protected and only accessible with elevated rights, which the package manager does not have by default. Rather than requiring VIPM to be started with administrative rights to allow copying the files into that directory, I created a setup program using Inno Setup that requests administrative access rights from the user on launch. This setup program is then included in the VI package and launched during package installation through a post-install VI hook.
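
For reference, the mechanism that makes this work in Inno Setup is a single directive. A minimal sketch of such a script could look like the following (all names and paths here are placeholders, not the actual OpenG ZIP Toolkit script):

```ini
[Setup]
AppName=MyToolkit RT Support
AppVersion=1.0.0
; Request elevation (UAC prompt) at launch so the installer may write
; into the protected RT Images directory.
PrivilegesRequired=admin
DefaultDirName={pf32}\National Instruments\RT Images\MyToolkit

[Files]
; Copy the .cdf file and the compiled binary modules for each target.
Source: "cdf\*.cdf"; DestDir: "{app}"
Source: "bin\linux_x64\*.so"; DestDir: "{app}"
```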

 

You can have a look at the OpenG ZIP Toolkit sources on the OpenG Toolkit page on SourceForge to see how this all could be done. It's neither trivial nor easy, but it is a workable option.


So I can create files, deploy them into LabVIEW directories, and that will make the RT software package show up in MAX and also take care of deploying the binaries to the RT target when it is used? Sounds good. I tried to find out how to do it (produce CDF files and follow some directory structure), but only found this:

http://www.ni.com/white-paper/12919/en/#toc1

which describes doing it through some NI wizard. I don't have a cRIO (I've only seen one in pictures on the internet :D) nor any Real-Time module, so this is quite a distant task for now.

Thanks for the answer.

 

P.S. Does the NI runtime include some mechanism to check for correct deployment of the dependencies? Does it somehow translate OS errors when reporting missing *.so files? And does it make sense to drop the usage of dlopen(), dlclose() and dlsym() on the Real-Time Linux target? After reading up on the ELF linker/loader, it seems to me that programmatically loading shared objects brings more bad than good, since the ELF loader already does all this stuff much better than Windows, and shared objects on Linux even have versioning!?


Well, lots of questions and some assumptions. I created the .cdf files for the OpenG ZIP library by hand, by looking at other .cdf files. Basically, if you want to do something similar, you could take the files from the OpenG ZIP library and change the GUID in there to a self-generated GUID. This is the identifier for your package and needs to be unique, so you cannot use that of another package, or you will mess up the software installation for your toolkit. Also change the package name in each of the files and the actual name of your .so file.
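
If you need a fresh GUID, any generator will do (guidgen.exe from Visual Studio, or uuidgen on Linux); purely as an illustration, generating one programmatically on Windows looks roughly like this:

```c
/* Sketch: print a newly generated GUID on Windows.
   Build with:  cl makeguid.c ole32.lib  */
#include <objbase.h>
#include <stdio.h>

int main(void)
{
    GUID guid;
    wchar_t text[64];

    if (CoCreateGuid(&guid) != S_OK)
        return 1;
    /* Format as the usual {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX} string. */
    if (StringFromGUID2(&guid, text, 64) == 0)
        return 1;
    wprintf(L"%ls\n", text);
    return 0;
}
```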

 

When you interactively deploy a VI to the target that references your shared library through a Call Library Node, and the shared library is not present or not properly installed, you will get a corresponding message in the deployment error dialog with the name of the missing shared library and/or symbol. If you have some component that references a shared library through dlopen()/dlsym() yourself, then LabVIEW cannot know that this won't work, since the dlopen() call fails at runtime and not at deployment time; you will therefore only get informed if you implement your own error handling around dlopen(). But generally, why use dlopen() at all, since the Call Library Node basically uses dlopen()/dlsym() itself to load the shared library?

Basically, if you explicitly reference other shared libraries by using dlopen()/dlsym() in a shared library, you will have to implement your own error handling around that. If you instead implicitly let the shared library reference symbols that should be provided by other shared libraries, then loading your shared library will fail when those references can't be resolved. The error message in the deployment dialog will tell you that the shared library referenced by the Call Library Node failed to load, but not that it failed because some secondary dependency couldn't be resolved. This is not really different from Windows, where you can either reference other DLLs by linking your shared library against an import library, or do the referencing yourself by explicitly calling LoadLibrary()/GetProcAddress().
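
If you do go the explicit route, the error handling mentioned above boils down to checking every dlopen()/dlsym() result and retrieving the reason with dlerror(). A minimal sketch (the library and symbol names are made up for illustration; build with -ldl on Linux):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Explicitly load a (hypothetical) secondary dependency at runtime. */
    void *lib = dlopen("libmydep.so", RTLD_NOW | RTLD_LOCAL);
    if (!lib) {
        /* dlerror() gives the reason, e.g. a missing file or an
           unresolved symbol inside the dependency itself. */
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a (hypothetical) symbol and check that as well. */
    int (*my_func)(int) = (int (*)(int))dlsym(lib, "my_func");
    if (!my_func) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(lib);
        return 1;
    }

    printf("my_func(42) = %d\n", my_func(42));
    dlclose(lib);
    return 0;
}
```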

 

The only difference between Windows and ELF here is that on Windows you cannot create a shared library that has unresolved symbols. If you want the shared library to implicitly link to another shared library, you have to link your shared library with an import library that resolves all symbols. With ELF, the linker simply assumes that any missing symbols will somehow be resolved at load time. That's why on Windows you need to link with labviewv.lib if you reference LabVIEW manager functions, with labviewv.lib actually being a specially crafted import library, as it uses delay loading rather than normal loading. That means a symbol is only resolved to the actual LabVIEW runtime library function when first used, not when your shared library is loaded. But delay-load import libraries are a pretty special thing under Windows, and there are no simple point-and-click tools in Visual Studio to create them.
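
The asymmetry is easy to demonstrate with a (hypothetical) module that references a function it does not define; the ELF linker accepts it, while the Microsoft linker rejects it unless an import library supplies the symbol:

```c
/* plugin.c - references a symbol it does not define itself. */

/* Declared here but expected to be provided by the hosting process,
   in the way LabVIEW exports its manager functions. */
extern int SomeHostFunction(int value);

int PluginEntry(int x)
{
    return SomeHostFunction(x) + 1;
}

/*
 * Linux:   gcc -shared -fPIC -o plugin.so plugin.c
 *          links fine; SomeHostFunction is left to be resolved at load time.
 *
 * Windows: cl /LD plugin.c
 *          fails with an unresolved external symbol error, unless an
 *          import library (e.g. labviewv.lib for LabVIEW manager calls)
 *          is added to the link line.
 */
```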

 

Please note that while I have repeatedly said here that ELF shared libraries are similar to Windows DLLs in these respects, there are indeed quite a few semantic differences, so please don't go around quoting me as having said they are the same.

 

Versioning of ELF shared libraries is theoretically a useful feature, but in practice it's not trivial, since many library developers have their own ideas about how to version their shared libraries. Also, it is not an inherent feature of ELF shared libraries, but is rather based on naming conventions for the resulting shared library, which are then resolved through extra symlinks that create file references for the plain so-name and for the so-name with the major version number. The theory is that the shared library file itself carries a .so.major.minor version suffix, applications generally link against the .so.major symlink name, and any time there is a binary-incompatible interface change the major version is incremented.

But while this is a nice theory, quite a few developers follow it only partially or not at all. In addition, I had trouble getting the shared library recognized by ldconfig on the NI Linux RT targets unless I also created the plain .so name without any version information. I'm not sure why this doesn't seem to be an issue on normal Linux systems, but that could also be a difference caused by different kernel versions; I tend to use an older Ubuntu version for desktop Linux development, which also has an older kernel than what NI Linux RT uses.
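
To make the convention concrete, a versioned ELF library typically looks like this on disk (the library name is hypothetical), and consumers load it by its soname so that ABI-compatible updates are picked up transparently:

```c
/*
 * Hypothetical on-disk layout for a versioned "libfoo":
 *
 *   libfoo.so.1.2   the real file, built with:
 *                   gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2 foo.c
 *   libfoo.so.1     symlink maintained by ldconfig; this soname is what
 *                   the dynamic loader resolves at load time
 *   libfoo.so       symlink used only by the build-time linker (-lfoo)
 *
 * A binary-incompatible change would bump the major number: libfoo.so.2.0.
 */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load by soname (major version only), so a rebuilt but
       ABI-compatible libfoo.so.1.3 would be found transparently. */
    void *lib = dlopen("libfoo.so.1", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(lib);
    return 0;
}
```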
