Community Answers
-
Rolf Kalbermatter's post in labview on DragonBoard 410c was marked as the answer
LabVIEW can in principle create ARM code, for cross-compilation to the ARM-based NI RT RIO devices. But that doesn't easily carry over to other ARM targets. For one, it must be an ARM Cortex-A7 compatible device. You also need the LabVIEW runtime library for NI Linux RT, which is technically not trivial to get running on a different target, and legally you need to buy a runtime license from NI to be allowed to do that. Also, it doesn't use Windows at all but the NI Linux RT OS, which you would have to port to that board too.
Supposedly the guys from TSExperts are working on a version of their cross-compilation toolchain that is supposed to work for the Raspberry Pi, which is also an ARM-based embedded board. I have no idea how they create code from LabVIEW for those targets, but I would assume they make use of the LabVIEW C Code Generator module, which has a hefty price tag. What their license deal with NI might be I also have no idea, but I don't expect this to be standard procedure.
So in conclusion, it is not a clear no as tst put it, but for most applications it is still not a feasible thing to attempt.
To the OP: The Windows 10 version running on the DragonBoard is not the normal Windows version used on your desktop computer but the Windows RT kernel, which is also used for the Windows Mobile platform. This is a Windows version built around .Net technology; it does not provide the Win32 API, only the .Net API. It is also typically not compiled for the x86 CPU but for a RISC-based architecture like ARM. LabVIEW for Windows definitely can't run on this and never will, since it interfaces to the Win32 API and is compiled for the x86 CPU.
-
Rolf Kalbermatter's post in Package manager and different target platforms was marked as the answer
There is no direct way to install binary modules for NI RT targets from the package manager interface. Those binary modules currently need to be installed through the Add Software option in MAX for the respective target.
One way I found that does work, and which I have used for the OpenG ZIP Toolkit, is to install the actual .cdf files and binary modules into the "Program Files (x86)\National Instruments\RT Images" directory. Unfortunately this directory is protected and only accessible with elevated rights, which the package manager does not have by default. Instead of requiring VIPM to be started with administrative rights to allow copying the files into that directory, I created a setup program with InnoSetup that requests administrative access rights from the user on launch. This setup program is then included in the VI package and launched during package installation through a post-install VI hook.
You can have a look at the OpenG ZIP Toolkit sources on the OpenG Toolkit page on SourceForge to see how this all can be done. It's not trivial and not easy, but it is a workable option.
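For illustration, one way such a post-install hook can hand off to an elevated helper is the ShellExecute "runas" verb; a minimal C sketch, where the helper's name is made up (an InnoSetup-built setup can equally request elevation on its own through PrivilegesRequired=admin in its manifest):

```c
#include <windows.h>
#include <shellapi.h>

/* Hedged sketch: launch a helper installer with a UAC elevation prompt.
   "rtimages_setup.exe" is a hypothetical name; an InnoSetup-built installer
   can also request elevation itself via its manifest. */
int launch_elevated_setup(void)
{
    HINSTANCE res = ShellExecuteA(
        NULL,
        "runas",               /* verb that triggers the UAC prompt */
        "rtimages_setup.exe",  /* hypothetical helper that copies the .cdf
                                  files into ...\National Instruments\RT Images */
        NULL, NULL, SW_SHOWNORMAL);
    return ((INT_PTR)res > 32) ? 0 : -1;  /* ShellExecute returns >32 on success */
}
```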
-
Rolf Kalbermatter's post in Open G Zip Tools on Linux RT was marked as the answer
I have created a new package with an updated version of the OpenG ZIP library. The VI interface should have remained the same as in previous versions.
The bigger changes are under the hood. I updated the C code for the shared library to the latest zlib sources, version 1.2.8, and made a few other changes to the way the refnums are handled in order to support 64-bit targets.
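As background, a common reason refnum code needs rework on 64-bit targets is that a pointer no longer fits into a 32-bit refnum. One generic pattern, purely illustrative and not necessarily what this library does, is to hand out an index into a pointer table instead of a cast pointer:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: map 32-bit refnums to native pointers through a table,
   so the refnum stays 32 bits wide regardless of the pointer size. */
#define MAX_REFNUMS 256
static void *g_refTable[MAX_REFNUMS];

int32_t refnum_alloc(void *ptr)
{
    for (int32_t i = 0; i < MAX_REFNUMS; i++) {
        if (g_refTable[i] == NULL) {
            g_refTable[i] = ptr;
            return i + 1;        /* 0 stays reserved as "invalid refnum" */
        }
    }
    return 0;                    /* table full */
}

void *refnum_lookup(int32_t refnum)
{
    if (refnum < 1 || refnum > MAX_REFNUMS)
        return NULL;
    return g_refTable[refnum - 1];
}
```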
Another significant change is the added support for NI Realtime targets. This was already sort of present for Pharlap and VxWorks targets, but in this version all current NI Realtime targets should be supported. When the OpenG package is installed into a LabVIEW 32-bit for Windows installation, an additional setup program is started during the installation to copy the shared libraries for the different targets to the realtime image folder. This setup will normally cause a password prompt for an administrative account even if the current account already has local administrator rights, although in that case it may just ask whether you really want to allow the program to make changes to the system, without requiring a password. The setup program is only started when the target is a 32-bit LabVIEW installation, since so far only 32-bit LabVIEW supports realtime development.
After the installation has finished, you should be able to go to the actual target in MAX and choose to install new software. Select the option "Custom software installation", and in the resulting utility find "OpenG ZIP Tools 4.1.0" and let it install the necessary shared library to your target.
This is a preliminary package and I have not been able to test everything. What should work:
Development systems: LabVIEW for Windows 32-bit and 64-bit, LabVIEW for Linux 32-bit and 64-bit
Realtime targets: NI Pharlap ETS, NI VxWorks and NI Linux Realtime targets
Of these, I haven't been able to test Linux 64-bit at all, nor the NI Pharlap and NI Linux RT for x86 (cRIO-903x) targets.
If you happen to install it on any of these systems, I would be glad if you could report any success. If there are any problems I would like to hear about them too.
Todo:
In a following version I want to try to add support for character translation of filenames and comments inside the archive if they contain characters outside the 7-bit ASCII range. Currently, characters outside that range all get messed up.
Edit (4/10/2015):
Replaced package with B2 revision which fixes a bug in the installation files for the cRIO-903x targets.
oglib_lvzip-4.1.0-b2.ogp
-
Rolf Kalbermatter's post in Reading the string output of a DLL was marked as the answer
LabVIEW takes the specification you set in the Call Library Node pretty literally. For a C string, this means that it parses the string buffer on the right side of the node (if connected) for a 0 termination character and then converts this string into a LabVIEW string. For a Pascal string, it interprets the first byte in the string as the length and then assumes that the rest of the buffer contains that many characters (although I would hope it bounds this by the buffer size that was passed in on the left side).
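In memory the two flavors look like this (a minimal C illustration):

```c
/* The two string layouts the Call Library Node can parse, side by side: */
char          cstr[] = { 'T', 'e', 's', 't', '\0' };  /* C string: 0-terminated        */
unsigned char pstr[] = { 4, 'T', 'e', 's', 't' };     /* Pascal string: length byte first */
```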
Since your "String" contains embedded 0 bytes, you cannot let LabVIEW treat it as a string; instead you have to tell it to treat it as binary data. A binary string is simply an array of bytes (or, in this specific case, possibly an array of uInt16), and since it is a C pointer, you have to pass the array as an Array Data Pointer. You have to make sure to allocate the array to a size big enough for the function to fill in its data (and probably pass that size in pSize so the function knows how big the usable buffer is), and on return resize the array yourself to the size returned in pSize.
And you of course have to make sure that you treat pSize correctly. It is likely the number of characters, so if this is a UTF-16 string it would equal the number of uInt16 elements in the array (if you use a byte array on the LabVIEW side instead, the size in LabVIEW bytes would likely be double what the function considers the size). But note the "likely" above! Your DLL programmer is free to require a minimum buffer size on entry and ignore pSize altogether, or to treat pSize as a number of bytes, or even a number of apples if he likes. This information must be documented in the function documentation in prose and cannot be expressed in the header file in any way.
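In C terms, the pattern described above looks roughly like this; the function name, return value and the exact unit of pSize are assumptions that only the DLL's documentation can confirm:

```c
#include <stdint.h>

/* Hypothetical prototype matching the pattern described above. */
int GetName(uint16_t *buffer, int32_t *pSize);

void example(void)
{
    int32_t  size = 256;  /* buffer capacity on entry (assumed convention)   */
    uint16_t buf[256];    /* caller-allocated, just as LabVIEW must pre-size
                             the array wired to the Array Data Pointer       */
    if (GetName(buf, &size) == 0) {
        /* only the first 'size' elements are valid now; in LabVIEW you
           would resize the array to 'size' after the call */
    }
}
```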
Last but not least, you will need to convert the UTF-16 characters to a LabVIEW MBCS string. If you have treated the data as a uInt16 array, you can scan the array for values higher than 127; those need to be treated specially. If your array only contains values up to and including 127, you can simply convert each element to a U8 byte and then convert the resulting byte array to a LabVIEW string. And yes, values above 127 are not directly translatable to ASCII. There are special translation tables that can get pretty involved, especially since they depend on your current ANSI codepage. The best approach would be the Windows API WideCharToMultiByte(), but that is also not a trivial API to invoke through the Call Library Node. On the dark side you can find some more information here about possible solutions to do this properly.
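If you do go the WideCharToMultiByte() route, the usual two-pass call pattern looks like this in C (a sketch of the standard Win32 usage, not LabVIEW-specific code):

```c
#include <windows.h>
#include <stdlib.h>

/* Convert a UTF-16 buffer of 'len' characters to the current ANSI codepage.
   Returns a malloc'ed, 0-terminated string the caller must free(). */
char *wide_to_ansi(const wchar_t *wstr, int len)
{
    /* first pass: ask for the required output buffer size */
    int size = WideCharToMultiByte(CP_ACP, 0, wstr, len, NULL, 0, NULL, NULL);
    if (size <= 0)
        return NULL;

    char *str = (char *)malloc(size + 1);
    if (!str)
        return NULL;

    /* second pass: do the actual conversion */
    WideCharToMultiByte(CP_ACP, 0, wstr, len, str, size, NULL, NULL);
    str[size] = '\0';
    return str;
}
```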
The crashing is pretty normal. If you use the Call Library Node and tell LabVIEW to pass in a certain datatype or buffer while the underlying DLL expects something else, there is really nothing LabVIEW can do to protect you from memory corruption.
-
Rolf Kalbermatter's post in Pointer Problem dll JTAG Macraigor Systems was marked as the answer
Well! If you add a call to FlashErrorText() after each failed function call, you will find that it first reports an error after FlashSetupAndConnect():
then another after FlashErase():
which is logical, since the SetupAndConnect call had already failed.
So what does this tell us?
The flashaccess.dll attempts to find the file cpu.ini in the directory of the current executable.
Unless there is a way to tell the DLL in the .ocd file to look for this elsewhere, you may be required to put this file in the directory where your LabVIEW.exe resides (and if you build an executable, also in the directory of that executable). Frankly, it is a bit stupid of the DLL to look for this only in the executable directory and not at least also in the DLL's own directory, but alas, such is life.
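For what it's worth, the typical C-level cause of this behavior is a NULL module handle when building the path; a sketch of the difference (not the actual flashaccess.dll code):

```c
#include <windows.h>

static HMODULE g_hDll;  /* typically saved in DllMain(DLL_PROCESS_ATTACH) */

void locate_ini(void)
{
    char path[MAX_PATH];

    /* NULL module handle -> path of the process EXE (e.g. ...\LabVIEW.exe).
       This is effectively what the DLL appears to do, which is why cpu.ini
       must sit next to the executable. */
    GetModuleFileNameA(NULL, path, MAX_PATH);

    /* Passing the DLL's own handle would return the DLL's path instead,
       letting it find cpu.ini in its own directory. */
    GetModuleFileNameA(g_hDll, path, MAX_PATH);
}
```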
-
Rolf Kalbermatter's post in OpenG Read and Write Panel to INI does not work with unnamed cluster elements was marked as the answer
It's debatable whether this should work at all. But the quickest solution would be to allow for a small change in the "Write Key (Variant).vi" and "Read Key (Variant).vi" in the Cluster case, similar to this:
-
Rolf Kalbermatter's post in Can Functional Globals be used to share data between VIs running on different targets? was marked as the answer
They of course can't do that out of the box. A VI in one application context shares absolutely no data with the same VI in another application context. That is true even if you run the FGV in two separate projects on the same computer, and even more so if they run on different devices or in different LabVIEW versions. As Jordan suggested, you will need to implement real interapplication communication here, either through network variables (the easiest option once you get the hang of configuring them and deploying them to the targets) or through your own network communication interface (my personal preference in most cases).
There are also reference design projects for cRIO applications, such as the CVT (Current Value Table) and the accompanying CCC (CVT Client Communication), that show a possible approach for implementing the second solution.
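To make the second option concrete, here is a minimal sketch of what the host side of such a custom interface could look like, written as a POSIX C client. The port number and the length-prefixed framing are invented for illustration; on the LabVIEW side this maps onto the TCP Open/Write/Read primitives:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal host-side client for a hypothetical custom TCP interface on a
   cRIO target. Sends one double as a length-prefixed binary message. */
int send_value(const char *target_ip, double value)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5555);               /* arbitrary example port */
    inet_pton(AF_INET, target_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }

    /* length prefix in network byte order: a simple, common framing choice */
    uint32_t len = htonl((uint32_t)sizeof value);
    send(fd, &len, sizeof len, 0);
    send(fd, &value, sizeof value, 0);
    close(fd);
    return 0;
}
```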