
Jordan Kuehn


Posts posted by Jordan Kuehn

1. I use this on cRIO with the System Exec VI:

    timeout 0.1 ping -c1 127.0.0.1

Replace the 0.1 with the timeout you want (in seconds) and the 127.0.0.1 with whatever IP address or hostname you want. I use this to determine whether I want to attempt to open a shared variable connection to an expansion chassis, since the timeouts on that API do not work.
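A rough Python analog of what that System Exec call does (the helper names here are mine, not part of any NI API): build the one-shot ping command with a hard timeout, run it, and treat exit code 0 as reachable.

```python
import shlex
import subprocess

def build_ping_cmd(host, timeout_s=0.1):
    # Same shape as the command above: `timeout` kills ping after timeout_s
    # seconds, and `-c1` sends a single packet.
    return f"timeout {timeout_s} ping -c1 {shlex.quote(host)}"

def host_reachable(host, timeout_s=0.1):
    # ping exits 0 only if the host answered within the timeout.
    result = subprocess.run(build_ping_cmd(host, timeout_s),
                            shell=True, capture_output=True)
    return result.returncode == 0
```

On a cRIO the same command string is simply wired into System Exec; its exit-code output plays the role of `returncode` here.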

  2. 18 minutes ago, Mads said:

Start with something simple, then work from there... Here is an example of how it looks with a simple test:


[screenshot: simple deflate-inflate test]

     

    The deflated string is binary so the string indicator/control is set to hex for the deflated input/output...

If the content compresses too well and the expected length is not included, it can fail, yes. We had an issue with that where we could not change a protocol to include the length; we "fixed" it (increased the probability of success, that is) by editing the inflate VI so that it would run a few extra buffer allocation rounds. You can do that too...

My simple test is quite like yours: it failed without the expected length wired and worked with it wired. Thank you for the example; I will look into adjusting the inflate VI to auto-run a few more rounds as you suggest.

I just provided the big-picture use case in case the additional context might shed some light on what I'm after. I'm definitely working up incrementally to using it in that implementation.
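For anyone following along outside LabVIEW, the extra-rounds fix Mads describes can be sketched with Python's zlib, whose `decompressobj` supports the same grow-until-the-stream-ends loop; the function name and round limit are illustrative, not the OpenG implementation:

```python
import zlib

def inflate_in_rounds(deflated, chunk=1024, max_rounds=16):
    """Inflate by repeatedly asking for at most `chunk` more output bytes,
    the analogue of adding extra buffer allocation rounds to the inflate VI
    when the final decompressed size is unknown."""
    d = zlib.decompressobj()
    out = bytearray()
    data = deflated
    for _ in range(max_rounds):
        out += d.decompress(data, chunk)   # produce at most `chunk` bytes
        if d.eof:                          # end of the compressed stream
            return bytes(out)
        data = d.unconsumed_tail           # input not yet processed
    raise RuntimeError("ran out of allocation rounds; raise max_rounds")
```

Wiring the expected length in the VI corresponds to picking `chunk` large enough that a single round suffices.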

  3. On 2/12/2022 at 5:39 PM, Rolf Kalbermatter said:

Hmmm, clipboard copy! That has a very good chance of trying to be smart and reformatting the text. I would definitely drag the entire control with all the data from one VI to the other, which should avoid Windows trying to be helpful. As a control, LabVIEW puts it in an application-private format in the clipboard together with an image of the control. LabVIEW itself can pull the private format out of the clipboard; other applications will not understand that format and pull the image instead.

If you only select the text, LabVIEW will store it as normal ASCII text in the clipboard, and Windows may try to do all kinds of things, including translating it to proper Windows text, which could replace all \r "characters" with \r\n. There is even the chance that the text goes through ASCII to UTF-16 and back to ASCII on the way through the clipboard, and that is not always a fully 100% round-trip translation, even though the results may look optically the same. Text encoding translations are a total pain to fully understand.

So I just tried that without success. I had several screenshots ready to post of what I did, and then I tried it with the expected length provided and it worked just fine. Is this input required? I read the description where you say it will work for up to 94% compression unwired. I'm compressing a JSON string of basically an array of clusters (quite compressible). Would it be disadvantageous to wire a sufficiently large constant to this input rather than bundling the actual expected output length with the data? It also worked when I tested with a large input.

My use case here is to reduce bandwidth requirements when transferring JSON-encoded status information via MQTT to a 3rd-party (non-LabVIEW) system. I hope to give them the requirement of inflating via zlib after delivery and then proceeding to use the JSON data as they like.
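As a sanity check on that plan, here is a small Python sketch (the status payload is made up) showing how well a repetitive JSON array of clusters deflates, and that a plain zlib binding on the receiving end needs no expected length at all, since it grows its own output buffer:

```python
import json
import zlib

# Hypothetical status payload: an array of identically shaped "clusters",
# standing in for the JSON status being streamed over MQTT.
status = [{"channel": i, "value": 1.25 * i, "alarm": False} for i in range(200)]
payload = json.dumps(status).encode()

deflated = zlib.compress(payload, level=9)
print(f"{len(payload)} -> {len(deflated)} bytes "
      f"({100 * len(deflated) / len(payload):.0f}% of original)")

# The receiver (any zlib binding) only needs the compressed bytes;
# zlib.decompress grows its buffer as needed, no expected length required.
assert zlib.decompress(deflated) == payload
```

The repeated key names are exactly the kind of redundancy deflate thrives on, which is why arrays of clusters compress so well.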

  4. 1 hour ago, Rolf Kalbermatter said:

LabVIEW Real-Time support only exists in the never officially released OpenG ZIP Library 4.1. That package is only available as a download from earlier in this discussion thread here on LavaG, and over at the NI forum I believe.

     

Got it! I got the files installed and put them in a package build spec in the project explorer. I configured the source file to go where you said and wrote a simple script to run ldconfig as a post-install action. See screenshots below. I installed the package and ran the same test code that had errored on deployment, and it worked great this time; no manual moving of libraries on my end. Package attached. Thanks for all the help!

[screenshots: package build spec and post-install configuration]

Attachment: openg-zip_1.0.0-2_x64.ipk

  5. 4 hours ago, Rolf Kalbermatter said:

    Well the OpenG ZIP tools don't really represent a lot of elements on the realtime. Basically you have the shared library itself called liblvzlib.so which should go into /usr/local/lib/liblvzlib.so and then you need to somehow make sure to run ldconfig so that it adds this new shared library to the ldcache file.

    When you install the Beta version of the ZIP Tools package you should get a prompt at some point for administrative login credentials (or an elevation dialog if you are already logged in as administrator) which is caused by the ogsetup.exe program being launched as PostInstall hook of the OpenG package.

This ogsetup.exe program does nothing more than extract the different shared libraries into C:\Program Files (x86)\National Instruments\RT Images\OpenG ZIP Tools\4.2.0.

Depending on your target you need to copy the matching liblvzlib.so from either the LinuxRT_arm or LinuxRT_x64 subdirectory to /usr/local/lib/liblvzlib.so on your target. That should be all that is needed, although sometimes it can be necessary to also run ldconfig on a command line to have the new shared library added to the ldcache for the ELF loader. With the old installation method in NI-MAX this was taken care of by the installer, based on the corresponding *.cdf file in the OpenG ZIP Tools\4.2.0 directory.

I tried to check out the NI Package Builder but can't see how one would make a package for RT targets. I also only see the 20.5 and 20.6 versions of the NI Package Builder as the latest; maybe that is why?

I would be happy to see if I can build a package to share, or at the very least move the files over to get it working in my application. However, I do not see that directory on my machine. I did reinstall while running VIPM in admin mode, but still nothing.

[screenshot]

     

I opened the VIPM package via WinRAR and looked around inside; this is what my post-install VI looks like (I'm on Windows):

[screenshots: post-install VI block diagram]

  6. 3 hours ago, Rolf Kalbermatter said:

So is there a problem with installing the OpenG ZIP tools for LabVIEW 2020 and/or 2021 on a cRIO at all, or was that inquiry from Jordan something else? I cannot place packages on the NI opkg feeds, so yes, this library will always have to be sideloaded through NI-MAX in some way, unless you want to copy it over yourself by hand into the right directory, create the necessary symlinks on a command line, and then run ldconfig. 😀

I do not see a way to do this in 2021. Perhaps it is because I have the Linux RT image installed and not a custom image to begin with, but right now, even in MAX, I only see options to configure feeds and to install packages from those feeds. If a package were available, or I could build my own (I'm not sure of all the details for this library's installation), I could put it in my project and install it as a dependency without it being in the official NI feed.
     

    Am I missing something here? I remember the old way of installing software, but I do not see where that is available anymore. I’ll grab some screenshots when I get to my desk if that would help. 
     

Edit// I think it was like this in 2020 as well, now that I'm looking back. I believe it's the Linux RT image that switches the software installation dialog. I can play with that a little later today to see if I can get it working like Mads described. But that would not work long term for me, since all of my development utilizes packages (and SystemLink) now.

    Edit 2// Here is what I see when selecting the base system image to use. I believe if I go with the "Custom Software Installation" it will give the old method back, but that is described as "Legacy".

[screenshot: base system image selection dialog]

     

    And as you can see here, I only have the ability to add packages via the configured feeds. Some NI feeds, some my own.

[screenshot: package feeds configured in MAX]

  7. On 10/12/2017 at 4:59 AM, Rolf Kalbermatter said:

Yes, you need to install the shared library too. If you run VIPM to install the package you should have gotten a prompt somewhere along the install asking you to allow installation of an additional setup program. This installs the NI Realtime extensions for the LVZIP library.

    After that you need to go into NI MAX and go to your target and select to install additional software. Deselect the option to only show recommended modules and then there should be a module for the OpenG ZIP library. Install it and the shared library should be on your controller.

Is this still true in LV2021 with Linux RT? I am getting the same error, but the software installer has changed in the current MAX and utilizes packages. In opkg I see a few Perl and Python zlib packages available, but not lvzlib.

  8. 1 hour ago, flarn2006 said:
• Sometimes, often when I drag something on the block diagram, wires will suddenly move to illogical locations. One place where I've noticed this often is with tunnels on the bottom edge of a structure, where the wire will suddenly arrange itself so it connects from the left instead (while the tunnel remains on the bottom).

Perhaps this is the same issue, which is a bug per AQ's reply:

    https://forums.ni.com/t5/LabVIEW-Idea-Exchange/LV2021-Deactivate-Wire-Auto-Routing/idi-p/4183557

  9. 2 hours ago, brian said:

    I was looking at https://www.ni.com/en-us/events/niconnect.html

    where it says:

     

    Elsewhere, it hints that there could be external presentations, but my guess is they'll be industry-focused (and apparently with invited speakers only).

    Anybody heard more about it?  Any thoughts on this?

    Thanks for the link and bringing it to our attention! I focused on this part (my emphasis added):

     

    Quote

     

  10. 4 minutes ago, flarn2006 said:

    When is this necessary? FPGA I assume?

FPGA certainly. I have a use for the code I posted when, say, thresholding a value and wanting to ensure that it has exceeded that threshold for a period. That value could be anything: a plain signal or its derivatives, a float switch on a digital input, etc. At high rates in an FPGA you'd normally use it for, say, a mechanical switch that makes intermittent contact rapidly as the contacts first come together. I'm sure there are more examples!

11. That certainly fulfills the debounce nature in a purer manner, debouncing both low and high. Mine is more of a conditional latch with an optional single-pulse output or latched-high output. I think you got my point, though, about using a counter. As far as it being pretty or not, I don't know that I'd ever look at the BD again after finishing testing. The counts can be adjusted based on where you use it; certainly sample rate will be a factor, but also expected noise/bounce vs. desired responsiveness.
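The counter idea both posts describe can be sketched outside LabVIEW as well. This is a generic counter-based debounce in Python (class name mine): the output changes only after the input has held its new state for `count` consecutive samples, which is the essence of what the FPGA loop does each tick.

```python
class DebounceCounter:
    """Counter-based debounce: the output only changes after the input has
    held the new state for `count` consecutive samples (e.g. FPGA ticks)."""

    def __init__(self, count, initial=False):
        self.count = count
        self.state = initial
        self._run = 0

    def sample(self, value):
        if value == self.state:
            self._run = 0            # input agrees with output; reset the run
        else:
            self._run += 1           # input disagrees; count the run
            if self._run >= self.count:
                self.state = value   # held long enough: latch the new state
                self._run = 0
        return self.state
```

With `count=3`, a noisy sequence like 0,1,0,1,1,1 only flips the output on the third consecutive 1; the isolated bounces never get through.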

  12. 6 hours ago, Mads said:

    We normally just make the executable reboot the cRIO/sbRIO it runs on instead, through the system configuration function nisyscfg.lvlib:Restart.vi, but here are two discussions on killing and restarting just the rtexe on LinuxRT:

    https://forums.ni.com/t5/NI-Linux-Real-Time-Discussions/Launching-startup-rtxe-from-terminal-or-linux-window-manager/td-p/3457415

    https://forums.ni.com/t5/NI-Linux-Real-Time-Discussions/Is-it-possible-to-close-and-re-open-RTEXE-through-Embedded-UI/td-p/3707540

     

     

Oh, that's perfect. Just like the OP on that post, it's the reboot time that's the issue for me. This line is what I was missing:

     

    /etc/init.d/nilvrt stop && /etc/init.d/nilvrt start

     

    I’ll give this a try. Thank you. 

  13. 21 hours ago, Rolf Kalbermatter said:

No! An rtexe is not a real executable. It is more like a ZIP archive that cannot be started by itself; it needs to be started by invoking the runtime engine and passing it the rtexe as a parameter. And the exact mechanism is fairly obscure, not well researched, and totally undocumented, unless you are a Linux kernel hacker who knows how to investigate the run-level initialization and how the LabVIEW rtexe mechanism is added in there.

    Rolf, I've been looking for this information myself. Not quite in this use case as requested, but simply to restart the application. Do you have any reference for simply restarting the runtime engine and relaunching the configured rtexe? SystemLink is capable of doing this, but I haven't managed to figure out how yet.

14. So, I think you have pulled it apart fairly well in your summary. I believe the issue with the regular Write function is that it can fragment the data, and it builds the index on the file to take care of this. That, combined with flushing, segmenting file writes, and defragmenting after completion, will address it for many use cases. The waveform issue is that, first, the Advanced Write won't take the data type due to the properties attached to it; that's all a waveform is, like you said: an array of doubles, some standard components (t0, dt), and possibly some variants. Second, even if you were to write an array of doubles using the standard Write VI, it is not as performant. When using the Advanced VI you specify the block sizes and it streams the data to disk exactly as written. (I'm sure there's a little more complexity at the C level here.) So you must write the same size every time, but it is quite fast and does not leak memory.
     

So I see a space here where, in general, the Advanced TDMS functions could be chosen given the condition that subsequent writes follow the same size as the first write (allowing the library to read that and perform the configuration), and, going further, the library could automatically unbundle a waveform type to package the properties up and write the array.
     

It's a thought, and something I've encountered a few handfuls of times over the years; it's a pain every time.
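The same-size-every-write constraint is easy to state as code. This generic Python sketch writes fixed-size float64 blocks to a plain binary file (not TDMS; the class name is hypothetical) and rejects anything that does not match the configured block size, which is essentially the precondition proposed above:

```python
import struct

class FixedBlockWriter:
    """Sketch of the constraint the Advanced TDMS API imposes: configure a
    block size once, then every subsequent write must match it exactly."""

    def __init__(self, path, block_size):
        self.block_size = block_size
        self._f = open(path, "wb")
        # Precompile the little-endian float64 layout for one block.
        self._pack = struct.Struct(f"<{block_size}d").pack

    def write(self, samples):
        if len(samples) != self.block_size:
            raise ValueError(
                f"expected {self.block_size} samples, got {len(samples)}")
        # Stream the raw block straight to disk, no re-indexing.
        self._f.write(self._pack(*samples))

    def close(self):
        self._f.close()
```

A real adaptation would also carry the unbundled waveform properties (t0, dt) alongside the sample blocks, as described above.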

15. Hooovah, I appreciate this toolkit and the work you've done to make it. I have a common problem that I run into and eventually just have to bite the bullet and roll my own solution. When streaming large datasets to disk I have to use the TDMS Advanced VIs to avoid a memory leak. It is even worse with waveforms: though I would like to write those directly, you can't with the Advanced VIs. So I wind up stripping the t0 and dt off and saving them as waveform components, flushing the file to apply them, configuring block sizes, etc. Could this library be adapted to use the more performant VIs, with some preconditions (say, that all subsequent writes must be identical in size/composition), so that I can stream waveforms to disk? I attempted to use your size-based file writer and ran into the same memory leaks I encountered when using the regular TDMS functions, described here.

