
Open G Zip Tools on Linux RT



3 hours ago, Jordan Kuehn said:

I can split this off onto another thread, but I copied a deflated string produced by Linux RT and tried to inflate it on Windows and it did not work. I took the original data and deflated/inflated all within Windows just fine. Is there a compatibility issue between the two OS implementations?

We have server applications running on cRIO/sbRIOs (Linux RT) and the clients run on Windows. The TCP-based client-server protocol uses @Rolf Kalbermatter's zip tools to deflate/inflate all the time to speed up the communication in both directions, and it runs without a hitch. I do not have a target to test on right now, but it sounds like something else affected the test you ran. If you have an example of some data deflated/inflated on both targets, that might help. Did you run the same version (which?) on both targets?

On 2/11/2022 at 3:43 PM, Jordan Kuehn said:

I can split this off onto another thread, but I copied a deflated string produced by Linux RT and tried to inflate it on Windows and it did not work. I took the original data and deflated/inflated all within Windows just fine. Is there a compatibility issue between the two OS implementations?

I can't guarantee that there is not some problem somewhere in a function, but I didn't find anything in my testing.

How did you copy the deflated string? As binary data or as a string? If as a string, are you sure your transfer mechanism didn't do some text translation, such as automatic \n to \r\n translation? Did you use the LabVIEW Text File Read and Write functions to write your strings? A deflated stream is not a text string but a byte stream, no matter that LabVIEW lets you display it as a string. That is not a problem for LabVIEW itself, as it does not rely on special characters such as a terminating NULL character. But if you are not careful and use the Text File Write and Read functions in line-conversion mode, your binary stream is of course modified, and that destroys the integrity of the binary information as the inflate algorithm expects it (and checks it with CRCs too).
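To make that failure mode concrete, here is a minimal Python sketch (Python's zlib module wraps the same library; the payload and names are illustrative, not from the OpenG package) showing how a \n to \r\n translation on the byte stream breaks inflation:

    import json, zlib

    # A compressible JSON payload, similar in spirit to an array of clusters.
    payload = json.dumps([{"channel": i, "value": i * 0.5} for i in range(1000)]).encode()
    deflated = zlib.compress(payload)

    # Simulate a text-mode write/read (or a "helpful" clipboard) translating
    # line endings. The deflated bytes are arbitrary binary, so some of them
    # will be 0x0A purely by chance.
    mangled = deflated.replace(b"\n", b"\r\n")

    try:
        zlib.decompress(mangled)
        print("this particular stream contained no 0x0A bytes; try a larger payload")
    except zlib.error as exc:
        print("inflate failed:", exc)  # stream structure and checksum destroyed

    assert zlib.decompress(deflated) == payload  # the untouched stream is fine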

14 minutes ago, Jordan Kuehn said:

I will give it a more thorough test! I copied from one VI indicator to a constant in another application instance. I think. Knowing that it *should* work is already very helpful. Thank you both. 

Hmmm, clipboard copy! That has a very good chance of trying to be smart and doing text reformatting. I would definitely drag the entire control with all its data from one VI to the other, which should avoid Windows trying to be helpful. For a control, LabVIEW puts an application-private format into the clipboard together with an image of the control. LabVIEW itself can pull the private format out of the clipboard; other applications will not understand that format and will pull the image instead.

If you only select the text, LabVIEW will store it as normal ASCII text in the clipboard, and Windows may try to do all kinds of things, including translating it to proper Windows text, which could replace all \n "characters" with \r\n. There is even the chance that the text goes through ASCII to UTF-16 and back to ASCII on the way through the clipboard, and that is not always a fully 100% reversible translation, even though the results may look identical. Text encoding translations are a total PITA to fully understand.

Edited by Rolf Kalbermatter
On 2/12/2022 at 5:39 PM, Rolf Kalbermatter said:

Hmmm, clipboard copy! That has a very good chance of trying to be smart and doing text reformatting. I would definitely drag the entire control with all its data from one VI to the other, which should avoid Windows trying to be helpful. For a control, LabVIEW puts an application-private format into the clipboard together with an image of the control. LabVIEW itself can pull the private format out of the clipboard; other applications will not understand that format and will pull the image instead.

If you only select the text, LabVIEW will store it as normal ASCII text in the clipboard, and Windows may try to do all kinds of things, including translating it to proper Windows text, which could replace all \n "characters" with \r\n. There is even the chance that the text goes through ASCII to UTF-16 and back to ASCII on the way through the clipboard, and that is not always a fully 100% reversible translation, even though the results may look identical. Text encoding translations are a total PITA to fully understand.

So I just tried that without success. I had several screenshots of what I did ready to post, and then I tried it with the expected length provided and it worked just fine. Is this input required? I read the description where you say it will work for up to 94% compression unwired. I'm compressing a JSON string of basically an array of clusters (quite compressible). Would it be disadvantageous to wire a sufficiently large constant to this input rather than bundling the actual expected length with the data? It did also work when I tested with a large input.

My use case here is to reduce bandwidth requirements when transferring JSON-encoded status information via MQTT to a 3rd party system (non-LV). I hope to give them the requirement of inflating via zlib after delivery and then proceeding to use the JSON data as they like.
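For the non-LabVIEW receiver, that requirement is a small one. A minimal Python sketch of the receiving side (the handler name is mine, and it assumes the stream is a standard zlib-wrapped deflate stream):

    import json, zlib

    def handle_mqtt_payload(payload: bytes):
        # Inflate the deflated byte stream, then parse the JSON.
        # If the sender should emit a raw deflate stream instead of a
        # zlib-wrapped one, use zlib.decompress(payload, wbits=-15).
        return json.loads(zlib.decompress(payload))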


Start with something simple, then work from there... Here is how it looks with a simple test:


[Image: simple deflate-inflate test]

 

The deflated string is binary, so the string indicator/control is set to hex display for the deflated input/output...

Yes, if the content compresses too much and the expected length is not included, it can fail... We had an issue where we could not change a protocol to include the length; we "fixed" it (increased the probability of success, that is) by editing the inflate VI so that it would run a few extra buffer allocation rounds. You can do that too...

18 minutes ago, Mads said:

Start with something simple, then work from there... Here is how it looks with a simple test:


[Image: simple deflate-inflate test]

 

The deflated string is binary, so the string indicator/control is set to hex display for the deflated input/output...

Yes, if the content compresses too much and the expected length is not included, it can fail... We had an issue where we could not change a protocol to include the length; we "fixed" it (increased the probability of success, that is) by editing the inflate VI so that it would run a few extra buffer allocation rounds. You can do that too...

My simple test is quite like yours. It failed without the expected length wired and worked with it wired. Thank you for the example, and for the suggestion; I will look into adjusting the inflate VI to automatically run a few extra rounds.

I just provided the big-picture use case in case the additional context sheds some light on what I'm after. I'm definitely working up to that implementation incrementally.

On 2/14/2022 at 6:38 PM, Jordan Kuehn said:

So I just tried that without success. I had several screenshots of what I did ready to post, and then I tried it with the expected length provided and it worked just fine. Is this input required? I read the description where you say it will work for up to 94% compression unwired. I'm compressing a JSON string of basically an array of clusters (quite compressible). Would it be disadvantageous to wire a sufficiently large constant to this input rather than bundling the actual expected length with the data? It did also work when I tested with a large input.

My use case here is to reduce bandwidth requirements when transferring JSON-encoded status information via MQTT to a 3rd party system (non-LV). I hope to give them the requirement of inflating via zlib after delivery and then proceeding to use the JSON data as they like.

Ahhh well! Yes, that was a choice I made at the time. Without a predefined length I have to loop with ever-increasing (doubling every time) buffer sizes to try to inflate the string. Each time I try with a longer buffer, the ZLIB decoder starts filling the buffer until it runs out of buffer space; then I have to increase the space and try again. The comment is actually wrong: it ends up looping 8 times, which results in a buffer 256 times as large as the input. That should still work for data that has been compressed by over 99.6%, actually! The only thing I could think of is to grow the buffer even more aggressively than 2^(x+1), maybe 4^(x+1)? With the current 8 iterations, that would offer an inflated buffer up to 65536 times as large as the input buffer.

In each iteration the ZLIB stream decoder works on more and more bytes, and then, if the buffer is too small, everything is thrown away and started over again. That is a really performance-intensive operation, and I also do not want to loop indefinitely: there is always the chance that corrupted bits in the stream throw the decoder off in a way that never terminates, and then your application loops until it eventually runs out of memory, which is a pretty hard crash in LabVIEW.

So if you know that your data is going to be very compressible, you have to do your own calculation and specify a starting buffer size that is big enough. If you do this over the network, I would recommend prepending the uncompressed size to the stream anyway. That really helps to not destroy the performance gain you were trying to achieve with the ZLIB compression in the first place.
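A rough Python sketch of both strategies described above; this is an illustration of the approach, not the actual VI's code, and it assumes a standard zlib-wrapped stream (the frame/unframe names and the 4-byte big-endian length prefix are arbitrary choices):

    import struct, zlib

    def inflate_growing(deflated: bytes, start_size: int = 0, max_rounds: int = 8) -> bytes:
        # Retry with a doubling output buffer. Each failed round throws the
        # partial output away and starts over, which is why this is costly.
        size = start_size or max(2 * len(deflated), 1024)
        for _ in range(max_rounds):
            d = zlib.decompressobj()
            out = d.decompress(deflated, size)  # produce at most 'size' bytes
            if d.eof:                           # the whole stream fit
                return out
            size *= 2                           # too small: double and retry
        raise ValueError("output did not fit after %d rounds" % max_rounds)

    def frame(payload: bytes) -> bytes:
        # Prepend the uncompressed size so the receiver can allocate once.
        return struct.pack(">I", len(payload)) + zlib.compress(payload)

    def unframe(message: bytes) -> bytes:
        (size,) = struct.unpack(">I", message[:4])
        return zlib.decompress(message[4:], bufsize=size)

The length prefix costs four bytes on the wire and turns the inflate into a single allocation.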

Edited by Rolf Kalbermatter
  • 10 months later...

@Jordan Kuehn, have you been able to get this listed as an option in the software install list, or did you use another channel to get the package onto the device?

I assumed you could add a directory of ipk files as a local feed, and that that would allow you to see it as an option in the software install menu of the RT image, but I could not get that to work... If you did, how did you do that?

I tried adding a local folder as a feed... but the package did not show up in the list of available software. And if this is the way we are supposed to go, is there a way to make one package with support for, e.g., both ARM and x64, or would you always need separate ipk files?

3 hours ago, Mads said:

I assumed you could add a directory of ipk files as a local feed, and that that would allow you to see it as an option in the software install menu of the RT image....

I tried adding a local folder as a feed... but the package did not show up in the list of available software.

The feed needs to contain a Packages.gz file (which is a gzip'ed copy of a Packages file) that describes the available packages.

  • You can see a sample at http://download.ni.com/#ni-linux-rt/feeds/2022Q4/x64/main/x64/
  • You can use NI Package Manager to generate this for you! Put all your *.ipk files in a folder on Windows, then use Command Prompt/PowerShell to cd into that folder and call:
    "C:\Program Files\National Instruments\NI Package Manager\nipkg.exe" feed-create .

 

I don't think you can just use a local folder as a feed -- AFAIK, opkg can only retrieve feeds from a web server: https://readthedocs.web.cern.ch/display/MTA/[NILRT]+How+to+create+a+local+feed+for+Linux+RT
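If that is the case, one possible workaround is to serve the feed folder over HTTP from the development machine and point the target at it (the feed name, port, and the <dev-ip>/<package> placeholders below are illustrative):

    # On the development machine, in the folder with the .ipk files and Packages.gz:
    python -m http.server 8000

    # On the Linux RT target (SSH shell):
    echo 'src/gz localfeed http://<dev-ip>:8000' > /etc/opkg/localfeed.conf
    opkg update
    opkg install <package>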

 

Here are my brief notes on how to install an *.ipk on a Linux RT system: https://jksh.github.io/LQ-Bindings/setup-nilrt.html (this page shows 3 ways: Adding a feed using NI MAX, adding a feed via an SSH console, or installing the *.ipk directly without a feed)

 

3 hours ago, Mads said:

is there a way to make one package with support for, e.g., both ARM and x64, or would you always need separate ipk files?

Not if your package contains compiled code. Each package's control file (and its corresponding entry in the Packages file) must specify the supported Architecture (e.g. "x64"). opkg/NI MAX/SystemLink will only show the packages that are compatible with your device architecture.

If the package is architecture-independent (e.g. if it installs TLS certificates or documentation), then you can specify "any" as the Architecture.
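As a small illustration (the names and versions are hypothetical), the difference is just the Architecture field in each package's control file:

    Package: my-driver
    Version: 1.0.0
    Architecture: x64

    Package: my-docs
    Version: 1.0.0
    Architecture: any

The first needs a separate .ipk per target architecture; the second, containing no compiled code, installs anywhere.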

 

Edited by JKSH
