Mads last won the day on December 20 2018

About Mads

  • Rank
    Extremely Active
  • Birthday 12/01/1975

Profile Information

  • Location
    Bergen, Norway
  • Interests
    Trail running, skiing, science fiction, food and travel.

LabVIEW Information

  • Version
    LabVIEW 2018

  1. The checksum (well, CRC to be correct) will be generated by the same software that generates the archive in this case, and is then run through tests locally to ensure it is OK. So I feel confident in trusting the content from that point onwards if the CRC is OK and the structure of the content is recognisable. It is the transfer in this case that is highly exposed to corruption (it involves several weak protocols and complex layers which I cannot change, or at least not all of them at this stage) 😲
  2. The project is on an sbRIO running Linux RT, which is partly why I preferred using the OpenG library. (The device delivering the zip files to the sbRIO gives it an *extremely* short time to reply on whether the data is OK or not, so eliminating slow file operations is a must... With the correct checksum in the added header, I now run the CRC32 calculation continuously on the incoming data, which enables me to verify the transfer instantly 🙂. A file size in the header also allows me to preallocate the file space up front, or deny the transfer at startup if there is not enough space for it anyway 👍)
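The streaming-verification idea described here can be sketched outside LabVIEW. Below is a minimal Python illustration (the function name is made up, this is not OpenG code) of updating a CRC-32 incrementally as chunks arrive, so the archive is verified the instant the last chunk lands, with no extra pass over the file:

```python
import zlib

def crc32_stream(chunks):
    """Update a CRC-32 incrementally as data chunks arrive, so the
    transfer can be verified the moment the last chunk is received."""
    crc = 0
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)  # feed previous CRC back in
    return crc & 0xFFFFFFFF

# The incrementally computed CRC equals the whole-file CRC.
data = b"payload from the zip archive"
expected = zlib.crc32(data) & 0xFFFFFFFF
chunks = [data[i:i + 8] for i in range(0, len(data), 8)]
assert crc32_stream(chunks) == expected
```

The same trick works with any rolling checksum: compare the running value against the CRC carried in the added header as soon as the byte count in the header is reached.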
  3. Thanks for the feedback. On the project I'm working on I decided to just add a header to the zip files; I am managing the transfer of the files and can simply strip that header off again. At first I wrote code to parse the headers of the zip archive and grab the necessary checksums, but dropped that approach when I saw that the file checksums are calculated on the uncompressed data, meaning I would need to decompress the files first just to figure out whether the data was wrong. A better alternative would be the central directory checksum, I guess, but I went for an even easier solution in this case as the added header gave me some extra benefits. It would be nice to have file verification in the OpenG Zip library someday, though.
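For reference, the per-file CRCs mentioned above can be read straight out of the zip central directory without decompressing anything; it is *verifying* them against the data that costs a decompression pass. A small Python sketch (the helper name is made up for illustration):

```python
import io
import zipfile
import zlib

def central_directory_crcs(zip_bytes):
    """Read each member's stored CRC-32 from the central directory.
    No decompression is needed to *read* the CRCs; checking them
    against the actual data is what requires decompressing."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return {info.filename: info.CRC for info in zf.infolist()}

# Build a small archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", b"hello")
crcs = central_directory_crcs(buf.getvalue())
assert crcs["a.txt"] == zlib.crc32(b"hello") & 0xFFFFFFFF
```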
  4. The decompression in OpenG Zip does not seem to verify the checksums of the contained files. Having changed a few bytes in the middle of various zip files, I get an error from other tools (tested on Windows and Linux RT), but OpenG Zip will churn on as if nothing is wrong. In one case the change even caused the decompression to generate a file that was a hundred times bigger than the true content. Is there a quick fix for this, or would the entire library have to be updated (new DLL etc.) to get such functionality?
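As a point of comparison, this is roughly what checksum-verifying extraction looks like with Python's `zipfile` module: `testzip()` reads each member and reports the first one whose CRC does not match. This is an illustrative sketch of the missing behaviour, not a fix for the OpenG library:

```python
import io
import zipfile

def zip_is_intact(zip_bytes):
    """Return True if every member's CRC-32 matches its data."""
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            # testzip() returns the first bad member's name, or None.
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False

# Build a small archive (stored, so a byte flip hits the data directly).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("a.txt", b"hello world payload")
good = buf.getvalue()
assert zip_is_intact(good)

# Flip a byte inside the stored file data (the local header is 30 bytes
# plus the 5-byte file name, so offset 40 lands inside the payload).
bad = bytearray(good)
bad[40] ^= 0xFF
assert not zip_is_intact(bytes(bad))
```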
  5. Every now and then I hope for a bright future and open the latest NXG, and my head hurts. They've changed too much, and gained so little 🤯. I immediately get irritated by how disrespectful the whole thing is of the strengths of LabVIEW, but try to push myself to get more used to it. Then I hit a brick wall in the form of missing functionality, and my patience runs out. Back to LabVIEW 2018. Home sweet home ☺️ (sure, the roof is leaking and the style is a bit 90's, but it beats NXG).
  6. I took my chances and upgraded... No trouble running the thing anymore, but it still does not accept my volume license (which was updated in November)... Probably just another update of that and it will all be OK (hopefully).
  7. There is also already an SP1 f1 installer out (3rd of December is the release date), here: http://www.ni.com/download/labview-development-system-2018-sp1/7889/en/ So is that an update that needs to be applied after SP1, or is it included in the (hopefully) new SP1... or? Has anyone been brave (or virtual) enough to install these already? 😉
  8. We use OpenG quite a lot - mainly the zip, array, directory and configuration file functions (the latter partly due to legacy issues, but also because it makes the configurations as readable as they can be). The zip functions are the only ones available for real-time, and it is a big effort to make and maintain something like that (thanks @rolfk!). Other functions have very useful subtleties, making them more useful than the alternatives (and sometimes the subtleties are a problem instead, but then the functionality is not found elsewhere, so they are just an acceptable downside). Of course you can write equivalent functions yourself, but that is a bit silly unless you have some requirement forcing you to reinvent wheels (thankfully we are not in that kind of business).
     There is not much development done on the official version; in the world of product life cycles I guess we would call it "mature" rather than dead (obsolete), though? We had quite a good discussion about the array functions back in this thread https://lavag.org/topic/15980-openg-filter-array-revised/, where yours truly even posted a complete update, without any of it making it into the official version... Others here have made XNode and VIM versions of that library too, utilizing the new conditional auto-indexing feature now that it is more optimal than the old ways of doing it... So perhaps the official update path is dead (I remember butting my head against some logon issues the one time I tried contributing that way...), but the product is not?
     The old 80/20 rule is quite flawed when used to justify removal of code... but I do agree that some of OpenG could be removed, at least if the dependencies were changed.
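For readers unfamiliar with the OpenG Filter Array function discussed in that thread, its core semantics - remove from one array every element found in another, preserving order - can be paraphrased in a few lines of Python (a sketch of the behaviour, not the LabVIEW implementation, and the name is borrowed only for illustration):

```python
def filter_array(items, filter_items):
    """Remove every element of filter_items from items, preserving
    order - the behaviour conditional auto-indexing now makes cheap
    to express natively in LabVIEW."""
    drop = set(filter_items)  # set lookup keeps this O(n)
    return [x for x in items if x not in drop]

assert filter_array([1, 2, 3, 2, 4], [2]) == [1, 3, 4]
```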
  9. Thanks for the PNG file writing VI. I could not get the snippet to work (lost its embedded code?) so I recreated it (attached, backsaved to LV2015 format), but then I could not really figure out how you got the Get Image invoke node to work on RT... unless you were not running it as a built application (it works from the IDE only)? In fact it seems a rather big problem to generate graph images directly on an RT target (I'm using a cRIO-9063 with Linux RT); has anyone done that with a built application? Using the Picture functions (Plot Multi XY) is a dead end as well, as the Picture to Pixmap function is not supported on RT either...
     Right now I have an application that reports alarm conditions to customers by sending an alarm description and the trend at the time of the event as a log file, but some of the users want to view the trends immediately by having them as images in the e-mail. I can do that easily when Get Image works, but not from my RT targets... Has anyone done that - generated graph images on an RT target without Get Image support? Perhaps there is a CSV-to-graph-image solution somewhere?
     I could probably find a cloud-based data historian somewhere that is able to receive data logs by e-mail (?) or FTP, and then have the users use that web site instead to view the data (any good suggestions for such a service?). Continuously streaming individual data points to such a historian (which seems more the norm) is less ideal, as the data link we have available is not that good (sending small compressed e-mails every now and then is OK).
     Write PNG on LVRT.zip
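One dependency-free route on a headless target is to encode the PNG bytes yourself: the format only needs a zlib deflate of the filtered scanlines plus a CRC-32 per chunk, both of which are available without any UI support. A minimal Python sketch of an 8-bit grayscale encoder (illustrative only, not the attached VI, and the function name is made up):

```python
import struct
import zlib

def write_png(width, height, rows):
    """Encode 8-bit grayscale rows (a list of bytes objects, one per
    scanline) as a PNG using only zlib and struct - the kind of
    dependency-free encoder a headless RT target could use."""
    def chunk(ctype, data):
        # Each PNG chunk: length, type, data, CRC-32 of type+data.
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

    # IHDR: width, height, bit depth 8, color type 0 (grayscale),
    # compression 0, filter 0, interlace 0.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # Prefix every scanline with filter type 0 (no filtering).
    raw = b"".join(b"\x00" + row for row in rows)
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

# A tiny 4x2 gradient image.
png = write_png(4, 2, [bytes([0, 85, 170, 255]), bytes([255, 170, 85, 0])])
assert png.startswith(b"\x89PNG\r\n\x1a\n")
```

Plotting a trend this way still means rasterizing the graph yourself (axes, polyline), but for a simple trend image in an alarm e-mail that is a bounded amount of work.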
  10. I would create actors/clonable module(s) for the different devices, yes. That way you can dynamically create and destroy as many "loops" as you want. We do this in all our distributed monitoring solutions. Before all the frameworks we have available now existed - DQMH, Actor Framework, the Message Library etc. - we just cloned device template VIs, where each such template takes care of pretty much all the tasks related to that device (with the help of some centralized services for logging etc.). We do the same for communication interfaces: for each serial port, for example, we have a port handler that the devices use as a middle-man to share that port.
      Nowadays we use frameworks like DQMH to do much the same. Devices are now clonable instrument DQMH modules that get initialized with a device object, which in turn contains an interface object in its private data (composition) which it uses to talk to interface DQMH modules. The various modules are created/destroyed by separate DQMH modules - a "Device manager", an "Interface manager" etc.
      We use broadcast events in the interface modules, for example, to facilitate debugging. If we need to figure out what is going on on a given interface, we can activate an observer which will then stream that data as UDP traffic to a debug client on the network... (One advantage of routing all the communication through such interface handlers instead of using semaphores or other mechanisms to share access.)
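The pattern described here - clonable device modules routing all traffic through a single port handler instead of sharing the port with semaphores - can be sketched in Python with threads and queues (all names are hypothetical, and this is an analogy of the architecture, not the DQMH API):

```python
import queue
import threading

class PortHandler:
    """Owns one 'serial port'; devices route requests through it
    instead of locking the port with semaphores."""
    def __init__(self):
        self.requests = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            cmd, reply = self.requests.get()
            reply.put(f"echo:{cmd}")  # stand-in for real port I/O

    def transact(self, cmd):
        reply = queue.Queue()
        self.requests.put((cmd, reply))
        return reply.get()  # requests are serialized by the handler

def device_worker(name, port, results):
    """One clonable 'device module' instance."""
    results[name] = port.transact(f"read {name}")

# A 'device manager' spawning three clones that share one port.
port = PortHandler()
results = {}
workers = [threading.Thread(target=device_worker,
                            args=(f"dev{i}", port, results))
           for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
assert results["dev1"] == "echo:read dev1"
```

The handler thread serializes access to the port by construction, which is the advantage over semaphore sharing mentioned above: there is a single place to attach an observer.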
  11. Stephen - did you see this video I made of the behavior of the malleable VI when trying to call a *protected* VI? It looks kind of funny, doesn't it?
  12. Increased quirkiness can be a silent killer... Malleable VIs combined with LV classes is not the busiest road, I guess. I'm sure there will be more than just me getting frustrated. Happy to be the first to voice it.
  13. I meant that the VIM is a member of a class, yes. I've been informed now that this can be achieved as long as the VIM uses an accessor when operating on private data... At first that did not seem like a solution, as I expected the accessor to have to be public, and I do not want that particular private data available at all - but the accessor *can* be private, so that makes it a solution. Not a very intuitive one, but at least it is doable. (It seems a bit strange that the VIM is not allowed direct access to private data, but can still call a private method to effectively do the same thing - so why not allow it to access the private data without the accessor in the first place?)
  14. Just ran into an issue with malleable VIs I had not thought about before, and now that I have I cannot find any mention of it anywhere (?): because malleable VIs have to be marked as inlined, you cannot use them in classes if the VI is to be public... It then breaks, because inlining it means accessing private data in the calling function. In my case I wanted to use it instead of a polymorphic approach to supporting read/write operations with different data types on a file class. Or am I missing something? It all reminded me a bit of this old thread about the lack of support for polymorphic VIs in classes -
  15. The platform bundle web downloader looks more like the right one (http://www.ni.com/download/labview-development-system-2018/7406/en/). It does not state the name "Platform Bundle" until you have drilled down a bit. I see now that it includes neither NXG nor the OPC toolkit (unless the latter has been renamed and is part of the communications package?). Have they chosen to separate NXG completely, contrary to what was done in 2017?