Everything posted by Mads

  1. One thing we do to help during memory leak testing is to place exaggerated or artificial memory allocations at critical points in the code, to make it more obvious when a resource is created and destroyed (or not...) πŸ•΅οΈβ€β™€οΈ That is not an option for the native functions...πŸ™ but, depending on the code, you might be able to run an accelerated life test instead...
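A minimal sketch of that idea in Python, since LabVIEW diagrams do not paste as text (the class, the registry and the 50 MB ballast size are all illustrative assumptions, not anything from a real project):

```python
import gc

BALLAST_BYTES = 50 * 1024 * 1024  # deliberately huge: 50 MB per resource

class TracedResource:
    """Stand-in for any resource whose lifetime you want to watch."""
    def __init__(self, name):
        self.name = name
        # The ballast makes every live instance show up as a ~50 MB step
        # in Task Manager / top, so a missed destroy is hard to overlook.
        self._ballast = bytearray(BALLAST_BYTES)

    def close(self):
        self._ballast = None  # release the ballast together with the resource

registry = []  # mimics an app-level registry that keeps objects reachable
for i in range(1000):
    r = TracedResource(f"session-{i}")
    registry.append(r)
    r.close()  # comment this line out and each iteration pins ~50 MB:
               # the leak is obvious in seconds instead of after a long soak test
gc.collect()   # memory should be back at baseline if nothing leaked
```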
  2. We do this in most of our sensor monitoring applications; each sensor you add to your system, for example, is handled by such a clone. Every serial port we use is also shared between sensor clones through cloned brokers. Client connections with various other systems are another part handled by preallocated clones, dynamically spawned on incoming connections. Communication internally is mostly handled through functional globals (CVTs, circular buffers, Modbus registers etc.), queues and notifiers. Externally it is mostly through Modbus TCP, RTU, OPC UA or application-specific TCP-based protocols. These applications have run 24/7 for years without any intervention, on Windows computers (real or virtual) and sbRIO/cRIO targets. In some of them we use DQMH, and we have not run into any issues with that so far either. If a memory leak is too small to be detected within hours or a few days of testing, it is probably so small that it will not cause a crash for years either.
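For flavour, here is roughly what the spawn-per-connection part looks like as a textual sketch (Python's socketserver standing in for reentrant LabVIEW clones; the port number and echo behaviour are made up):

```python
import socketserver

class ClientHandler(socketserver.StreamRequestHandler):
    """Each accepted connection gets its own handler instance ("clone")."""
    def handle(self):
        for line in self.rfile:
            # Echo back; a real system would route to the queues, CVTs or
            # brokers shared with the sensor clones instead.
            self.wfile.write(line)

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5000), ClientHandler) as srv:
        srv.serve_forever()  # one handler per client, spawned on accept
```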
  3. Security in the age of cloud computing and IoT is a huge challenge. I do not think we should start on that discussion here. But you guys seem to assume that that means we can resort to the old ways of doing business - that NI and others should not leverage these things fully, but try to guide their users back to the old ways by making sure the new ones are intentionally crippled. The fact is that most of us are way down the rabbit hole already, ignoring the risks because the benefits are too enticing or the business or societal pressure too high. If people can make a business of delivering services at the same level of risk as the customer is already taking in other areas (in fact, in the particular case that triggered my interest in this, the security would be improved compared to the current solution - imagine that), but NI is holding them back because they think the security challenges have to be 100% solved first...well...that is a recipe for a dwindling business. The starting point of my digression was something that the supplier is in fact already partly working on. They just have not gotten around to it yet. So it is not like you are defending something that they themselves think is the holy grail of security limitations either. Arguing that the current solution is as good as it gets is never really a winning strategy.
  4. I think I have outlined the complaints quite enough already, that part was just a digression. The main point was how far off the mark (or "left handed" in this context) SystemLink is as a solution for the mentioned request in the Idea Exchange.
  5. Oh, I know how to do it. With my left hand πŸ˜‰ Not my preference. It can be done better.
  6. If sufficient security for many a use case is not possible to achieve without having to put each customer on a separate server, and the creation and licensing of those servers has to be a manual process repeated every time a new customer wants such a service...*and* none of this can be abstracted into a larger platform that makes managing it a breeze for users at, in this case, both ends...then I would argue the problem is mainly a lack of imagination.
  7. I think that is a very old fashioned way of thinking.
  8. This reminded me of this idea exchange thread: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Offline-Distribution-for-Real-Time-Application/idi-p/1415250 Sure, the fact that the offline installers idea was marked as in development due to SystemLink is probably just because someone thought it might accidentally be a solution for that too, but in general it seems to me that NI is designing more and more "left-handed scissors features" 😞 (SystemLink Cloud, by the way, is so crippled compared to the stand-alone solution that you cannot use it for trending data for different customers (access control is not available on tag groups, only on applications, for example). To get enough access control you have to create your own SystemLink server for each customer and get those online yourself (no cloud hosting of those at the click of a button from NI). And if you need to do regular analysis on the incoming data in SystemLink you cannot insert a set of VIs to do that, no - you need DIAdem or have to resort to Python (because "DIAdem is much more powerful than LabVIEW" (!))...)
  9. Not really, but that would be one way to attack it yes, if it was not for the classes.
  10. Hahaha, both of them apply very nicely πŸ˜† I had only thought of the first one.
  11. Sure. The idea exchange and the whole NXG vs current LabVIEW scenario always makes me think of one particular song by the Fugees...πŸ˜‰
  12. Ah, it's just the classes that mess things up then. One more vote then for the idea exchange request for single-file distribution of classes... (or even better, as fabions mentions in the comments: an upgraded and backwards-compatible llb format with support for subdirectories? πŸ‘).
  13. Any tricks to this? (I'm using LV2018.) If I try to target an lvlib to an llb (because I want the llb to be a single file, to be used as a plugin), I keep getting several folders outside the llb, and none in the llb. This is with the whole lvlib included and the destination set to an output directory configured as an llb. Right now the lvlib I want included in an llb is JKI Serialization.lvlib, which has classes included as well, so it is not the simplest lvlib. I have tried all kinds of variations on where to target dependencies too. Including the lvlibs in the executable seems to be the only way to get everything nicely packaged into one file, but in this case I do not want to update the executable; I want it portable with one or several plugins. Not excluding unused items and allowing the build to modify libraries seems to help avoid subVIs ending up in directories outside of the target directory in some instances, but it does not help getting lvlibs into an llb. The second tidiest working solution seems to be to target the lvlibs to a directory. That fills the directory with hundreds of files (a mess I do not like), and you might get naming collisions if you point several lvlibs to that directory because the namespacing is not kept (it could have automatically separated the subVIs into namespaced folders, for example). Using packed project libraries is perhaps one solution, but that is cumbersome, especially when dealing with lots of different targets.
  14. The application in question - would it otherwise behave smoothly with the 900 MB file if it were able to load it, or would it become so sluggish that it would not make any sense to load that much data anyhow (i.e. the technical issue might just be of technical interest...)? Why not just put a limit on the file size you will load? You can always get a handle on how much memory a file of x megabytes typically takes when loaded, and calculate your suggested limit based on that, either alone or combined with a reading of the available memory. If the file is above the limit and the processing permits it, you could offer the user to decimate the data or extract a subsection of it. You can also allow the user to proceed with the full file, but at least you have given him a warning. If a crash will erase previous work the user might opt out...and if not, it will not look as bad when it does crash.
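A rough sketch of such a size gate in Python (the 12x expansion factor is something you would measure for your own file format, and psutil is just one way to read available memory - both are assumptions here):

```python
import os

EXPANSION = 12   # assumed: MB in memory per MB on disk, measured for your format
HEADROOM = 0.5   # never plan to use more than half of the free memory

def available_bytes():
    try:
        import psutil
        return psutil.virtual_memory().available
    except ImportError:
        return 2 * 1024**3  # crude fallback: assume a 2 GB budget

def check_file(path):
    size = os.path.getsize(path)
    limit = available_bytes() * HEADROOM / EXPANSION
    if size > limit:
        print(f"{path}: {size/1e6:.0f} MB exceeds the {limit/1e6:.0f} MB limit;"
              " offer decimation, a subsection, or proceed-with-warning")
        return False
    return True
```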
  15. Passing data between executables on the same machine - which has the TCP stack loaded anyway because it has a network interface - does not normally require a loopback adapter (unless any of the requirements I listed are in effect). If this were a serial link, then sure, you would need a physical or virtual null modem installed. But local TCP traffic never passes through any adapter anyway, as described in the first sentence here (where the need for a loopback adapter arises because they want to capture that truly local traffic): https://wiki.wireshark.org/CaptureSetup/Loopback You can fire up the client-server examples in LabVIEW and run those with localhost on a machine with just a single NIC installed. Any client-server will be able to do that. That's why I was wondering what's different here.
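To illustrate with a textual stand-in for those LabVIEW examples (Python; the port number is arbitrary): client and server talk over 127.0.0.1 with no loopback adapter installed, and the traffic never touches a physical NIC:

```python
import socket
import threading

def server():
    # One-shot echo server bound to the local address only.
    with socket.create_server(("127.0.0.1", 6340)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo once, then exit

t = threading.Thread(target=server, daemon=True)
t.start()

with socket.create_connection(("127.0.0.1", 6340)) as c:
    c.sendall(b"hello over localhost")
    print(c.recv(1024))  # b'hello over localhost'
t.join()
```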
  16. Tested the 4.2 beta with success on a Linux RT x64 target today (cRIO-9030), where I have never gotten it to work previously. Compressed and decompressed folders with multiple files and subfolders, and used the inflate/deflate functions. The files that were compressed were also transferred to a PC to verify them there, and vice-versa.
  17. What is the role of the loopback adapter in this case? Do you need it to monitor the traffic through Wireshark, for example? Or is the machine without a single physical network adapter, so you have the loopback installed just to get access to networking? Or is it to handle a routing issue? Otherwise the link could be fully local, with all the shortcuts that this allows the network driver to take.
  18. Downloaded it, and so far I've tested it on LabVIEW 2019 and Linux RT ARM (cRIO-9063) with success (compression/decompression of files and folders and deflate/inflate on strings). I'll try a different target type later today. The only trouble so far is with the package format - my VIPM does not like opening ogp files from Windows (I get an access error), but I can open and install it manually from VIPM... That might be a local issue though.
  19. There is also a version 4.2 in the works with more 64-bit support - as discussed here: On a side note: I used version 4.1 now from LabVIEW 2019 on a Linux RT for ARM target and got build errors that I do not get in 2018. I have not investigated it much yet though, so it might just be a local phenomenon. From the build log:

     Deploying ZLIB Open Read File__ogtk.vi
     ZLIB Open Read File__ogtk.vi loaded with errors on the target and was closed.
     LabVIEW: (Hex 0x627) The function name for the lvzlib.*:lvzip_unzOpenCurrentFile3:C node cannot be found in the library. To correct this error, right-click the Call Library Function Node and select Configure from the shortcut menu. Then choose the correct function name.
     LabVIEW: (Hex 0x627) The function name for the lvzlib.*:lvzip_unzOpenCurrentFile2:C node cannot be found in the library. To correct this error, right-click the Call Library Function Node and select Configure from the shortcut menu. Then choose the correct function name.
  20. Great. I would be glad to test it. I mainly work on ARM-based Linux RT targets myself, and the occasional old VxWorks cFP target.
  21. I noticed on sourceforge that there is a version 4.2 of OpenG Zip. Will it be released as a package anytime soon?
  22. Daisy-chained / multi-dropped RS485 with a master-slave protocol? If you need an example of how that can be done in LabVIEW, you can look at this one, which is based on one of the industry-standard protocols for such communication - Modbus:
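The core of the master side is just polling each drop in turn; only the addressed slave answers, which is what makes the shared bus work. A hedged sketch of the RTU framing in Python (register addresses are made up, and the actual serial I/O, e.g. via pyserial, is left out):

```python
import struct

def crc16_modbus(frame: bytes) -> bytes:
    """Standard Modbus RTU CRC-16 (reflected polynomial 0xA001)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")  # Modbus sends the CRC low byte first

def read_holding_registers(slave: int, addr: int, count: int) -> bytes:
    """Build a function-03 (read holding registers) request for one slave."""
    pdu = struct.pack(">BBHH", slave, 0x03, addr, count)
    return pdu + crc16_modbus(pdu)

# The master addresses each slave on the shared RS485 pair in turn:
for slave in (1, 2, 3):
    frame = read_holding_registers(slave, addr=0, count=2)
    # port.write(frame); reply = port.read(...)   # serial I/O omitted
```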
  23. The checksum (well, CRC, to be correct) will be generated by the same software that generates the archive in this case - and is then run through tests locally to ensure it is OK. So I feel confident in trusting the content from that point onwards if the CRC is OK and the structure of the content is recognisable. It is the transfer in this case that is highly exposed to corruption... (it involves several weak protocols and complex layers which I cannot change - or at least not all of them at this stage) 😲
  24. The project is on an sbRIO running Linux RT, which is partially why I preferred using the OpenG library. (The device delivering the zip files to the sbRIO gives it an *extremely* short time to reply on whether the data is OK or not, so eliminating slow file operations is a must... With the correct checksum in the added header, I now run the CRC32 calculation continuously on the incoming data, which enables me to verify the transfer instantly πŸ™‚. A file size in the header also allows me to preallocate the file space up front - or deny the transfer at the start if there is not enough space for it anyway πŸ‘)
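In textual form the receive loop looks roughly like this (Python; the 8-byte size plus 4-byte CRC header layout is an illustrative assumption, not my actual format - zlib.crc32 is the same CRC-32 the zip format uses):

```python
import struct
import zlib

def receive(chunks, out_path):
    """chunks: an iterator of byte blocks; the first one starts with the header."""
    first = next(chunks)
    size, expected_crc = struct.unpack(">QI", first[:12])
    running = zlib.crc32(first[12:])            # CRC starts with the first payload bytes
    with open(out_path, "wb") as f:
        f.truncate(size)                        # preallocate the file space up front
        f.seek(0)
        f.write(first[12:])
        for chunk in chunks:
            running = zlib.crc32(chunk, running)  # updated as the data streams in
            f.write(chunk)
    return running == expected_crc              # verdict ready as the last byte lands
```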
  25. Thanks for the feedback. On the project I'm working on I decided to just add a header to the zip files; I am managing the transfer of the files and can just strip off that header anyway. At first I wrote code to parse out the headers of the zip archive and grab the necessary checksums, but dropped that approach when I saw that the file checksums are calculated on the uncompressed data, meaning I would need to decompress the files first just to figure out if the data was wrong. A better alternative would be the central directory checksum, I guess...but I went for an even easier solution in this case, as the added header gave me some extra benefits. It would be nice to have file verification in the OpenG Zip library someday though.
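For anyone curious, Python's zipfile module shows both sides of that trade-off: the per-file CRCs can be read cheaply from the headers, but verifying them against the data forces a full decompression pass (the archive path here is just illustrative):

```python
import zipfile

with zipfile.ZipFile("archive.zip") as zf:   # illustrative path
    for info in zf.infolist():
        print(info.filename, hex(info.CRC))  # CRC-32 values read straight from the headers
    # ...but checking them means inflating every member:
    bad = zf.testzip()                       # name of the first corrupt member, or None
    print("corrupt member:", bad)
```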