Popular Content

Showing content with the highest reputation on 08/18/2015 in all areas

  1. Warning: This shouldn't come as a surprise given the title of the thread (as well as who's posting it), but this is NOT officially supported by NI. Don't use this for anything you don't want to break!

I ran a VI that recursively opened every VI in the LabVIEW installation directory, scanned each one for Call Library nodes, and saved anything it found to a spreadsheet. And guess what it found in the palette API? Functions that open and save "resource" files, which happen to be the format that VI files (as well as some other LabVIEW files) are internally stored in. They let you manipulate the internal resources as an array of clusters.

Now, in case you didn't already know, the front panel and block diagram are stored as binary resources (known as "heaps") in this file, and these functions can't parse that format. So it's not too useful, right? Wrong. Does this dialog look familiar to anyone? That's the hidden internal settings dialog, known for some reason as Ned. To access it, add LVdebugKeys=True to your LabVIEW.ini file and restart LabVIEW if it's already running. Now hold Control+Shift and press D, N. You have to press the keys relatively quickly for it to work. (You can press D, H instead to open Heap Peek, which lets you view the internal representation of objects, as well as their exact location in memory; think about how the latter might be useful!)

Now, do you see that option I have selected? "Heap Save Format (Binary2)". Click that a few times, and you'll see one of the options is XML. Yep, it turns out LabVIEW has a hidden XML-based VI format. It even opens just fine with the heap save format set to the default. Keep in mind only the heaps are saved in this format; the rest of the file is still binary, but that part can be parsed by those library functions I found. Unfortunately, it seems the block diagram has some sort of checksum, and the VI won't load if that's wrong.
I know this because, after making a simple change to the XML (changing the block diagram's background color), it didn't load, and there was a 16-byte section in the file that had changed to seemingly random data. I suspect this is MD5, considering that seems to be the standard LabVIEW uses, but I tried calculating the MD5 hash of certain parts of the file and it didn't seem to match.

Here are two VIs that you can use to turn resource files (like VI files) into resource cluster arrays, and vice versa:

Load Resource File.vi
Save Resource File.vi

And here's a VI that will automatically set the heap save format to XML (using the private method "Application.Call Internal Command"), save a VI (from a refnum), and then put the heap save format back to what it was before. So it basically just saves a VI in the XML format:

Save VI with XML Heaps.vi
    1 point
  2. Thread split with discussions on Channel Wires moved to here.
    1 point
  3. Don't we already have that with shared variables?
    1 point
  4. Well, I don't know anything about RT Linux itself (or should I say, I don't have one), but I expect it has SSH, which pretty much all Unix-like systems have. That will use public key encryption for the SSH sessions. If it has the NI webserver, then that too would be using public key encryption, for SSL.

If they are talking about special NI technology where you give a TCP/IP primitive a certificate (like the HTTP API) and it then uses SSL or some proprietary protocol, then that is interesting. If they have integrated it transparently into network streams, that would be fantastic. If it's just that it gains a new ability because it's on Linux rather than VxWorks or Windows, then that is not very interesting to me. You'd have to be more specific, though.

For most public key encryption, signatures, and certificates, the Encryption Compendium for LabVIEW has it covered. If you want to play around with PGP encryption, then I highly recommend GPG4Win as the tool of choice. Don't give me any of your command line rubbish. Get ye over to Linux, heathen
    1 point
  5. Short answer is no: you can't easily modify dependency locations.

Longer answer is yes: you can modify environment search paths to point to the new target and delete the old target files to force re-linking, but this is really the hard way and it's going to get you into re-linking trouble/hell.

Correct answer is that you're not using VI packages correctly. The code that goes into a VI package should be able to stand on its own. You should be able to debug/test/validate all VI package source code before you build it into a package. Think of a VI package as formally released reusable source code. If you feel the need to constantly switch between built packages and source code, then you're not adequately validating/testing before building the package.

Hope this helps.
    1 point
  6. I'm on the same page as Shaun. Maybe it's a matter of habit, but for those of us who have been developing in LabVIEW for years, a wire always controls data flow and execution. This is a golden rule that suddenly doesn't apply anymore, and I find that quite disturbing.
    1 point
  7. What I actually said was that breaking dataflow is a consequence, not a weakness. The issue I have with this is that wires have always been the method for sequencing execution. It's how data flows in LabVIEW, and it's very intuitive as an analogy! We have always been taught that we should use them instead of sequence structures, and wire error clusters to make execution order unambiguous and repeatable. Well, this type of wire doesn't do any of that, and if you think it does, you are in a whole world of hurt. I can guarantee that people will expect it to, though, especially when single stepping, using highlight execution debugging, etc.

I also said I'm willing to be convinced. When we do break dataflow, it would be helpful to know where the data pops out, but why not do something with existing asynchronous methods, like events?

So, show me the killer app. Let's see how easy it makes understanding the Actor Framework. Let's upgrade some existing software to use this technology so we can compare the benefits directly. Is it just a fancy global variable with a confusing wire type, or is it like buffered TCP/IP endpoints?
    1 point
  8. TL;DR: There are further developments that will layer on top of channel wires, but the hypothesis we are testing is that channel wires are valuable for writing code faster (fewer mouse clicks than setting up the other mechanisms mentioned above) that is more comprehensible to future readers of the code and more verifiable for tools like VI Analyzer or code review buddies. Feedback on the hypothesis is welcome.

The goal with channels is to provide a visualization of the asynch communication code paths. Shaun may not see evaporating wires as a weakness, but most users do. Users generally, in our observation, do better learning code bases where the code on screen shows what's going on instead of implying what's going on. When data disappears into a queue/event/global/shared-variable/etc., we observe developers unable to keep in their heads where that connection goes. It also means it is impossible for a developer to assert all of the points in the code that have their fingers on a comm channel. So channel wires depict the communication. They can run into subVIs. Most of them can be probed (probing is not something we added to all the channel types... we're still playing with how that debugging experience should work).

The channel wires also give the compiler a chance to substitute the underlying communications mechanism on different targets. Queues are desktop specific; resource refs are FPGA specific. A channel that just expresses the desired protocol might be compiled one way on one target and another way on a different target, with no need to re-code the subVIs. We're also playing with wires that express connections across targets. No idea how that will turn out.
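The idea of a channel that expresses only the desired protocol, while the transport behind it is substituted per target, can be sketched in a text language. This is purely an analogy in Python, not LabVIEW code and not how NI implements channels; the class and method names are invented:

```python
import queue

class StreamChannel:
    """A 'stream'-protocol channel endpoint.

    This desktop-style backend wraps a FIFO queue; a hypothetical FPGA
    backend could sit behind the same write/read interface using a
    fixed-size hardware FIFO, with no change to the calling code.
    """

    def __init__(self):
        self._q = queue.Queue()

    def write(self, value):
        """Producer end of the channel."""
        self._q.put(value)

    def read(self, timeout=None):
        """Consumer end of the channel."""
        return self._q.get(timeout=timeout)

# Writer and reader share only the channel endpoint, so a reader of the
# code can see where the data goes -- the visibility argument made above.
ch = StreamChannel()
ch.write(42)
print(ch.read())  # -> 42
```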
    1 point