
smithd (Members)

  • Content Count: 688
  • Days Won: 40

Everything posted by smithd

  1. Similar issue in NXG: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Malleable-control-labels/idi-p/3782013/page/6#comments In this case it's not the OpenG functions but the built-in variant parsing functionality that has changed, but it's all in the same general "metaprogramming" category.
  2. That's quite the hidden gem 🎅 🦄
  3. (In the LabVIEW Memes thread) I'm not bothered by the size of that guy, I'm bothered that everything is serially executing.
  4. I think those variant functions should be a tiny, negligible fraction of the overall execution time. Have you profiled it?
  5. In the compiler doc here (https://www.ni.com/tutorial/11472/en/) there's mention of a yieldIfNeeded block which is inserted into loops and allows for coordination with the rest of the runtime. Even if you just have the one while loop, you still have to check in with the runtime engine about whether you should keep running or need to pause and let other code run. It's not just a simple jump. tl;dr I don't think there is any way to do this with a regular loop. A timed loop may work. The overhead should be minimal compared to any actual code you wish to run in the loop, and if that isn't true LabVIEW is probably not the tool for the job.
  6. You're also what?! Make sure your lavag email is not @ni.com and then PM Michael. He can fix it.
  7. Before or after you tell those darn kids to get off your lawn?
  8. 2019 is almost certainly done already. Since it's integrating an add-on, it could come in SP1, but I'd bet 2020 if it happens.
  9. Isn't that... an XControl? Anywho, voted up.
  10. My vague understanding is that LabVIEW EXEs are basically special zip files with an executable header tacked on, containing the instruction to load the LabVIEW runtime. The runtime then takes over and loads the code. This obviously changes with the fast file format, but the general concept is probably about the same. Point being, the EXE is still a separate OS process; it just happens to immediately load up a copy of the LabVIEW runtime DLL, which can be shared.
  11. I don't know the exact details, but it looks like you could use a system-wide mutex: https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-createmutexa (there's a minimal sketch of this after the list below). I'm assuming Linux has something similar. Or, since it's long-running, another option would be to do the whole thing where you make a "blah.lock" file somewhere, like what git does. But that's clearly iffy.
  12. That much is definite -- processes can't share anything
  13. I don't know for sure (maybe @Rolf Kalbermatter does), but this KB (https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L5JSAU&l=en-US) indicates that it's basically calling the Win32 LoadLibrary, and the help for that (https://docs.microsoft.com/en-us/windows/desktop/api/libloaderapi/nf-libloaderapi-loadlibrarya) seems to indicate there can only be one copy per process (see the LoadLibrary sketch after the list below).
  14. Coincidentally I just wrote a couple as well. I agree they seem OK for simple things, but the whole question of when and how they wake up makes them less usable. The main reason I want an XControl is that a lot of things are combined controls+indicators. For example, I have an indicator with the current value, the setpoint, and some other stuff, plus a control for the setpoint -- but you can't bind to user events, so you have to pick a proper 'direction' for the control and use a property for the other way, and you end up having to marshal data around anyway. I don't hate them as much as I once did, but I don't like them. I think the q guy had it right -- basically just grouping control references together and using standard functions to manipulate the group as one: https://sine.ni.com/nips/cds/view/p/lang/en/nid/214228
  15. Guys, this was literally October of last year; please stop bumping it. And yes, I know this is the forum equivalent of replying-all with "please unsubscribe me from this mailing list" at work. No way around it 👻
  16. For fun I thought I'd make a list of the reasons I can remember why people sometimes choose UDP over TCP.
      • Connection overhead of TCP (initiating a connection)
        • Mainly a big deal with web browsers (each page has to connect to several domains, and each takes a few (usually 2, I believe) TCP connections, which introduces latency). This is part of why HTTP/3 exists.
        • Not a big deal for a 2-hour test where you open one connection.
      • Don't need packet de-duplication or re-transmits
        • Video streaming, or there is an application-specific usage pattern that makes application-layer handling of faults the better route (HTTP/3).
        • This application needs reliable transmission, as it does not implement reliability at a higher level.
      • Want to avoid ordered transmission / head-of-line blocking
        • This really means you are implementing multiplexing at the application level rather than at the TCP level -- it's a hell of a lot easier to open 10 TCP connections, especially in applications on closed networks which are not "web scale". This is the reason HTTP/2 exists: HTTP/2 has connection multiplexing on TCP, HTTP/3 has connection multiplexing over UDP.
        • Given the reliable transmission and rate requirement, I'm assuming ordered transmission is desired.
      • Want to avoid congestion control
        • Bad actor attempting to cause network failures, or: self-limited bandwidth use (this application falls under this category), or: implement congestion control at the application layer (HTTP/3).
      • Memory/CPU usage of the TCP implementation
        • Erm... LabVIEW.
      • Network engineers want to heavily fiddle with parameters and algorithms without waiting for the OS kernel to update
        • HTTP/3 is supposed to be faster because of this -- TCP is tuned for 20 years ago, or so it's been said, and HTTP/3 can be tuned for modern networks.
        • I'm assuming this is not Michael.
      On a closed network, for this application, it's hard to see a benefit to UDP. (It occurs to me Michael never said it was a closed network, but if he put a PharLap system on the internet... 😵)
  17. On a closed network where lost packets are unlikely, TCP with Nagle turned off should have minimal overhead/latency vs UDP (there's a TCP_NODELAY sketch after the list below). Is there any way to send the ARP manually -- i.e. call into the Winsock API (PharLap) and force an ARP every 5 minutes to refresh the table?
  18. It also needs FPGA: https://www.ni.com/pdf/manuals/378049a.pdf
  19. The usual reason files are missing from that menu is that you don't have LabVIEW FPGA installed, or that you installed a new software package and didn't re-install the drivers (Vision Acquisition, RIO, etc.). I've only ever used the FlexRIO-based Camera Link card, but I think the driver is slightly different.
  20. 2013 is my go-to for stable+old. 2010 and 2011 had the new compiler; 12... I dunno what happened with 2012, but it sure seemed to be unstable. It's probably a lie that my brain made up, but it always seems like the even years are much worse than the odd years, with the exception being that 2018 seems pretty solid. 2013 has the new compiler (2010) with the performance issues resolved (2012), auto-concatenating terminals (2012), a somewhat improved web service scheme, get class instance by name, and the .NET CLR got bumped to 4.0 (I know this may be a negative to you). The only big issue I know of is the attached code comment arrow thing, which results in an instacrash at a later date if you duplicate any multi-frame structure that has one inside. You miss out on custom right-click menus (2015), VIMs and read-only DVRs (2017), and the CLI, Python node, and type disable structure (2018) -- of these, only the custom right-click menu is probably helpful.
  21. Not sure if this was a typo, but RT FIFOs are between loops on the RT side. You would use a host-to-target FPGA FIFO to pass data down to the FPGA for output. These FIFO types do not support clusters, so you'd have to develop your own scheme for passing data down. This may be as simple as sending values in chunks of N (see the chunking sketch after the list below), but you do have to watch out for overflows. If you are sending data from RT to FPGA there is a likelihood of underflow, depending on the output rate. You would probably wish to load all your file data into the FIFO first and then tell the FPGA to start reading. This all depends, of course, on how much data you have. If you have 10 minutes of data at a rate of 1 Hz, this is overkill but would still work. If you have 10 minutes of data at 100 kHz, then you likely won't have enough memory, so you'll want to preload the FIFO and then feed data as fast as you can. You'll also want to keep in mind the signals side of this. I'm not great in this area, but I would imagine that if you are attempting to replicate a continuous signal, you'll want to output as fast as possible to reduce frequency spikes as the output changes.
  22. I'm definitely not expecting anything he said to be set in stone, I was just curious about the concept, and wanted to reinforce one of the bigger use cases that comes to mind for me. That sounds like a pretty cool concept. I don't really have any more specific thoughts; it just sounds like a nice way forward. I do have one more somewhat related... I guess question, that comes to mind. I think for me the exciting part is all of the reference-lifetime-related stuff, but your first bullet point is improved memory management for large data sets. Most large data sets are large because they are for analysis, and most of the analysis functions (both for images and for waveforms) are C DLLs. My question, if you want to call it that, is: does that confound things at all? For example, I know some types passed to the CLFN require full data copies anyway (i.e. the substring, per that buffer allocations thread I linked to earlier).
  23. Fair enough on all the above -- I think the answer is that we are agreeing for the most part, I'm just being pushy 🧐. To follow up on the above and on AQ's point, my last comment was more or less attempting to reach this question, but didn't quite get there: how does this work together with the existing system? I think most of my questions and comments above were basically revolving around how these concepts apply within the world of a normal VI. To try to illustrate what I'm getting at, I'll go back to my vision application above. If I understand you, it seems like the way it is now, rebar references may exist only within rebar diagrams. But if I have my 5 vision-related loops described above, my logger will definitely be using standard file refnums, acquisition will be using IMAQ refnums, and networking will have TCP refnums, so presumably all these top-level loops will be standard, non-rebar VIs. But I want to share my data between them safely and without copies. So I'm questioning what the approach is for melding the two types of code. Is the idea to make more of the refnum types into rebar types, so there is a TCP rebar and an IMAQ rebar and a file rebar (this is what I was trying to get at with the "baggage-free VI" comment)? Or would the reverse happen? Or am I totally misunderstanding?
  24. Most of your responses made sense, so I'm just going to pick out the few bits that made less sense: The local variable lifetime is kind of my point -- its life isn't really limited, because the data space for a reentrant VI remains live for the duration of the application. Similarly, I was under the impression the shift register value is also permanent. As an example, I seem to recall a while back running a loop that allocates a giant (>>1 MB) array on a shift register but never wiring it out of the VI. If you do that, the large array remains part of the VI's data usage. From the perspective of the details you mentioned, yes, that giant array is on the heap somewhere, but all that data is still live until something overwrites it, making that data location permanent as well, even if initialized. Good explanation, but for the refnum point I keep coming back around to -- yes, this is something I absolutely would love to see improved and resolved, but it seems like a bandaid over an underlying flaw in how LabVIEW does dataflow. Now that's something I can definitely get behind. If NXG can have "GVIs", or "good, baggage-free VIs without depending on the design decisions from 3 decades ago that might not have aged as well", I might switch sooner. But in all seriousness, it seems like you've made your own type of VI with compiler rules associated with it. Is there a way to expand that more broadly, making life simple enough for the compiler (per your points above) that it can figure out the lifetime of objects? I.e., in order to reach the goal of stateless functions, but without relying on the user to manage lifetimes?
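A few sketches to go with the posts above. First, for the system-wide mutex idea in item 11: a minimal C sketch of the named-mutex approach. The mutex name and error handling are my own placeholders, not anything from the linked docs.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The "Global\" prefix makes the mutex visible across all sessions on the machine. */
    HANDLE hMutex = CreateMutexA(NULL, TRUE, "Global\\MyAppSingleInstance");
    if (hMutex == NULL) {
        fprintf(stderr, "CreateMutexA failed: %lu\n", GetLastError());
        return 1;
    }

    if (GetLastError() == ERROR_ALREADY_EXISTS) {
        /* Another process already owns the mutex -- bail out. */
        printf("Another instance is already running.\n");
        CloseHandle(hMutex);
        return 0;
    }

    /* ... long-running work goes here ... */

    ReleaseMutex(hMutex);
    CloseHandle(hMutex);
    return 0;
}
```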
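For item 13, a tiny C sketch (my own example, not from the KB article) of what "only one per process" means: loading the same DLL twice just returns the same module handle and bumps a per-process reference count.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HMODULE first  = LoadLibraryA("user32.dll");
    HMODULE second = LoadLibraryA("user32.dll");

    /* Both handles refer to the single in-process copy of the DLL. */
    printf("same module: %s\n", (first == second) ? "yes" : "no");

    /* Every successful LoadLibrary needs a matching FreeLibrary. */
    FreeLibrary(second);
    FreeLibrary(first);
    return 0;
}
```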
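For item 17, this is roughly what "Nagle turned off" looks like at the socket level -- a Winsock sketch, with the helper name being my own invention. In LabVIEW you'd presumably have to reach this option through a Call Library node, which I'm not showing here.

```c
/* Link with ws2_32.lib. Assumes WSAStartup has already been called
 * and `s` is an already-connected TCP socket. */
#include <winsock2.h>
#include <stdio.h>

int disable_nagle(SOCKET s)
{
    BOOL flag = TRUE;  /* nonzero disables Nagle's algorithm (TCP_NODELAY on) */
    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                   (const char *)&flag, sizeof(flag)) != 0) {
        fprintf(stderr, "setsockopt(TCP_NODELAY) failed: %d\n", WSAGetLastError());
        return -1;
    }
    return 0;
}
```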
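And for item 21, a purely conceptual sketch of the "chunks of N" idea. The real thing would be LabVIEW G feeding a scalar host-to-target FIFO, so the C here, and the field layout, are just illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK_ELEMENTS 3  /* scalar FIFO elements per "cluster" sample */

/* The cluster you can't send directly... */
typedef struct {
    uint32_t channel;
    uint32_t setpoint_raw;
    uint32_t timestamp_ticks;
} sample_t;

/* ...flattened into CHUNK_ELEMENTS consecutive FIFO slots. The FPGA side
 * reads elements in groups of CHUNK_ELEMENTS and reassembles the sample.
 * Returns elements written, or 0 if the buffer is full (avoids overflow). */
size_t pack_sample(const sample_t *s, uint32_t *fifo_buf,
                   size_t buf_len, size_t offset)
{
    if (offset + CHUNK_ELEMENTS > buf_len)
        return 0;  /* caller should wait and retry */

    fifo_buf[offset + 0] = s->channel;
    fifo_buf[offset + 1] = s->setpoint_raw;
    fifo_buf[offset + 2] = s->timestamp_ticks;
    return CHUNK_ELEMENTS;
}
```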