Everything posted by smithd

  1. I don't know the exact details, but it looks like you could use a system-wide mutex: https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-createmutexa I'm assuming Linux has something similar. Or, since it's long-running, another option would be to do the whole thing where you make a "blah.lock" file somewhere, like what git does. But that's clearly iffy.
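For illustration, here's roughly what the named-mutex approach looks like from a text language (untested Windows-only sketch; the mutex name is made up -- any unique name works):

```python
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateMutexW.restype = ctypes.c_void_p  # HANDLE is pointer-sized
ERROR_ALREADY_EXISTS = 183

# Ask for a named, system-wide mutex. If another process already created it,
# we still get a handle back, but the last-error code says it already existed.
handle = kernel32.CreateMutexW(None, False, "Global\\MyLongRunningTask")
if ctypes.get_last_error() == ERROR_ALREADY_EXISTS:
    print("another instance is already running; bail out")
else:
    print("we own the mutex; safe to run the long-running task")
# Unlike a blah.lock file, the OS cleans the mutex up automatically if the
# process crashes, which is exactly why the lock-file approach is iffy.
```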
  2. That much is definite -- processes can't share anything.
  3. I don't know for sure (maybe @Rolf Kalbermatter does), but this KB (https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L5JSAU&l=en-US) indicates that it's basically calling Win32 LoadLibrary, and the help for that (https://docs.microsoft.com/en-us/windows/desktop/api/libloaderapi/nf-libloaderapi-loadlibrarya) seems to indicate there can only be one copy loaded per process.
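You can see the "one per process" behavior directly (hedged sketch via ctypes; Windows only):

```python
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.LoadLibraryW.restype = ctypes.c_void_p  # HMODULE is pointer-sized

# Loading the same DLL twice doesn't map a second copy -- it bumps a
# per-process reference count and hands back the same module handle.
h1 = kernel32.LoadLibraryW("user32.dll")
h2 = kernel32.LoadLibraryW("user32.dll")
assert h1 == h2
print(hex(h1), hex(h2))

# Balance each LoadLibrary with a FreeLibrary to unwind the refcount.
kernel32.FreeLibrary(ctypes.c_void_p(h2))
kernel32.FreeLibrary(ctypes.c_void_p(h1))
```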
  4. Coincidentally I just wrote a couple as well. I agree they seem OK for simple things, but the whole question of when and how they wake up makes them less usable. The main reason I want an XControl is that a lot of things are combined controls+indicators. For example, I have an indicator with the current value, the setpoint, and some other stuff, and a control for the setpoint -- but you can't bind to user events, so you have to pick a proper 'direction' for the control and use a property for the other way, and you end up having to marshal data around anyway. I don't hate them as much as I once did, but I don't like them. I think the Q guy had it right: basically just grouping control references together and using standard functions to manipulate the group as one: https://sine.ni.com/nips/cds/view/p/lang/en/nid/214228
  5. Guys, this was literally October of last year, please stop bumping it. And yes, I know this is the forum equivalent of the reply-all "please unsubscribe me from this mailing list" at work. No way around it 👻
  6. For fun I thought I'd make a list of the reasons I can remember why people sometimes choose UDP over TCP.
     • Connection overhead of TCP (initiating a connection)
       ◦ Mainly a big deal with web browsers: each page has to connect to several domains, and each takes a few (usually 2, I believe) TCP connections, which introduces latency. This is part of why HTTP/3 exists.
       ◦ Not a big deal for a 2-hour test where you open one connection.
     • Don't need packet de-duplication or re-transmits
       ◦ Video streaming, or an application-specific usage pattern that makes application-layer handling of faults the better route (HTTP/3).
       ◦ This application needs reliable transmission, as it does not implement reliability at a higher level.
     • Want to avoid ordered transmission/head-of-line blocking
       ◦ This really means you are implementing multiplexing at the application level rather than at the TCP level -- it's a hell of a lot easier to open 10 TCP connections, especially in applications on closed networks which are not "web scale".
       ◦ This is the reason HTTP/2 exists: HTTP/2 has connection multiplexing on TCP, HTTP/3 has connection multiplexing over UDP.
       ◦ Given the reliable transmission and rate requirement, I'm assuming ordered transmission is desired.
     • Want to avoid congestion control
       ◦ Bad actor attempting to cause network failures; or self-limited bandwidth use (this application falls under that category); or implementing congestion control at the application layer (HTTP/3).
     • Memory/CPU usage of the TCP implementation
       ◦ Erm...LabVIEW.
     • Network engineers want to heavily fiddle with parameters and algorithms without waiting for the OS kernel to update
       ◦ HTTP/3 is supposed to be faster because of this -- TCP is tuned for the networks of 20 years ago, or so it's been said, while HTTP/3 can be tuned for modern networks.
       ◦ I'm assuming this is not Michael.
     On a closed network, for this application, it's hard to see a benefit to UDP. (It occurs to me Michael never said it was a closed network, but if he put a PharLap system on the internet...😵)
  7. On a closed network where lost packets are unlikely, TCP with Nagle turned off should have minimal overhead/latency vs. UDP. Is there any way to send the ARP manually -- i.e., call into the Winsock API (PharLap) and force an ARP every 5 minutes to refresh the table?
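For reference, disabling Nagle is a one-line socket option, and SendARP from iphlpapi is one way to poke the ARP table from code. A desktop sketch for illustration (the address and port are placeholders, and whether SendARP refreshes an existing entry or just returns the cached one may depend on the Windows version):

```python
import ctypes
import socket
import struct

# Nagle off: small writes go out immediately instead of being coalesced.
s = socket.create_connection(("192.168.1.50", 6341))  # placeholder address/port
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.sendall(b"small command")

# Manual ARP on Windows via iphlpapi. SendARP takes the destination IPv4
# address as a 32-bit value in network byte order.
iphlpapi = ctypes.WinDLL("iphlpapi")
dest = ctypes.c_ulong(struct.unpack("=I", socket.inet_aton("192.168.1.50"))[0])
mac = (ctypes.c_ubyte * 6)()
mac_len = ctypes.c_ulong(len(mac))
err = iphlpapi.SendARP(dest, 0, mac, ctypes.byref(mac_len))
print("SendARP result:", err, "MAC:", "-".join(f"{b:02x}" for b in mac))
```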
  8. It also needs FPGA: https://www.ni.com/pdf/manuals/378049a.pdf
  9. The usual reason files are missing from that menu is that you don't have LabVIEW FPGA installed, or that you installed a new software package and didn't re-install the drivers (Vision Acquisition, RIO, etc.). I've only ever used the FlexRIO-based Camera Link card, but I think the driver is slightly different.
  10. 2013 is my go-to for stable+old. 2010 and 2011 had the new compiler; 12...I dunno what happened with 12, but it sure seemed to be unstable. It's probably a lie that my brain made up, but it always seems like the even years are much worse than the odd years, the exception being that 2018 seems pretty solid. 2013 has the new compiler (2010) with the performance issues resolved (2012), auto-concatenating terminals (2012), a somewhat improved web service scheme, get-class-instance-by-name, and the .NET CLR bumped to 4.0 (I know this may be a negative to you). The only big issue I know of is the attached-code-comment arrow thing, which results in an instacrash at a later date if you duplicate any multi-frame structure that has one inside. You miss out on custom right-click menus (2015), VIMs and read-only DVRs (2017), and the CLI, Python node, and type-disable structure (2018) -- of these, only the custom right-click menu is probably helpful.
  11. Not sure if this was a typo, but RT FIFOs are between loops on the RT side. You would use a Host-to-Target FPGA FIFO to pass data down to the FPGA for output. These FIFO types do not support clusters, so you'd have to develop your own scheme for passing data down. This may be as simple as sending values in chunks of N (see the sketch after this post), but you do have to watch out for overflows. If you are sending data from RT to FPGA there is a likelihood of underflow, depending on the output rate. You would probably wish to load all your file data into the FIFO first and then tell the FPGA to start reading. This all depends, of course, on how much data you have. If you have 10 minutes of data at a rate of 1 Hz, this is overkill but would still work. If you have 10 minutes of data at 100 kHz, then you likely won't have enough memory, so you'll want to preload the FIFO and then feed data as fast as you can. You'll also want to keep in mind the signals side of this. I'm not great in this area, but I would imagine that if you are attempting to replicate a continuous signal, you'll want to output as fast as possible to reduce frequency spikes as the output changes.
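To make the "chunks of N" idea concrete, here's a rough sketch of one possible flattening scheme (the field names and widths are invented for illustration -- the real layout depends on your cluster):

```python
# Each record is (channel: u8, value: i16, timestamp: u32). The FIFO only
# moves scalars, so pack each record as 3 consecutive u32 elements; the FPGA
# side reassembles by counting elements modulo 3.
records = [(1, -42, 1000), (2, 17, 1001), (3, 300, 1002)]

fifo_elements = []
for channel, value, timestamp in records:
    fifo_elements.append(channel & 0xFF)
    fifo_elements.append(value & 0xFFFF)        # two's complement, 16 bits
    fifo_elements.append(timestamp & 0xFFFFFFFF)

# host_to_target_fifo.write(fifo_elements)      # stand-in for the DMA write
print(fifo_elements)
```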
  12. I'm definitely not expecting anything he said to be set in stone, I was just curious about the concept, and wanted to reinforce one of the bigger use cases that comes to mind for me. That sounds like a pretty cool concept. I don't really have any more specific thoughts; it just sounds like a nice way forward. I do have one more somewhat related...I guess, question that comes to mind. I think for me the exciting part is all of the reference-lifetime-related stuff, but your first bullet point is improved memory management for large data sets. Most large data sets are large because they are for analysis, and most of the analysis functions (both for images as well as for waveforms) are C DLLs. My question, if you want to call it that, is: does that confound things at all? For example, I know some types passed to the CLFN require full data copies anyway (i.e., the substring, per that buffer allocations thread I linked to earlier).
  13. Fair enough on all the above -- I think the answer is that we are agreeing for the most part, I'm just being pushy 🧐. To follow up on the above and on AQ's point, my last comment was more or less attempting to reach this question, but didn't quite get there: how does this work together with the existing system? I think most of my questions and comments above were basically revolving around how these concepts apply within the world of a normal VI. To try to illustrate what I'm getting at, I'll go back to my vision application above. If I understand you, it seems like the way it is now, rebar references may exist only within rebar diagrams. But if I have my 5 vision-related loops described above, my logger will definitely be using standard file refnums, acquisition will be using IMAQ refnums, and networking will have TCP refnums, so presumably all these top-level loops will be standard, non-rebar VIs. But I want to share my data between them safely and without copies. So I'm questioning what the approach is for melding the two types of code. Is the idea to make more of the refnum types into rebar types, so there is a TCP rebar and an IMAQ rebar and a file rebar (this is what I was trying to get at with the "baggage-free VI" comment)? Or would the reverse happen? Or am I totally misunderstanding?
  14. Most of your responses made sense, so I'm just going to pick out the few bits that made less: The local variable lifetime is kind of my point -- its life isn't really limited, because the data space for a reentrant VI remains live for the duration of an application. Similarly, I was under the impression the shift register value is also permanent. As an example, I seem to recall a while back running a loop that allocates a giant (>>1 MB) array on a shift register but never wires it out of the VI. If you do that, the large array remains part of the VI's data usage. From the perspective of the details you mentioned, yes, that giant array is on the heap somewhere, but all that data is still live until something overwrites it, making that data location permanent as well, even if initialized. Good explanation, but for the refnum point I keep coming back around to -- yes, this is something I absolutely would love to see improved and resolved, but it seems like a bandaid over an underlying flaw in how LabVIEW does dataflow. Now that's something I can definitely get behind. If NXG can have "GVIs" -- "good, baggage-free VIs without depending on the design decisions from 3 decades ago that might not have aged as well" -- I might switch sooner. But in all seriousness, it seems like you've made your own type of VI with compiler rules associated with it. Is there a way to expand that more broadly, making life simple enough for the compiler (per your points above) that it can figure out the lifetime of objects? I.e., reaching the goal of stateless functions, but without relying on the user to manage lifetimes?
  15. Definitely kudoed, but Mercer's responses sound like this is never gonna happen. The abort button thing in particular seems to also apply to rebar, although it's not clear if the NXG abort has the same semantics. I like your point on references, but the counter is also good to bring up -- since everything is top-level, reliable sharing is impossible between two things with different lifetimes unless one is always long-lived and defined as the owner. This is often fine, but it can bite people who are unaware.
  16. It seems interesting, and very cool that NXG lets you do stuff like this, but I do have some questions and comments. All of the below is without having installed it. I tried, but it requires NI Package Manager 19, and NIPM 19 is not released as far as I can tell (I downloaded the latest, 18.5.1, from ni.com, and when it loads it doesn't seem able to reach the server). More to the point, why does NIPM continuously have this issue where it requires a specific version to unpack the files? I honestly don't think I've ever successfully installed a package with NIPM on the first go <_<

To start, I think you need to get more technical in the overview. For example, you say things like local variables and shift registers are "places" and that "places" have lifetimes...what is the lifetime of a local variable or a shift register? The explanation of how in rebar the sets of wires correspond to a single 'place'...sounds just like a standard LabVIEW buffer allocation. My point is that you're making something for the type of person who might say "oh neat, a Rust+LabVIEW mashup!!" but then have a 1000 ft explanation of how it works 👻

This is just a nitpick, but LabVIEW already makes data copies explicit. What it doesn't make explicit is the optimizations that remove the need for copies. Also, they aren't copy dots, they are buffer allocations...

It's obvious that LabVIEW knows what to do with a VISA refnum (and file refnums, and any other refnum I can think of) when the application closes...which I think is a positive for LabVIEW, so I'm not sure how often it's actually valuable to define cleanup (especially given it sounds like the cleanup is off-diagram). That having been said, I can think of one super exciting use for this, which is hardware access via a DLL. If this tool lets you say "this pointer-sized int is a pointer, and when the program exits call this function", that's pretty awesome. And much better than the unreserve callback nonsense in the CLFN.

I'm kind of confused by this, and it probably comes from my comments at the top about not really getting what's going on under the hood. To me the problem with the existing system is that the refnums point at global things. For example, if I allocate a queue inside of A.vi and I have no terminal which outputs the value of the queue refnum, then when A.vi finishes executing the compiler SHOULD kill the queue.....except that queues are global. I'd much much much much rather have the compiler figure out the lifetime of the queue for me and kill it when the wire can't go anywhere else. It seems clear that this addon allows for that, which is awesome, but why do we need an addon? I thought the whole point of dataflow was that the compiler knew about the lifetime of data and could make intelligent decisions based on that?

I don't know enough about iterators and generators, but why do you think anonymous functions are aided by this? To be honest, it's not clear to me why anonymous functions aren't possible right now, nor closures. I think a good summary of my comments above is basically this: hey, that's pretty neat -- why isn't this already part of the language?

I wanted to finish up with an application-specific question. One of the big use cases for avoiding copies is image processing, but unfortunately IMAQ images are not only global, they are also not reference counted like named queues (i.e. the worst of both worlds). So if, for example, you are an image acquisition loop and you want to share your latest snap with all your closest friends (the logging loop, the network loop, the processing loop, the display loop), it's difficult to share the references in such a way that when all the other loops are done, the image is freed. If it were a queue, you could obtain several named references to the same queue, and the queue only goes away when all the references are dead. I ended up implementing this myself with a DVR, but it still requires that users call the 'release' function -- ideally, and what I think rebar brings to the table, is an automatic release. Does this rebar system solve that? If so, sweet. You should share it with the IMAQ group.
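(For anyone following along, the DVR scheme I described amounts to manual reference counting -- roughly the sketch below, where the buffer is freed only when the last consumer releases it. Rebar's pitch, as I understand it, is that the compiler would insert the releases for you. Names here are made up.)

```python
import threading

class SharedImage:
    """A buffer shared by several loops; freed when the last reference dies."""
    def __init__(self, pixels):
        self.pixels = pixels
        self._count = 1
        self._lock = threading.Lock()

    def retain(self):
        with self._lock:
            self._count += 1
        return self

    def release(self):
        with self._lock:
            self._count -= 1
            last = self._count == 0
        if last:
            self.pixels = None  # stand-in for disposing the IMAQ image
            print("image freed")

# Acquisition hands one retained reference to each consumer loop:
img = SharedImage(pixels=bytearray(640 * 480))
for_logger, for_display = img.retain(), img.retain()
img.release()          # acquisition is done with it
for_logger.release()   # logger is done
for_display.release()  # last one out frees the buffer -> "image freed"
```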
  17. I'm guessing Shaun would say this one: https://lvs-tools.co.uk/software/websocket-api-labview-library/ But the one on VIPM seems good enough to me. A lot of the others are only one half or the other.
  18. Yeah, websockets are easy; there are 12 libraries out there for setting them up using the TCP primitives. A 50 ms delay could be Nagle -- if it's a closed network, 50 ms is at least 25 round trips, likely more. One possibility for the long, long delay is if you're using a hostname rather than a numeric IP address. The hostname lookup is a disaster for any networking code, at least on Windows.
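If you're stuck with a hostname, one workaround is to resolve it once up front and connect by the numeric address from then on (sketch; the hostname and port are placeholders):

```python
import socket

addr = socket.gethostbyname("my-rt-target.local")  # pay the lookup cost once
conn = socket.create_connection((addr, 6341))      # later connects skip DNS
```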
  19. When you say cDAQ, do you mean a normal cDAQ or one of the Windows/RT cDAQs? Assuming Windows/RT: if you just want some basic messages, maybe set up a web service with a named queue to forward messages to. Then you don't need an actor -- just call the HTTP API directly (or spawn it as a task).
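A sketch of what "call the HTTP API directly" could look like from the client side (the endpoint URL is hypothetical -- it stands in for whatever your web service method VI exposes):

```python
import urllib.request

def send_message(msg: str) -> int:
    # POST the message body to a hypothetical web service endpoint that
    # enqueues it on the named queue on the target.
    req = urllib.request.Request(
        "http://cdaq-9136.local:8080/messages/enqueue",  # placeholder URL
        data=msg.encode("utf-8"),
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # 200 means the service accepted it

print(send_message("start acquisition"))
```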
  20. DAQmx might not work right unless you increase the thread count, depending on how many of those actors have blocking functions.
  21. It's worth mentioning that LabVIEW should do this for you -- it's called loop-invariant code motion -- so don't worry about it for performance reasons. You should put the code wherever it's more readable. In this case, I'd argue the most readable form is leaving the math where it is, but moving the source values (array size, iteration terminal, control) closer in. I'd also say use a local variable for the control -- the downside is that, in fact, using a local variable will stop the loop-invariant code motion from happening 😕

In this particular case, I'd like to mention that you can make code more readable through various means -- one is code movement, another is labelling wires or using self-labelled elements like clusters. You can also add comments, but before you go adding comments I would always, always look at the code and see if it can be changed to use a different tool to make the meaning more self-evident. To stick with the current section of code: personally, I find pretty much all iteration-based math to be incomprehensible. I sat here for like a full minute trying to figure out why on earth you were taking max(0, i-10)...but stepping back for a second, it looks like it should probably just be a case structure with two cases, "..10" and "11.." (see the sketch after this post). There is always a case structure in this code, but in your version it's hidden inside the max() function, whereas with a case structure it's explicit. Then you have the problem of "why 10?", but at least anyone reading the code is not stuck back at "why maximum?".

Along these lines, one thing that might really help you is to find yourself a buddy and try to explain the code to them (bit by bit, I mean, definitely not all at once). As logman said, the application isn't really all that complicated, and I would bet that anyone with a semi-softwarish or logical background could follow along if you walked them through it. The purpose would be to rethink code like the above. If you're sitting there explaining to another human being "well, I want the index of this listbox to be 0 up until iteration 10 and then I want it to start incrementing by 1, so what I did is I took the maximum of 0 and the iteration terminal and then built that into an array with 0 and used that as the index", you might start thinking to yourself "man, I really need to make this easier to follow".
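Here's the max()-versus-explicit-branch point in text form (Python standing in for the diagram; the cutoff of 10 comes from the original code):

```python
def index_with_max(i):
    # What the original diagram does: the branch is hidden inside max().
    return max(0, i - 10)

def index_with_branch(i):
    # The same math as an explicit "case structure": the rule "stay pinned
    # to row 0 for the first iterations, then scroll" is visible at a glance.
    if i <= 10:
        return 0
    return i - 10

# Both forms agree; the second one is just easier to explain to a buddy.
assert all(index_with_max(i) == index_with_branch(i) for i in range(100))
```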
  22. I have two answers which might help.

Answer A: Purchase TestStand and learn it. Without specific details I can't be sure, but this looks like the sort of application TestStand is built for. If someone gives you crap about the cost, I'd argue that for sequential things like you've shown here TestStand is a lot more maintainable (the bus factor for the code above is likely 1, and potentially 0 after a few months away from it). I'd also add that it has a lot of standardized reporting stuff built in...and if you're calibrating stuff, this would seem to be critical. So seriously, at least let someone from NI spend an hour giving you a demo.

Answer B: It looks to me like your first step might be to take some of those 50 front panel controls and put them into subVIs that are set to run as dialogs (in VI options, show front panel when called and close front panel when done). Those dialogs can take some of the logic that's in your main loop and organize it a bit -- "this event case is associated with this user input", and so on. Those dialogs can return small clusters with the configuration (Monitor is a cluster of model[str], serial[str], and resistance[u16]). If at the end of this you still have a ton of wires, as hooovahh says you can turn that into a clustersaurus/BFC, but it's still better organized (clustersaurus->monitor->model vs. just having a control called "monitor model" sitting out there).

Once you have some of what I'm going to call the "non-business logic" (e.g. the UI logic) out of the way, I think a state machine is a reasonable migration point to start with. I would add a caveat, which is that you should also learn about the different communication tools within LabVIEW -- in this specific case, queues and user events (a type of queue which works with the event structure), or "channel wires" (which are intended as a simpler wrapper around the queue/event concept). I say this because it looks like there are several long-running tasks without user interaction, so creating parallel loops to run different tasks seems like the next logical step. In general you would use a queue to send data (e.g. "start running standard cal") from the UI loop to the tasks loop, and use a user event to send information (e.g. "I'm done" or "an error occurred") from the task loop back to the UI loop (see the sketch after this post). drjdp has a video on some of the considerations for this, although it may be too much for you now -- he's coming at it from the other end: "I've been using this pattern for a while and here's where bad stuff happens".

Once you've mastered this version, and if you feel like it's still complex, the next step would be to dive further into frameworks (things like Delacor's QMH, or drjdp's Messenger Library, or NI's Actor Framework -- arguably in order of increasing abstract-ness). In these frameworks more stuff happens asynchronously, which can make the code more modular (the "standard cal process" is always over in this one library while the user dialog input is always over in this other library), but there's obviously a big learning curve, and frameworks tend to require you to fit into them rather than the reverse.
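The queue-out, event-back pairing looks like this in a text language (rough analogy only -- threads and queues standing in for LabVIEW loops, queues, and user events; the command strings are made up):

```python
import queue
import threading

commands = queue.Queue()   # UI loop -> task loop (plays the role of the queue)
status = queue.Queue()     # task loop -> UI loop (plays the role of the user event)

def task_loop():
    while True:
        cmd = commands.get()            # blocks, like Dequeue Element
        if cmd == "quit":
            break
        # ... run the long calibration step here ...
        status.put(f"done: {cmd}")      # report back without sharing any state

worker = threading.Thread(target=task_loop)
worker.start()
commands.put("start standard cal")
print(status.get())                     # UI loop reacts to "done: ..."
commands.put("quit")
worker.join()
```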
  23. Fair enough, it's definitely been a while. Last I remember, I had to have the whole Visual C SDK on my machine because it wanted to find some standard header. To me this is one of the bigger annoyances -- I don't want to fix up the .h file; that seems iffy to me. I suppose it generally doesn't matter, since it's just used to help the script along, but... Usually strings and arrays are OK, but yeah, big structs can be iffy due to offsets -- that is one nice thing about this LLVM-based tool: it tells you all the byte offsets, so technically one could pass in an appropriately sized buffer and restructure the data...but that's a pain as well.
  24. Just curious if anyone has ever made an attempt to improve scripting out of DLLs. I know there's the built-in import wizard, but I've literally never seen that work (has anyone here?). The reason I'm asking is that I was thinking about it and came across this tool: https://github.com/CastXML/CastXML (binaries from the build link at the bottom here: https://github.com/CastXML/CastXMLSuperbuild). Essentially it takes a C header (or any C file, I guess) and processes it with LLVM to produce an XML tree of all the functions and types. For example, it takes the header here: https://github.com/nanomsg/nng/blob/master/include/nng/nng.h and produces XML which includes stuff like this:

```xml
<Typedef id="_8" name="ptrdiff_t" type="_285" context="_1" location="f1:51" file="f1" line="51"/>
<Typedef id="_9" name="wchar_t" type="_286" context="_1" location="f1:90" file="f1" line="90"/>
<Typedef id="_14" name="intptr_t" type="_285" context="_1" location="f4:182" file="f4" line="182"/>
<Union id="_78" name="nng_sockaddr" context="_1" location="f6:151" file="f6" line="151" members="_328 _329 _330 _331 _332 _333" size="1088" align="64"/>
<Struct id="_61" name="nng_aio" context="_1" location="f6:96" file="f6" line="96" incomplete="1"/>
<Struct id="_68" name="nng_sockaddr_in6" context="_1" location="f6:124" file="f6" line="124" members="_315 _316 _317" size="160" align="16"/>
<Field id="_315" name="sa_family" type="_25" context="_68" access="public" location="f6:125" file="f6" line="125" offset="0"/>
<Field id="_316" name="sa_port" type="_25" context="_68" access="public" location="f6:126" file="f6" line="126" offset="16"/>
<Field id="_317" name="sa_addr" type="_385" context="_68" access="public" location="f6:127" file="f6" line="127" offset="32"/>
<Typedef id="_79" name="nng_sockaddr" type="_334" context="_1" location="f6:158" file="f6" line="158"/>
<Enumeration id="_80" name="nng_sockaddr_family" context="_1" location="f6:160" file="f6" line="160" size="32" align="32">
  <EnumValue name="NNG_AF_UNSPEC" init="0"/>
  <EnumValue name="NNG_AF_INPROC" init="1"/>
  <EnumValue name="NNG_AF_IPC" init="2"/>
  <EnumValue name="NNG_AF_INET" init="3"/>
  <EnumValue name="NNG_AF_INET6" init="4"/>
  <EnumValue name="NNG_AF_ZT" init="5"/>
</Enumeration>
<Function id="_84" name="nng_close" returns="_293" context="_1" location="f6:194" file="f6" line="194" mangled="?nng_close@@9" attributes="dllimport">
  <Argument type="_55" location="f6:194" file="f6" line="194"/>
</Function>
<Function id="_87" name="nng_setopt" returns="_293" context="_1" location="f6:205" file="f6" line="205" mangled="?nng_setopt@@9" attributes="dllimport">
  <Argument type="_55" location="f6:205" file="f6" line="205"/>
  <Argument type="_338" location="f6:205" file="f6" line="205"/>
  <Argument type="_339" location="f6:205" file="f6" line="205"/>
  <Argument type="_5" location="f6:205" file="f6" line="205"/>
</Function>
<FundamentalType id="_293" name="int" size="32" align="32"/>
<FundamentalType id="_294" name="unsigned char" size="8" align="8"/>
<FundamentalType id="_295" name="unsigned int" size="32" align="32"/>
<PointerType id="_353" type="_352" size="64" align="64"/>
<PointerType id="_354" type="_62" size="64" align="64"/>
<PointerType id="_355" type="_47" size="64" align="64"/>
```

It's pretty easy to parse because everything has an ID -- I've got some truly gross code for converting all the struct and enum definitions into LabVIEW, which is simple enough. I guess I'm just testing the waters to see if anyone else thinks this is useful.

My usual strategy right now is to minimize the number of C calls I make, but if I could just magically import a header and get the library made for me, it would be pretty cool. But on the other hand, there are so many challenges associated with that which I hadn't thought about going in, like handling structs by value, or what to do with numeric return values (is it an error code? is 0 good or bad?). The particular library I selected as a test case above has two features that are super annoying -- its main 'session' reference is not a pointer like in most C APIs, but a struct containing a single int, and it also has a union type..for some reason. So even making this library work, which I thought would be an easy way to try it out, has ballooned in complexity a bit. Thoughts?
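For anyone who wants to poke at the output, the id-based cross-referencing makes it easy to walk with the standard library (sketch; "nng.xml" is whatever file castxml wrote):

```python
import xml.etree.ElementTree as ET

root = ET.parse("nng.xml").getroot()
# Everything carries an id, so one pass builds the lookup table.
by_id = {el.get("id"): el for el in root.iter() if el.get("id")}

for fn in root.iter("Function"):
    arg_types = []
    for arg in fn.findall("Argument"):
        t = by_id[arg.get("type")]
        # Typedefs/pointers may need more hops to reach a concrete type;
        # for a first pass, the element's name or tag is enough.
        arg_types.append(t.get("name") or t.tag)
    ret = by_id[fn.get("returns")]
    print(f"{fn.get('name')}({', '.join(arg_types)}) -> {ret.get('name') or ret.tag}")
```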
  25. Also, this is kind of a side point, but the concept is the same. Has anyone noticed that reentrant VIs get super slow sometimes when debugging? It's not always, but I can't figure out what conditions might be causing it. It's like you're going along debugging some code, you step into a reentrant VI, and everything just stops, and it takes like 20-30 seconds for the window to materialize. I know it's not just one computer, sadly.