ShaunR · Members · Posts: 4,929 · Days Won: 301

Posts posted by ShaunR

  1. 20 hours ago, hooovahh said:

The newest version of LabVIEW I have installed is 2022 Q3.  I had 2024, but my main project had a huge slowdown in development so I rolled back.  I think I have some circular library dependencies that need to get resolved.  But still, same code, way slower.  In 2022 Q3 I opened the example here and it locked up LabVIEW for about 60 seconds. But once opened, creating a constant was also on the order of 1 or 2 seconds.  QuickDrop create controls on a node (CTRL+D) takes about 8 seconds, and undo from this operation takes about 6.  Basically any drop, wire, or delete operation is 1 to 2 seconds.  Very painful.  If you gave this to NI they'd likely say you should refactor the VI into smaller chunks instead of one big VI.  But the point is I've seen this type of behavior to a lesser extent on lots of code.

Dadreamer is talking about minutes per change, though. I still think the symptom is exacerbated by XNodes, but XNodes are probably not the fundamental problem.

  2. 16 hours ago, dadreamer said:

Looks like the Block Diagram Binary Heap (BDHb) resource took 1.21 MB and the rest is for the others. There are 120 Match Regular Expression XNodes on the diagram. If each XNode instance is approximately 10 KB, and they all get embedded into the VI, we get 10 * 120 = 1200 KB. The XNode's icon is copied many times as well (DSIM fork). So, the conclusion is that we shouldn't use XNodes for multiple parallel calls. The less, the better, right?

    Ok. The load time seems to reduce with these tokens:

Still, the editing is sluggish, though.

Editing a constant in your test VI only results in a pause of about 1.5 secs on my machine. It's the same in 2025 and 2009 (back-saved to 2009 is only 1.3 MB, FWIW). I think you may be chasing something else. There was a time when, on some machines, editing operations would result in long busy cursors on the order of 10-20 secs - especially after LabVIEW 2011.  Not necessarily XNodes either (although XNodes were the suspect). I don't think anyone ever got to the bottom of it, and I don't think NI could replicate it.

  3. 22 hours ago, dadreamer said:

Well, I see no issues when running XNodes at run time, when everything is generated and compiled. What I see is some noticeable lag at edit time. Say I have 50 or even 100 instances of one or two XNodes in one VI, each set to its own parameters. When compiled, all is fine. But when I make some minor change (create a constant, for example), LabVIEW starts to regenerate code for all the XNodes in that VI. And it can take a minute or so! Even on a top-notch computer with an NVMe SSD and loads of RAM. Anyone experienced this? I've never seen such behaviour when dealing with VIMs. Tried to reproduce this with a bunch of Match Regular Expression XNodes in a single VI. Not on such a large scale, but the issue remains. Moreover, the whole VI hierarchy opens super slowly, but this I've already noticed before when dealing with third-party XNodes.

    xn.vi 1.8 MB · 0 downloads

    1.8 MB? :blink:

  4. 22 hours ago, Rolf Kalbermatter said:

The popular serializer/deserializer problem. The serializer is never really the hard part (it can be laborious if you have to handle many data types, but it's doable); the deserializer almost always gets tricky. Every serious programmer runs into this problem at some point, and many spend countless hours writing the perfect serializer/deserializer library, only to abandon their own creation after a few attempts to shoehorn it into other applications. 🙂

We are on a hiding to nothing as we can't create objects. I abandoned those thoughts over a decade ago (maybe even two decades ago :ph34r:). It is feasible with scripting but so slow, and it won't work in built applications. The half-way house is to use scripting to generate the LabVIEW prototypes for typedefs and handlers, then load them dynamically as plugins, but that's a lot of infrastructure just to propagate what are hopefully rare changes. I did play with that (hence my question to hooovahh) but in the end went for a string-based solution to avoid it altogether.

    What happened to AQ's behemoth of a serializer? Did he ever get that working?

  5. 51 minutes ago, hooovahh said:

    No but that is a great suggestion to think about for future improvements. At the moment I could do the reverse though. Given the Request/Reply type defs, generate the JSON strings describing the prototypes.  Then replacing the Network Streams with HTTP, or TCP could mean other applications could more easily control these remote systems.

That's the easy bit. It's much easier getting stuff into other forms than it is reconstituting them, because we can't create objects and primitives. This is why JSON libraries have very straightforward encoders that can take any type, but all sorts of awkward VIs for getting values back out into LabVIEW again.

If you are going to use JSON strings, you might as well not use LabVIEW types at all ;). Add the Network Stream endpoint to it and you're good to go. Getting it back out again is where you will find the problems unless, of course, the device uses strings too (SCPI).
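The encode/decode asymmetry can be illustrated outside LabVIEW too. A minimal Python sketch (purely illustrative; the PsuRequest type and its fields are hypothetical): serializing is one generic call, while getting a typed value back out requires explicit per-type reconstruction, which is exactly the "awkward" half.

```python
import json
from dataclasses import dataclass

# A hypothetical typed message, standing in for a LabVIEW typedef cluster.
@dataclass
class PsuRequest:
    channel: int
    voltage: float

# Encoding is trivial: one generic call handles any supported structure.
encoded = json.dumps({"channel": 1, "voltage": 3.3})

# Decoding only yields generic dicts/lists; recovering the *typed* value
# needs explicit, per-type reconstruction code - the hard half of the problem.
raw = json.loads(encoded)
decoded = PsuRequest(channel=int(raw["channel"]), voltage=float(raw["voltage"]))
```

The same shape appears in LabVIEW JSON libraries: one polymorphic "flatten" input, many type-specific "unflatten" VIs.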

  6. 18 hours ago, hooovahh said:

    Yeah I tried making it as elegant as possible, but as you said there are limitations, especially with data type propagation.  I was hoping to use XNodes, or VIMs to help with this, but in practice it just made things overly difficult.  I do occasionally get variant to data conversion issues, if say the prototype of a request changes, but it didn't get updated on the remote system. But since I only work in LabVIEW, and since I control the release of all the builds, it is fairly manageable.  Sometimes to avoid this I will make a new event entirely, to not break backwards compatibility with older systems, or I may write version mutation code, but this has performance hits that I'd rather not have. Like you said, not always elegant.

    Have you played with scripting event prototypes and handlers from JSON strings?

  7. 30 minutes ago, hooovahh said:

    Yes. The transport mechanism could have been anything, and as I mentioned I probably should have gone with pure TCP but it was the quickest way to get it working.

LabVIEW Network Streams are fine. I wouldn't worry about the transport too much. Network Streams have a nice way of integrating into APIs.

What I was looking at were network events, where you don't have to synchronise prototypes throughout a system on different machines (changing an event prototype usually breaks an Event Structure). Everything for my events is a string, so this is not a problem, but it makes parsing tricky. This was one of the reasons I wanted "Named Events" (so events could be named like queues can), but they botched the downstream polymorphism. I was wondering whether you had found an elegant way of serialising events (a bit like protocol buffers, but without the compiler).

  8. 31 minutes ago, hooovahh said:

    Variants and type defs.  There is a type def for the request, and a type def for the reply, along with the 3 VIs for performing the request, converting the request, and sending the response.  All generated with scripting along with the case to handle it.  Because all User Events are the same data type, they can be registered in an array at once, like a publisher/subscriber model.  Very useful for debugging since a single location can register for all events and you can see what the traffic is.  There is a state in a state machine for receiving each request for work to be done, and in there is the scripted VI for handling the conversion from variant back to the type def, and then type def back to variant for the reply.

    When you perform a remote request, instead of sending the user event to the Power Supply actor, it gets sent to the Network Streams actor.  This will get the User Event, then send the data as a network stream, along with some other house keeping data to the remote system.  The remote system has its Network Stream actor running and will get it, then it will pull out the data, and send the User Event, to its own Power Supply actor.  That actor will do work, then send a user event back as the reply.  The remote Network Stream actor gets this, then sends the data back to the host using a Network Stream. Now my local Network Stream actor gets it, and generates the user event as the reply.  The reason for the complicated nature, is it makes using it very simple.

I see. So you have created a cloning mechanism for User Events - reconstructing pre-defined User Events locally from the data sent over the stream?
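If I've read the mechanism right, the remote side only needs the event name and its data in order to fire its own pre-defined local event. A sketch of that wire format in Python (the event name, fields, and length-prefix framing are assumptions for illustration; LabVIEW would carry this over a Network Stream rather than raw frames):

```python
import json
import struct

def pack_event(name, payload):
    """Serialise an event as a length-prefixed JSON frame for the stream."""
    body = json.dumps({"event": name, "data": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body   # 4-byte big-endian length, then body

def unpack_event(frame):
    """Reverse of pack_event: recover (name, payload) on the remote side,
    which then fires its own locally created User Event of that name."""
    (length,) = struct.unpack(">I", frame[:4])
    body = json.loads(frame[4 : 4 + length].decode("utf-8"))
    return body["event"], body["data"]

# Hypothetical request, mirroring the "Set PSU Output" example in the thread.
frame = pack_event("Set PSU Output", {"instance": "PSU1", "voltage": 12.0})
name, data = unpack_event(frame)
```

The point is that the event primitive itself never crosses the wire - only its name and data do, and each machine keeps its own local event registration.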

  9. 15 hours ago, hooovahh said:

    I have a network stream actor (not NI's actor but whatever) that sits and handles the back and forth. When you want work to happen like "Set PSU Output" you can state which instance you are asking it to (because the actor is reentrant), and which network location you want. The same VI is called, and can send the user event to the local instance, or will send a user event to the Network Stream loop, which will send the request for a user event to be ran on the remote system, and then reverse it to send the reply back if there is one.  I like the flexibility of having the "Set PSU Output" being the same VI I call if I am running locally, or sending the request to be done remotely.  So when I talked about running a sequence, it is the same VIs called, just having its destination settings set appropriately.

Yes. It is the "send user event" part that I'm having difficulty with. User events are always local and require a prototype, so how do you serialise a user event to send it over a stream?

  10. On 5/4/2025 at 1:28 PM, viSci said:

Nicely described approach, Shaun.  I am doing something similar with a lab automation project that involves a vacuum chamber and multi-zone temperature control.  Elected to use the Messenger framework, which supports 'spinning up' instances and TCP capability for remote devices like cRIO.  The Messenger architecture also supports many types of asynchronous messaging beyond simple synchronous command/response.  Your idea of decoupling EC sequencing logic is good, moving it out of the EC subsystem to allow synchronization with other subsystems during ramp/soak profiles.   Hey, remember TestStand Lite?  This is where such a scripting component would really shine and be a great benefit to the community.

You can script with text files with #1. Just adding a feature to delay N ms gets you most of the way to a full scripting language (conditionals and for-loops are what's left, but they are a harder proposition). However, for a more general solution I use services in #2, with queues for inputs and events for outputs. There was a discussion ages ago about whether queues were needed at all, since LabVIEW events have a hidden queue; but you can't push to the front of an event queue (STOP message ;)) and, at the time, you couldn't clear them, so I opted for proper queues. So, architecturally, I use "many to one" for inputs and "one to many" for outputs.
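The "text file plus a delay directive" idea can be sketched in a few lines. This is an illustrative Python interpreter, not the author's implementation; the DELAY keyword, comment syntax, and the injected send function are all assumptions:

```python
import time

def run_script(lines, send):
    """Interpret a minimal test script: SCPI lines pass through verbatim to
    'send' (a hypothetical instrument-write function), and a 'DELAY <ms>'
    directive pauses the sequence. Blank lines and '#' comments are skipped."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                                  # comment or blank line
        if line.upper().startswith("DELAY "):
            time.sleep(int(line.split()[1]) / 1000.0)  # delay N milliseconds
        else:
            send(line)                                 # raw SCPI command

# Collect "sent" commands in a list instead of talking to real hardware.
sent = []
run_script(
    ["# ramp then measure", "SOUR:VOLT 5.0", "DELAY 10", "MEAS:VOLT:DC?"],
    sent.append,
)
```

Adding conditionals and loops is where this stops being a dispatch table and starts being a language, which is the "harder proposition" above.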

  11. OK. This is how I design systems like this.

1. TCP. Each subsystem has a TCP interface. This allows spinning up "instances" and then connecting to them, even across networks. You can rationalise the TCP API, and I usually use a thin LabVIEW wrapper around SCPI (most of your devices will support SCPI). You can also use it to turn non-SCPI-compliant devices into SCPI ones (your Environmental Chamber - EC - is probably one that doesn't support SCPI). If you do this right, you can even script entire tests from a text file just using SCPI commands.

2. Services. Each subsystem offers services. These are for when #1 isn't enough and we need state. A good example is your Environmental Chamber. It is likely you will have temperature profiles that control signals and measurements need to be synchronized with. While services may be devices (a DVM, for example), services can also be synchronization logic that sequences multiple devices. If you put that logic in your EC code, it will tie that code to that specific sequence, so don't do that. Instead, use services to glue other devices (like the DVM and EC) into synchronization processes. Along with #1, this will form the basis of recipes that can incorporate complex state and sequencing. In this way you will compartmentalize your system into reusable modules. The first thing you should do is make a "Logging" service; then when your devices error, they can report it for your diagnostics. The second should be a service that "views" the log in real time so you can see what's going on, while it's going on. (This is why we have logging levels.)

3. Global State. If you have #1 and #2, this can be anything. It can be a text file with a list of SCPI commands (#1). It can be a service you wrote in #2, TestStand, a web page, or a bash/batch script. This is where you use your recipes to fulfill a test requirement.

4. You will need to think carefully about how the subsystems talk to each other. For example, using SCPI, a MEAS:VOLT:DC? command returns almost instantly (command-response pattern). However, for the EC you may want to wait until a particular temperature has been reached before issuing MEAS:VOLT:DC?. The problem here is that SCPI is command-response, but the behavior required is event driven. One could make the TCP interface (#1) of the EC accept a MEAS:TEMP? that doesn't return until the target setpoint has been reached. However, this won't work reliably, and it requires internal state and checks for the edge cases. There are a number of ways to address these things using #1, #2 or #3, and that is why you are getting the big bucks.

You will notice I haven't mentioned specific technologies here (apart from TCP). For #1 you shouldn't need anything other than reentrant VIs and VI Server. For #2 you can use your favourite foot-shooting method, but notice that you are not limited to one type and can choose an architecture for the specific task (they don't all have to be QMH, for example). For #3 you don't even have to use LabVIEW.
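The thin command-response wrapper from #1 can be sketched like this. A Python illustration only (LabVIEW would use TCP Open/Read/Write primitives); the toy server, its port, and the 1.234 reply are all made up for the example:

```python
import socket
import threading
import time

def scpi_server(port_holder):
    """A toy 'instrument': answers one SCPI query per line, command-response
    style. The reply value is invented for illustration."""
    srv = socket.create_server(("127.0.0.1", 0))      # OS picks a free port
    port_holder.append(srv.getsockname()[1])
    conn, _ = srv.accept()
    with srv, conn:
        f = conn.makefile("rwb")
        for line in f:                                 # one command per line
            f.write(b"1.234\n" if line.strip() == b"MEAS:VOLT:DC?" else b"ERR\n")
            f.flush()

def query(sock, command):
    """Thin wrapper: send one SCPI command, read one reply line."""
    sock.sendall(command.encode() + b"\n")
    return sock.makefile("rb").readline().decode().strip()

port_holder = []
threading.Thread(target=scpi_server, args=(port_holder,), daemon=True).start()
while not port_holder:
    time.sleep(0.01)                                   # wait for the port
with socket.create_connection(("127.0.0.1", port_holder[0])) as s:
    reading = query(s, "MEAS:VOLT:DC?")
```

Because every subsystem speaks the same line-oriented protocol, the text-file scripting in #3 falls out almost for free.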

  12. 3 hours ago, Rolf Kalbermatter said:

    Well, except that LLBs only have one hierarchy level and no possibility to make VIs private for external users.

You actually have two levels with LLBs (semantically), and that's more than enough for me.

I also don't agree with all the private stuff. Protected should be the minimum scope, so people can override if they want to but still be able to modify everything without hacking the base. This only really makes sense in non-LabVIEW languages, though, so protected might as well be private in LabVIEW. And don't get me started on all that guff about "Friends" :lol:

But in terms of containers, external users can call what they like as far as I'm concerned; just know that only the published API is supported. So making stuff private is a non-issue to me. If I'm feeling generous and want them to call stuff, then I make it a top-level VI in the LLB. Everything else is support code for the top-level VIs, so call it at your peril.

I still maintain PPLs are just LLBs wearing straitjackets and foot-shooting holsters. :P

  13. On 4/11/2025 at 11:19 AM, Rolf Kalbermatter said:

    Libraries are the pre-request to creating packed libraries

    *prerequisite.

Packed libraries are another feature that doesn't really solve any problem you couldn't already solve with LLBs. At best it is a whole new library type to solve a minor source code control problem.

There isn't anything really special about lvlibs. They are basically containers with a couple of bells and whistles. If you look at one in a text editor, you will see it's basically a list of VIs in XML format.

The main reason I use them is that they can be protected with the NI 3rd Party Activation Toolkit. A secondary reason is organisation and partitioning. It would be frowned upon by many, but I use lvlibs for the ability to add virtual directories and self-populating directories for organisation, and contain the actual VIs in LLBs for ease of distribution.

I don't see them as a poor man's class; rather, an LLB with project-like features.
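The "list of VIs in XML" claim is easy to check from outside LabVIEW. A hedged sketch: the exact .lvlib schema varies by LabVIEW version, so the Item/Name/Type/URL shape below is an illustrative cut-down example, not a guaranteed format:

```python
import xml.etree.ElementTree as ET

# Cut-down stand-in for the XML an .lvlib file contains. The element and
# attribute names here are assumptions based on the general shape of the
# format; inspect a real .lvlib in a text editor to confirm for your version.
lvlib_text = """<?xml version="1.0" encoding="UTF-8"?>
<Library LVVersion="20008000">
  <Item Name="Public" Type="Folder">
    <Item Name="Open.vi" Type="VI" URL="/Public/Open.vi"/>
    <Item Name="Close.vi" Type="VI" URL="/Public/Close.vi"/>
  </Item>
</Library>"""

root = ET.fromstring(lvlib_text)
# Walk every Item element and keep the ones marked as VIs.
vis = [item.get("Name") for item in root.iter("Item") if item.get("Type") == "VI"]
```

Which is to say: the library file is metadata about its members, not a container of them - unlike an LLB, which physically holds the VIs.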

  15. I find it interesting that spam really wasn't an issue until the forums were upgraded. :frusty:

I run old software on my website and I've noticed a reduction in spam attempts as time goes on and the scanners update to newer exploits. I was getting spam through the on-site contact form because the bots were bypassing the CAPTCHA. That is prevented with a simple .htaccess RewriteCond, but when I recently upgraded the website OS I turned it off. It took a month for a scanner to find it and start spamming, and it only sent every hour. A few years ago it took something like 30 minutes and they sent every 5 minutes.

By far the most effective methods to stop spam are:

1. Checking for reverse DNS resolution.
2. Checking against known blacklists (like spamhaus.org).
3. Offering honeypot files or directories (spider traps).

#2 tends to have a low false-positive rate, but [IMHO] even one false positive is unacceptable for mail - although it might be acceptable for a forum.

I also wrote a spam plugin for my CMS which basically did the first two things above, plus a couple of others, like checking against a list of common disposable email addresses, checking user agents and so on. The way those things work is that they tend to ban the IP address for an amount of time, but I didn't want to ban someone who was trying to send a message through the site, perhaps because an email had bounced; so I turned it off.
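For the curious, the blacklist check (#2) is just a DNS lookup against a reversed-octet hostname, and the disposable-address check is a simple domain comparison. A sketch of both (the actual DNS query is omitted; the tiny domain list is illustrative, not a real blocklist):

```python
def dnsbl_name(ip, zone="zen.spamhaus.org"):
    """Build the reversed-octet hostname used for a DNS blacklist lookup.
    Resolving this name (not done here) tells you the verdict: a listed IP
    resolves, typically to a 127.0.0.x code; an unlisted one returns NXDOMAIN."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_disposable(email, blocked=frozenset({"mailinator.com", "guerrillamail.com"})):
    """Check the address domain against a (tiny, illustrative) disposable-domain list."""
    return email.rsplit("@", 1)[-1].lower() in blocked

# 203.0.113.7 is a documentation address (TEST-NET-3), used purely as an example.
name = dnsbl_name("203.0.113.7")
```

The reverse-DNS check (#1) works the same way in spirit: a legitimate mail server's IP should resolve back to a hostname, and most spam bots' IPs don't.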

  16. 58 minutes ago, Rolf Kalbermatter said:

    The namespace of the subVIs themselves changes, so I'm afraid that separation of the compiled code alone is not enough. The linking information in the diagram heap has to be modified to record the new name which now includes the library namespace. As long as it is only a recompilation of the subVI, separation of compiled code in the caller indeed should not require a recompilation of the caller, but name changes of subVIs still do.

    In fact the automatic relinking from non-namespaced VIs to library namespaced VIs is a feature of the development environment but there is no similar automatic reversal of this change.

If that's the case, then is this just a one-time, project-wide recompilation? Once relinked with the new namespaces, there shouldn't be any more relinking and recompiling required (except for VIs that have changed or have compiled code as part of the VI).

  17. On 4/23/2024 at 8:52 AM, Rolf Kalbermatter said:

    The change to "librarize" all OpenG functions is a real change in terms of requiring any and every caller to need to be recompiled. This can't be avoided so I'm afraid you will either have to bite the sour apple and make a massive commit or keep using the last OpenG version that was not moved to libraries (which in the long run is of course not a solution).

Wasn't separate compiled code meant to resolve this issue? Or is it just that some of the VIs were created before this option existed and so still keep compiled code?

  18. 10 hours ago, BTS_detroGuy said:

ShaunR, to my surprise the PURE LABVIEW solution is working great for reading and logging the video stream. Unfortunately, I am facing two issues.

    1) The video stream it saves to disk has no audio data. However, I can play the muted video in VLC player after the file is finalized.

    2) I have not been able to parse the 'Data' to extract the video and audio data for live display.

    I found another solution that uses FFMPEG, but it seems to corrupt the first few frames. I will keep trying.

    I liked the RTSP solution better (compared to the VLC DLL based solution) because it provides the TCP connection ID. I am hoping to use it for sending the PTZ commands once I figure out the right ones.

    Data in RTSP Stream.png

Indeed. It's not a full solution, as it doesn't support multiple streams, audio or other encoding types. But if you want the audio, you need to add a decoding case ("parse" is the nomenclature used here) for the audio packets in the read-payload case structure.
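For reference, when RTSP carries media over the same TCP connection, the packets arrive as interleaved frames (RFC 2326, section 10.12): a '$' byte, a one-byte channel ID, a two-byte big-endian length, then the payload. Routing on the channel ID is where an audio case would hook in. A Python sketch of that demux (the channel-to-media mapping shown is an example; the real mapping comes from the SETUP exchange):

```python
import struct

def parse_interleaved(buf):
    """Split an RTSP-over-TCP byte stream into interleaved frames
    (RFC 2326 s10.12): '$', 1-byte channel ID, 2-byte big-endian length,
    payload. The channel ID (assigned at SETUP) says whether the payload
    is video RTP, audio RTP, or RTCP - route on it to add audio decoding."""
    frames, i = [], 0
    while i + 4 <= len(buf) and buf[i : i + 1] == b"$":
        channel, length = struct.unpack(">BH", buf[i + 1 : i + 4])
        frames.append((channel, buf[i + 4 : i + 4 + length]))
        i += 4 + length
    return frames

# Two synthetic frames: channel 0 (e.g. video RTP) and channel 2 (e.g. audio RTP).
stream = b"$\x00\x00\x03abc" + b"$\x02\x00\x02hi"
frames = parse_interleaved(stream)
```

A real reader would also have to handle frames split across TCP reads and RTSP response lines mixed into the same stream, which is the bulk of the work in the read-payload case structure.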

