Posts posted by ShaunR

  1. On 5/15/2024 at 5:13 PM, infinitenothing said:

    The problem is that some day the customer will buy a new Apple laptop and that new laptop will not support LV2023. We need maintenance releases of LabVIEW RTE to keep it all working.

    It doesn't have to. Just back-save (:D) to a version that supports the OS, then compile under that version. If you are thinking about forward compatibility, then all languages gave up looking for that unicorn many years ago.

    54 minutes ago, Rolf Kalbermatter said:

    It seems they are going to make normal ordering of perpetual licenses possible again.

    That is excellent news.

  2. Anyone interested in QUIC? I have a working client (OpenSSL doesn't support the server side at the moment, but will later this year).

    I feel I need to clarify that when I say I have a working client, that's without HTTP3 (just the QUIC transport). That means the "Example SSL Data Client" and "Example SSL HTTP Client TCP" can use QUIC but things like "Example SSL HTTP Client GET" cannot (for now).

    If you are interested, then now's the time to put in your use cases, must-haves and nice-to-haves.

    I'm particularly interested in the use cases, as QUIC has the concept of multiplexed streams, so it may benefit from a complete API (similar to how the SSH API has channels) rather than just being another transport choice alongside TLS and DTLS, which is how it operates now.
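    For anyone unfamiliar with the multiplexing, here's a minimal C sketch against OpenSSL 3.2+'s QUIC client API, assuming a QUIC connection `conn` has already been established (address lookup, socket setup and handshake omitted). It only shows how independent streams hang off one connection, which is the part a richer API would need to expose:

    #include <openssl/ssl.h>

    /* Sketch only: `conn` is assumed to be a connected QUIC SSL object created
     * from SSL_CTX_new(OSSL_QUIC_client_method()) after SSL_connect() succeeded. */
    static int send_on_two_streams(SSL *conn)
    {
        size_t written = 0;

        /* Don't auto-create a default stream; manage streams explicitly. */
        if (!SSL_set_default_stream_mode(conn, SSL_DEFAULT_STREAM_MODE_NONE))
            return 0;

        /* Two independent bidirectional streams multiplexed on one connection. */
        SSL *s1 = SSL_new_stream(conn, 0);
        SSL *s2 = SSL_new_stream(conn, 0);
        if (s1 == NULL || s2 == NULL) {
            SSL_free(s1);                /* SSL_free(NULL) is a no-op */
            SSL_free(s2);
            return 0;
        }

        /* Each stream reads and writes like its own connection. */
        SSL_write_ex(s1, "stream one", 10, &written);
        SSL_write_ex(s2, "stream two", 10, &written);

        /* Signal end-of-stream (FIN) on each stream independently. */
        SSL_stream_conclude(s1, 0);
        SSL_stream_conclude(s2, 0);

        SSL_free(s1);
        SSL_free(s2);
        return 1;
    }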

  3. 2 hours ago, Bruniii said:

    Thanks for trying!

    How "easy" is it to use GPUs in LabVIEW for this type of operation? I remember reading that I'm supposed to write the code in C++, where the CUDA API is used, compile the DLL and then use the LabVIEW toolkit to call the DLL. Unfortunately, I have zero knowledge of basically all of these steps.

    There is a GPU Toolkit if you want to try it. No need to write wrapper DLLs. It's in VIPM, so you can just install it and try it. Don't bother with the download button on the website; it's just a launch link for VIPM and you'd have to log in.

    One afterthought: when benchmarking, you must never leave outputs unwired (like the 2D arrays in your benchmark). LabVIEW will know that the data isn't used anywhere and will optimise, giving different results than you would see in production. So you should at least do something like this:

    [image attachment]

    On my machine your original executed in ~10ms. With the above it was ~30ms.
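    The same trap exists in text languages: an optimiser is free to drop work whose result is never consumed. A rough C sketch of the idea (the process() function is just a hypothetical stand-in, not your VI):

    #include <stdio.h>
    #include <stddef.h>
    #include <time.h>

    /* Hypothetical workload standing in for the benchmarked code. */
    static double process(const double *in, size_t n, double *out)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++) {
            out[i] = in[i] * 0.5;
            acc += out[i];
        }
        return acc;
    }

    int main(void)
    {
        enum { N = 1000000, REPS = 100 };
        static double in[N], out[N];
        volatile double sink = 0.0;      /* keeps the result "wired" */

        for (size_t i = 0; i < N; i++)
            in[i] = (double)i;

        clock_t t0 = clock();
        for (int r = 0; r < REPS; r++)
            sink += process(in, N, out); /* consumed, so it can't be optimised away */
        clock_t t1 = clock();

        printf("%.3f ms (checksum %f)\n",
               1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC, (double)sink);
        return 0;
    }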

  4. Nope. I can't beat it. To get better performance, I expect you would probably have to use different hardware (FPGA or GPU).

    Self auto-incrementing arrays in LabVIEW are extremely efficient, and I've come across this situation before where Decimate is usually about four times slower. Your particular requirement involves deleting a subsection at the beginning and end of each acquisition, so most optimisations aren't available.

    Just be aware that you have a fixed number of channels and hope the HW guys don't add more or make a cheaper version with only 2.
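    For what it's worth, the operation itself is simple in any language; here's a hypothetical C sketch of the trim-and-split (the hard-coded channel count is exactly the caveat above, and the interleaved layout is an assumption, not your VI):

    #include <stddef.h>

    enum { NCHAN = 4 };   /* fixed channel count: cheap, until the HW changes */

    /* Drop `head` and `tail` frames from an interleaved acquisition and split
     * the remainder into per-channel arrays. Returns samples per channel. */
    static size_t trim_and_split(const double *acq, size_t frames,
                                 size_t head, size_t tail,
                                 double *chan[NCHAN])
    {
        size_t kept = frames - head - tail;      /* frames surviving the trim */

        for (size_t f = 0; f < kept; f++)        /* one pass, no reallocation */
            for (size_t c = 0; c < NCHAN; c++)
                chan[c][f] = acq[(head + f) * NCHAN + c];

        return kept;
    }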

  5. 3 hours ago, Rolf Kalbermatter said:

    I have so far not found a way that makes those paths automatically fixup at package creation, since the path seems to need to be absolute

    This is why it takes me hours to make an ECL build that works, and it's one of the many reasons only Windows is now supported (it can load from the same directory). Even then, I have to fight VIPM to get things in the right places.

    I refuse to do #2.

  6. Well, there are a few problems, but the reason it's not showing the next image is that you increment the counter during acquisition until it reaches 300, and when you start the next acquisition it indexes into the path array at 300 (which yields Not A Path).

    You've confused your 30-second timer with the file index. Make a proper timer with a time function and increment the index on stop.
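    A rough sketch of that separation, in C for brevity (the 300-element path array and the 30 s period come from your description; the names and everything else are hypothetical):

    #include <stdbool.h>
    #include <time.h>

    enum { NUM_PATHS = 300 };

    typedef struct {
        time_t started;      /* when the current acquisition began */
        int    file_index;   /* which path/image to use next       */
    } acq_state;

    /* The timer is an elapsed-time check, not a loop counter. */
    static bool acquisition_expired(const acq_state *s)
    {
        return difftime(time(NULL), s->started) >= 30.0;
    }

    /* The index only advances when an acquisition stops, and wraps so it
     * can never index past the end of the path array. */
    static void on_stop(acq_state *s)
    {
        s->file_index = (s->file_index + 1) % NUM_PATHS;
        s->started = time(NULL);
    }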

  7. There are surprisingly few situations where a parallel for loop (pLoop) is the solution. There are so many caveats and foot-shooting opportunities, even if you ignore the ones imposed by the IDE dialogue.

    For example, for the pLoop to operate as you would imagine, the VIs that are called must be reentrant (and preferably preallocated clones). If a called VI is not reentrant, then the loop will wait until it finishes before calling it in another parallel iteration (that's just how dataflow works). If a called VI is set to shared reentrant clones, then you get the same problems as with any shared clone that has memory, but multiplied by the number of loop iterations.

    Another issue that you often come across with shared, connectionless resources (say, raw sockets) is that you cannot guarantee the order in which the underlying resource is accessed. If it is, say, a byte stream, then you would have to add extra information in order to reconstruct the stream (see the sketch below), which may or may not be possible. I have actual experience of this, and it is why the ECL Ping functionality cannot be called in a pLoop.

    [image attachment]
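    A hypothetical C/OpenMP sketch of that work-around: each parallel iteration hits the shared resource in whatever order it happens to run, so every result carries its iteration number and the buffer is re-ordered afterwards (whether that reconstruction is even possible depends on the resource, which is the Ping problem):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int seq; int payload; } tagged_reply;

    static int by_seq(const void *a, const void *b)
    {
        return ((const tagged_reply *)a)->seq - ((const tagged_reply *)b)->seq;
    }

    int main(void)
    {
        enum { N = 8 };
        tagged_reply shared[N];
        int next = 0;                     /* next free slot in the shared buffer */

        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            int slot;
            #pragma omp critical          /* shared resource: access order is arbitrary */
            slot = next++;
            shared[slot].seq = i;         /* the tag that makes re-ordering possible */
            shared[slot].payload = i * i; /* stand-in for a reply                    */
        }

        qsort(shared, N, sizeof shared[0], by_seq);   /* reconstruct original order */
        for (int i = 0; i < N; i++)
            printf("seq %d -> %d\n", shared[i].seq, shared[i].payload);
        return 0;
    }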

     

  8. 1 hour ago, Rolf Kalbermatter said:

    You really may be stressing LabVIEW's window management capabilities beyond reasonable borders with so many subpanels present at the same time

    Agreed. Even with just 100, UI updates become a bit flaky.

    [image attachment]

  9. 8 hours ago, fabric said:

    I've enjoyed this hack for many years, but noticed it is not working in LV2023Q1

    See here for problem description: https://forums.ni.com/t5/LabVIEW/Darren-s-Weekly-Nugget-05-10-2010/m-p/4360614/highlight/true#M1280554

    My guess is that they have fixed a bug. Concatenating with a null char is a huge security smell.

    Multiple file types are [supposed to be] defined by using the semicolon separator. Does "*csv;*txt" not work?

     

  10. On 3/9/2024 at 10:36 PM, Mahbod Morshedi said:

    I was just wondering about the "array with tag" or "Var attribute" since there is not much information about the latter. I am unsure if the data is more susceptible to corruption, gets lost, or if there are other complications I do not know about. 

    The variant attribute cannot have duplicates; if you set one that already exists, it overwrites the previous value. With the "array with tag" you can have duplicates.
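    To put it another way, here's a hypothetical C sketch of the two behaviours (nothing to do with LabVIEW's actual implementation): the attribute-style store looks the name up and overwrites, while the "array with tag" just appends, so the same tag can appear twice:

    #include <stdio.h>
    #include <string.h>

    typedef struct { char tag[16]; int value; } entry;

    /* Variant-attribute style: one value per tag, a second set overwrites. */
    static void set_unique(entry *tbl, int *n, const char *tag, int value)
    {
        for (int i = 0; i < *n; i++)
            if (strcmp(tbl[i].tag, tag) == 0) { tbl[i].value = value; return; }
        snprintf(tbl[*n].tag, sizeof tbl[*n].tag, "%s", tag);
        tbl[*n].value = value;
        (*n)++;
    }

    /* Array-with-tag style: always append, duplicates allowed. */
    static void append_tagged(entry *arr, int *n, const char *tag, int value)
    {
        snprintf(arr[*n].tag, sizeof arr[*n].tag, "%s", tag);
        arr[*n].value = value;
        (*n)++;
    }

    int main(void)
    {
        entry a[8], b[8];
        int na = 0, nb = 0;

        set_unique(a, &na, "gain", 1);
        set_unique(a, &na, "gain", 2);      /* overwrites: na stays 1   */
        append_tagged(b, &nb, "gain", 1);
        append_tagged(b, &nb, "gain", 2);   /* duplicates: nb becomes 2 */

        printf("attribute store: %d entry, array with tag: %d entries\n", na, nb);
        return 0;
    }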

  11. 7 hours ago, David Boyd said:

    are you thinking that "shared" (vs. "preallocated")

    Yes, that's exactly what I am thinking (but poorly communicated). This is a commonly known gotcha for VIs with shift-register memory (not the First Call? primitive per se).

    It will probably only bite you when you have multiple instances and it's being used with different CRC types that have different integer lengths.

    Here's an example:

    SubVI set to Preallocated (what we expect: 11 more than the initialise value):

    [image attachment]

    SubVI set to Shared:

    [image attachment]

    If you run continuously, you will see other values as different threads become available at different times.

     

    [attachment: rentrant clones.zip]
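    The C equivalent of the gotcha, as a hypothetical sketch (not the library's code): state kept in a static behaves like a shift register inside a shared clone, so every caller sees and mutates the same copy, whereas state owned by the caller behaves like a preallocated clone:

    #include <stdio.h>

    /* "Shared clone": one copy of the state for every caller. */
    static int shared_count(void)
    {
        static int count = 0;
        return ++count;
    }

    /* "Preallocated clone": each instance owns its state explicitly. */
    typedef struct { int count; } counter;

    static int instance_count(counter *c)
    {
        return ++c->count;
    }

    int main(void)
    {
        counter a = {0}, b = {0};

        printf("shared A: %d\n", shared_count());     /* 1                        */
        printf("shared B: %d\n", shared_count());     /* 2 -- B sees A's history  */
        printf("shared A: %d\n", shared_count());     /* 3                        */

        printf("inst   A: %d\n", instance_count(&a)); /* 1                        */
        printf("inst   B: %d\n", instance_count(&b)); /* 1 -- B has its own state */
        printf("inst   A: %d\n", instance_count(&a)); /* 2                        */
        return 0;
    }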

  12. Nice.

    This is probably one of the only times (1 in 1,000,000) I would suggest an xnode may be preferable, specifically for the lookup tables, which should always be more performant.

    With an xnode, one can pre-calculate the tables at design time based on the type and save the cost of generating the table at first run. This also means the calculation will take constant time whether it's the first call or a later one. xnodes are tricky and complicated beasts, so I could understand not wanting to go down this hairy rabbit hole littered with rusty nails.

    Speaking of the table generation: I noticed you have the VI set to reentrant clones. I think this will be a problem when you run multiple instances, as the shift registers may not contain the values you expect per instance.

    [image attachment]
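    For illustration, a hypothetical C sketch of that trade-off (standard reflected CRC-32, polynomial 0xEDB88320; not this library's code): the table below is built lazily on the first call, which is exactly the first-call cost and the shared mutable state that an xnode emitting the 256 constants at design time would remove:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    static uint32_t crc_table[256];
    static bool     table_ready = false;   /* shared, lazily-initialised state */

    static void build_table(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : (c >> 1);
            crc_table[i] = c;
        }
        table_ready = true;
    }

    uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len)
    {
        if (!table_ready)                  /* first-call cost (and not thread-safe) */
            build_table();

        crc = ~crc;
        for (size_t i = 0; i < len; i++)
            crc = crc_table[(crc ^ buf[i]) & 0xFFu] ^ (crc >> 8);
        return ~crc;
    }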

  13. 19 hours ago, hooovahh said:

    Oh but I did look up the 2023 roadmap, and unicode support was changed to "Future Development".

    My takeaway from the roadmap is that they are concentrating on interoperability and relegating products to backend services. This was the direction NI were taking before Emerson, but I expect it has taken on a new impetus since the takeover. I expect the awful gRPC to be leading the charge so they can plug the NI products into their own products. Unicode support isn't a consideration for that, since the UI will be elsewhere.
