ShaunR

Members
  • Posts: 4,862
  • Joined
  • Days Won: 296

ShaunR last won the day on July 9

ShaunR had the most liked content!

Profile Information
  • Gender: Male

LabVIEW Information
  • Version: LabVIEW 2009
  • Since: 1994

Recent Profile Visitors

35,548 profile views

ShaunR's Achievements

  1. I've no experience with Redis. For monitoring/supervision/alerting you don't need local storage at all; MQTT might be worth looking at for that. Sorry I couldn't be more helpful, but it looks like a cool project.
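
     For a flavour of the MQTT suggestion, here is a minimal publish sketch in Python using the third-party paho-mqtt package (1.x API); the broker address, topic and payload are placeholders, not from the post:

        # Minimal MQTT publish sketch (paho-mqtt 1.x API).
        # Broker, topic and payload are hypothetical placeholders.
        import paho.mqtt.client as mqtt

        client = mqtt.Client()
        client.connect("broker.example.com", 1883)         # default MQTT port
        client.publish("plant/line1/temperature", "23.5")  # fire-and-forget telemetry
        client.disconnect()
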
  2. My tuppence is that anything is better than Network Shared Variables. My first piece of advice is to choose one or two technologies, not a Jenga tower of many. What's the use-case here? Is the data real-time (you know what I mean) over the network? Are the devices dependent on data from another device, or is the data accumulated for exploitation later? If it's the latter, I would go with SQLite locally (for integrity and reliability) and periodic merges to MySQL [MariaDB] remotely (for exploitation). Both of those technologies have well-established APIs in almost all languages.
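
     A rough sketch of that local-SQLite-with-periodic-merge pattern in Python; the table layout, the "synced" flag and the mysql-connector-python dependency are illustrative assumptions, not from the post:

        # Log locally to SQLite; periodically push unsynced rows to MySQL/MariaDB.
        # Table and column names are assumptions for illustration.
        import sqlite3
        import mysql.connector  # third-party: mysql-connector-python

        def merge_to_remote(local_path, remote_cfg):
            local = sqlite3.connect(local_path)
            remote = mysql.connector.connect(**remote_cfg)
            cur = remote.cursor()
            rows = local.execute(
                "SELECT id, ts, value FROM readings WHERE synced = 0").fetchall()
            cur.executemany(
                "INSERT INTO readings (id, ts, value) VALUES (%s, %s, %s)", rows)
            remote.commit()
            # Only mark rows as synced after the remote commit succeeds.
            local.executemany(
                "UPDATE readings SET synced = 1 WHERE id = ?",
                [(r[0],) for r in rows])
            local.commit()
            local.close()
            remote.close()
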
  3. I can't even find threads I posted in last week. It's a skill, to be sure.
  4. Because I can immediately test the correctness of any of those VIs by pressing run and viewing the indicators.

     Nope. That's just a generalisation based on your specific workflow. If you have a bug, you may not know which VI it resides in, and bugs can be introduced retrospectively because of changes in scope. Bugs can arise at any time when changes are made, not just in the VI you changed. If you are not using black-box testing and are relying on unit tests, your software definitely has bugs in it and your customers will find them before you do.

     Again, that's just your specific workflow. The idea of having "debugging sessions" is anathema to me. I make a change, run it; make a change, run it. That's my workflow - inline testing while coding, along with unit testing at the end of the cycle. The goal is to have zero failures in unit testing or, to put it another way, unit and black-box testing is the customer! Unlike most of the text languages, we have just-in-time compilation - use it. I can quantitatively do that, without running unit tests, by using a front panel.

     What's your metric for being happy that a VI works well without a front panel? Passing a unit test? A VI may be in the codebase for 30 years, but when debugging I may need to use suspend (see below) to trace another bug through that and many other VIs.

     There is a setting on subVIs that allows the front panel to suspend the execution of a VI, modify the data, and run it over and over again while the rest of the system carries on. This is an invaluable feature which requires a front panel.

     This is simply not true and is a fundamental misunderstanding of how EXEs are compiled. Can't wait for the complaint about the LabVIEW garbage collector. We'll agree to disagree.
  5. ECL Version 4.5.0 was released with CoAP support. Next on the hit-list will be QUIC (when it's properly supported by OpenSSL). I'm also thinking of opening up the Socket VIs as a public API so people can create their own custom protocols.
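
     For a flavour of what CoAP usage looks like in a text language, here is a minimal GET with Python's third-party aiocoap library against the public coap.me test server; the library choice and URI are illustrative and not part of ECL:

        # Minimal CoAP GET sketch using the third-party aiocoap library.
        # coap.me is a public CoAP test server; the URI is illustrative.
        import asyncio
        from aiocoap import Context, Message, GET

        async def main():
            ctx = await Context.create_client_context()
            request = Message(code=GET, uri="coap://coap.me/test")
            response = await ctx.request(request).response
            print(response.code, response.payload)

        asyncio.run(main())
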
  6. You want a table per channel. If you want to decimate, then use something like (rowid %% %d == 0), where %d is the decimation number of points (the %% is the LabVIEW format-string escape for SQL's modulo operator, %). The graph display will do bilinear averaging if the data is more than the number of pixels it can show, so don't bother with that unless you want a specific type of post-analysis. Be aware of aliasing, though. The above is a section of code from the following example; you are basically doing a variation of it. It selects a range and displays "Decimation" number of points from that range, but the range selection is obtained by zooming on the graph rather than with a slider. The query update rate is approximately 100 ms and it doesn't change much even for a few million data points in the DB. It was a few versions ago, but I did do some benchmarking of SQLite to give you some idea of what affects performance.
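
     As a concrete illustration of that decimation query, a minimal sketch with Python's standard sqlite3 module; the table and column names are assumptions:

        # Keep every Nth row by rowid - roughly 1/N of the data, evenly spaced.
        # Table/column names ("samples", "t", "value") are assumptions.
        import sqlite3

        def decimated(db_path, every_nth):
            conn = sqlite3.connect(db_path)
            rows = conn.execute(
                "SELECT t, value FROM samples WHERE (rowid % ?) = 0",
                (every_nth,)).fetchall()
            conn.close()
            return rows
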
  7. I'm not sure what you mean by "not fully transparent", but if you want to get rid of the border you can do something like this (see the attached sr_test1.zip).
  8. Not so much mind-boggling - I used to support VxWorks. It's not just Apple OSes, though; Linux is similar. The same mind-set pervades both ecosystems. I used to support Mac, Linux and Windows for my binary-based products because LabVIEW made it look easy. Mac was the first to go (nobody used it anyway), then Linux went (they are still in denial about distribution).
  9. It doesn't have to. Just back-save to a version that supports the OS, then compile under that version. If you are thinking about forward compatibility, then all languages gave up looking for that unicorn many years ago. That is excellent news.
  10. This wouldn't be much of an issue, since you could always use an older version of LabVIEW to compile for that customer. However, LabVIEW is now subscription-based, so hopefully you have kept copies of your old LabVIEW installation downloads.
  11. Anyone interested in QUIC? I have a working client (OpenSSL doesn't support server-side at the moment but will later this year). I feel I need to clarify that when I say I have a working client, that's without HTTP/3 (just the QUIC transport). That means the "Example SSL Data Client" and "Example SSL HTTP Client TCP" can use QUIC, but things like "Example SSL HTTP Client GET" cannot (for now). If you are interested, then now's the time to put in your use-cases, must-haves and nice-to-haves. I'm particularly interested in the use-cases, as QUIC has the concept of multiplexed streams, so it may benefit from a complete API (similar to how the SSH API has channels) rather than just being a choice between TLS/DTLS/QUIC as it now operates.
  12. There is a GPU Toolkit if you want to try it. No need to write wrapper DLLs. It's in VIPM, so you can just install it and try. Don't bother with the download button on the website - it's just a launch link for VIPM and you'd have to log in. One afterthought: when benchmarking, you must never leave outputs unwired (like the 2D arrays in your benchmark). LabVIEW will know that the data isn't used anywhere and optimise, giving different results than you'd see in production. So you should at least wire the outputs to something that uses them. On my machine your original executed in ~10 ms; with the outputs used it was ~30 ms.
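
     The same dead-code pitfall exists outside LabVIEW: optimising compilers and JITs will happily skip work whose result is never used. A sketch of the defensive pattern in Python (names and workload are illustrative; CPython itself won't eliminate the work, but the habit keeps benchmarks honest anywhere):

        # Always consume benchmark outputs - the text-language equivalent of
        # wiring the 2D array outputs to something that uses them.
        import time

        def work(n):
            return [i * i for i in range(n)]  # stand-in workload

        start = time.perf_counter()
        result = work(1_000_000)
        elapsed = time.perf_counter() - start

        # Folding the output into a printed checksum means an optimiser
        # cannot discard the computation as dead code.
        print(f"{elapsed:.3f} s, checksum={sum(result)}")
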
  13. Nope. I can't beat it. To get better performance I expect you would have to use different hardware (FPGA or GPU). Auto-indexing arrays in LabVIEW are extremely efficient, and I've come across this situation before - Decimate is usually about 4 times slower. Your particular requirement involves deleting a subsection at the beginning and end of each acquisition, so most optimisations aren't available. Just be aware that you have a fixed number of channels, and hope the HW guys don't add more or make a cheaper version with only two.