
Eleven Ways to Update an Indicator from within a subVI: Their Relative Performance and Quite Far-Reaching Consequences...


Recommended Posts

Yes, I know, you wanted to do this some day too. So I did it for you. Just run (and then stop) the Main VI from the attached set (saved in LabVIEW 2016 32-bit). I suspect (and hope 😛) the numbers will be quite a surprise and even a shock, especially for the fans of one particular method and some very aggressively promoted frameworks which use that method.

Fastest update of indicator from subVI.zip

1 hour ago, jacobson said:

Would you be able to summarize your findings in a table?

I'm also interested in what frameworks/methods you were thinking of.

Well, the very point of putting this code together is for people to see for themselves both the numbers and that I didn't "cheat" on any of the tested methods to make some look better than others. Their significantly different performance numbers are real.

In summary, the main findings are:

1. (Widely known) Passing a control reference to a subVI, or to a VI running in parallel with the VI where the control (indicator) is located, and using that reference to update the control/indicator, is the worst thing you can do.

2. (Not so well known) Using "user events" with event structures for the same purpose, and moreover for implementing asynchronous communication in messaging architectures between "modules", "actors", parallel loops, etc. instead of regular queues, notifiers, and now channels, is (softly speaking) not a very good idea either, from a performance point of view.

3. Even the fastest "channels" are no better than the "good old" notifiers and queues. At least channels offer more features and let you do things that were impossible before, which can hardly be said about user events.

DQMH is one framework relying heavily on such use of event structures that one can recall right away. There are others, I bet.

Edited by styrum

In my test (after setting everything to disable debugging), user events are second only to queues and notifiers. They're 20% faster than the channel (high-speed stream). Notifiers and user events are very close in performance: sometimes user events win, other times notifiers, depending on which other methods are running concurrently.

This is in a VM with only 2 cores.

Edited by shoneill

LV2018 64-bit, 6 true cores, Linux, debugging disabled in Main and all subVIs. A typical run of a few seconds: Notifier 5.18M, Lossy queue 4.27M, Unbound queue 3.81M, High-speed channel 3.2M; whereas Lossy channel 26k, Tag channel 19k, Stream channel 18k, Indicator reference 10k, and the remaining three around 8.6k. User event is second to last at 8655, better only than channel messenger (8612). Quite different from @shoneill's results. What could the affecting factors be?

ETA: Inconclusive. I've been getting quite different results across a few runs: running Main.vi at time-critical priority, then at other priorities, closing and reopening LabVIEW, then again at normal. Could some system effect (not load) be affecting all this?

Edited by ensegre

There also appears to be A LOT of interaction between things.  If I disable the Value property node, the user event gets into the same realm as the notifier and queue.  Removing the channels makes them even more similar.  Maybe the Queue Status in every single VI is doing something?


OK, OK, my bad, I left debugging enabled. This is not a commercial app; it's just some "food for thought" and something to play with that you would otherwise not have had the time to put together yourself. But now you can find some (much less) time to play with it. The goal was to get people to play with it and find flaws, "unfairness", etc., so I am glad that happened so quickly. Here is a version that has debugging disabled, plus a "sequential" flavor of the whole thing.

Some disclaimers right away:

1. Yes, I deliberately do not count the iterations completed by sender loops but not yet completed by the corresponding receivers by the time the senders are done, because I want to count only fully completed "transactions" when evaluating "performance".

2. No, I don't say that lossy and non-lossy methods are comparable "1 to 1" in a general context. But for the declared purpose of this particular experiment (updating an indicator in a UI VI, where the user is interested only in the latest value), it is fair to put them "together".
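The counting rule in disclaimer 1 can be sketched in a text language. This is a hypothetical Python analogue (not the attached LabVIEW code): a sender pushes values through a queue to a receiver thread, and we snapshot the receiver's count at the moment the sender finishes, so items still in flight are not counted as completed transactions.

```python
# Hypothetical Python analogue of the benchmark's counting rule:
# only transactions fully handled by the receiver when the sender
# finishes are counted; in-flight items are ignored.
import queue
import threading

def run_trial(n_messages):
    q = queue.Queue()
    received = 0

    def receiver():
        nonlocal received
        while True:
            item = q.get()
            if item is None:        # sentinel: sender is done
                break
            received += 1           # a fully completed "transaction"

    t = threading.Thread(target=receiver)
    t.start()
    for i in range(n_messages):
        q.put(i)
    completed_when_sender_done = received   # snapshot: in-flight items not counted
    q.put(None)                             # tell the receiver to stop
    t.join()
    return completed_when_sender_done

print(run_trial(10_000))   # some value between 0 and 10000, depending on scheduling
```

The snapshot is racy by design: that is exactly the point being measured, how much of the sender's output the receiver has actually consumed.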

Fastest update of indicator from subVI (3).zip

Edited by styrum
9 hours ago, crossrulz said:

There also appears to be A LOT of interaction between things.  If I disable the Value property node, the user event gets into the same realm as the notifier and queue.  Removing the channels makes them even more similar.  Maybe the Queue Status in every single VI is doing something?

Yes, checking the watchdog queue takes a lot of time (if not most of it) in each sender iteration. Please check out the "sequential version" in the latest post, or try putting a Disable structure around those Preview Queue Element nodes in the sender VIs (you will then have to stop everything with the stop button on the Main VI's toolbar).


Of the messaging options, plain queues should be the fastest message-oriented choice because everything else (I think) is built on top of them. Notifiers, channels, etc. use queues. All these options are pretty quick.

Last I remember reading, user event queues rely on some of the same underlying bits, but the UI-oriented and pub-sub nature of the code makes them a bit slower. Still generally fast enough for anything anyone ever uses them for.

User events are completely nondeterministic (technically, every option in this category is nondeterministic, but user events behave like garbage if you stress them).

Property nodes obviously require locking the UI thread and suck, but control indices are sufficiently fast for almost anything.

If you eliminate the update-oriented part (just sharing scalar values, in other words), then the fastest would be a global (requires a lock for write, but I think the read side doesn't), then a DVR in read-only mode, then a standard DVR or FGV implementation (both need to lock a mutex). These are all faster than the message-oriented variations, obviously, but that's because you get less than half the functionality.
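The global/DVR idea above is essentially a latest-value register: writers serialize on a lock, readers just grab whatever is current, and old values are overwritten rather than queued. A rough Python sketch of that behavior (illustrative only; the class name and API are made up, and CPython's reference semantics stand in for LabVIEW's global/DVR internals):

```python
# Lossy single-value store: readers always see the newest write,
# intermediate values are simply overwritten, never queued.
import threading

class LatestValue:
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()

    def write(self, value):
        with self._lock:          # writers take a lock
            self._value = value

    def read(self):
        return self._value        # readers just read the current value

tag = LatestValue(0)
tag.write(1)
tag.write(2)                      # overwrites; the 1 is lost
print(tag.read())                 # → 2
```

Contrast this with a queue, where both writes would be retained and delivered in order; the register trades losslessness for minimal overhead.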

 


A note on "Messaging":

A messaging system is one where different bits of information ("messages") come through and are handled one by one at the same point.  Because different messages are mixed together, the communication cannot be lossy.  Even if you have no messages representing must-not-be-missed commands, you would still have the problem of missing the latest update of one indicator because of an update to a different indicator.

This is different from using multiple separate (possibly lossy) communication methods to update independent indicators (often, this kind of system uses the terminology of "tags").  Because they are separate, the latest value of each "tag" is never lost.  But this is not "messaging".

Whether to use "messaging" or "tags" is another conversation.
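The distinction above can be shown with a toy Python sketch (hypothetical, not LabVIEW): one shared lossy channel carrying mixed messages can drop indicator A's latest value when B updates, while one lossy slot per tag never loses the latest value of either.

```python
# One shared lossy channel (depth 1) carrying mixed (name, value) messages.
from collections import deque

mixed = deque(maxlen=1)
mixed.append(("A", 42))   # latest value of indicator A
mixed.append(("B", 7))    # B's update displaces A's update entirely
print(list(mixed))        # → [('B', 7)]  -- A's 42 is gone

# One lossy slot per independent tag: each keeps its own latest value.
tags = {"A": deque(maxlen=1), "B": deque(maxlen=1)}
tags["A"].append(42)
tags["B"].append(7)
print(tags["A"][0], tags["B"][0])   # → 42 7
```

This is why a mixed-message channel must be lossless to be correct, whereas per-tag channels can safely be lossy.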

3 hours ago, drjdpowell said:

A note on "Messaging":

A Messaging system is one where different bits of information "messages" come through and are handled one-by-one at the same point.

......

Because they are separate, the latest value of each "tag" is never lost.  But this is not "messaging".  

I disagree with this characterisation. Losslessness is not a criterion for being a message-based system, nor is central handling. Put simply, messages are just descriptors with or without data, and a "tag" is just a message without data.

4 hours ago, drjdpowell said:

Substitute more intuitive terms then.  My point is that the OP is criticising "aggressively promoted frameworks", DQMH I guess, as if the point of such things is raw indicator update speed.

I can't substitute terms because you defined a specific architecture as the generic "message" - with which I disagree.

OP: Just for fun and giggles, turn on "Synchronous Display" on the indicators. :ph34r:


The terminology NI was using a few years ago was tag/stream/message, and I believe the descriptions are as follows:

Messages have a non-fixed rate and mixed data types, and should generally be as lossless as possible with minimal latency, with the note that anything requiring confirmation of action must be request-response, which takes you down the rabbit hole of idempotence (i.e., if the response is lost and the request is reissued, does the customer get two orders?). Messages are usually 1:N (N workers for 1 producer) or N:1 (N clients for 1 process), but this isn't required.

Streams have a fixed rate and fixed type; generally latency isn't an issue but throughput is. Losslessness is a must. Usually 1:1.

Tags are completely lossy, with a focus on minimal latency. The add-on to this would be an "update", which is basically a message-oriented tag (e.g., a notifier or single-element queue). Usually 1:N, with 1 writer and N readers.

All three overlap, but I think these three concepts make sense.
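The idempotence rabbit hole mentioned above (lost response, reissued request) has a standard escape hatch: deduplicate by request ID and replay the cached response, so the side effect happens exactly once. A minimal Python sketch, with all names hypothetical:

```python
# Idempotent request-response: if a response is lost and the client
# reissues the same request, the order is NOT placed a second time;
# the cached response is replayed instead.
processed = {}      # request_id -> cached response
orders_placed = 0

def handle_order(request_id, item):
    global orders_placed
    if request_id in processed:          # duplicate: replay cached response
        return processed[request_id]
    orders_placed += 1                   # side effect happens exactly once
    response = f"order #{orders_placed}: {item}"
    processed[request_id] = response
    return response

r1 = handle_order("req-1", "widget")
r2 = handle_order("req-1", "widget")    # reissued after a lost response
print(r1 == r2, orders_placed)          # → True 1
```

In a real system the `processed` cache would need expiry and persistence, but the core idea is just keying side effects on a client-supplied request ID.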

7 hours ago, smithd said:

the terminology NI was using a few years ago was tag/stream/message and i believe the descriptions are as follows:

......

SOAP also defines a specific "message" pattern. These are all product-specific definitions.

