styrum Posted June 17, 2019
Yes, I know, you wanted to do this someday too. So I did it for you. Just run (and then stop) the Main VI from the attached set (saved in LabVIEW 2016 32-bit). I suspect (and hope) the numbers will be quite a surprise, even a shock, especially to fans of one particular method and of some very aggressively promoted frameworks that use it.
Fastest update of indicator from subVI.zip
jacobson Posted June 18, 2019
Would you be able to summarize your findings in a table? I'm also interested in what frameworks/methods you were thinking of.
styrum Posted June 18, 2019
1 hour ago, jacobson said: "Would you be able to summarize your findings in a table? I'm also interested in what frameworks/methods you were thinking of."
Well, the whole point of putting this code together is for people to see the numbers for themselves and to verify that I didn't "cheat" on any of the tested methods to make some look better than others. Their significantly different performance numbers are real. In summary, the main findings are:
1. (Widely known) Passing a control reference to a subVI, or to a VI running in parallel with the VI where the control/indicator lives, and then using that reference to update the control/indicator is the worst thing you can do.
2. (Not so well known) Using user events in event structures for the same purpose, and moreover for implementing asynchronous communication in messaging architectures between "modules", "actors", parallel loops, etc. instead of regular queues, notifiers, and now channels, is (to put it mildly) not a very good idea either from a performance point of view.
3. Even the fastest channels are no better than the good old notifiers and queues. At least channels offer more features and let you do things that were impossible before, which can hardly be said about user events.
DQMH is one framework that relies heavily on this use of event structures that comes to mind right away. There are others, I bet.
shoneill Posted June 18, 2019
Can you save it in an older version of LabVIEW?
shoneill Posted June 18, 2019
Never mind, I opened it in a 2019 VM. Sigh. Please don't try to benchmark with debugging enabled.
shoneill Posted June 18, 2019
In my test (after setting all VIs to disable debugging), User Events are second only to Queues and Notifiers. They're 20% faster than Channel (High Speed Stream). Notifiers and User Events are very close in performance: sometimes User Events win, other times Notifiers, depending on which other methods are running concurrently. This is in a VM with only 2 cores.
ensegre Posted June 18, 2019
LV2018 64-bit, 6 real cores, Linux, debugging disabled in Main and all subVIs. A typical run of a few seconds: Notifier 5.18M, Lossy queue 4.27M, Unbound queue 3.81M, High speed channel 3.2M, whereas Lossy channel 26k, Tag channel 19k, Stream channel 18k, Indicator reference 10k, and the remaining three around 8.6k. User event is second to last at 8655, better only than channel messenger (8612). Quite different from @shoneill's results. What factors could be affecting this?
ETA: Inconclusive. I've been getting quite different results running Main.vi a few times at time-critical priority, then at other priorities, then after closing/reopening LabVIEW, then again at normal priority. Could some system effect (not load) be influencing all this?
crossrulz Posted June 18, 2019
There also appears to be A LOT of interaction between things. If I disable the Value property node, the User Event gets into the same realm as the Notifier and Queue. Removing the channels makes them even more similar. Maybe the Queue Status check in every single VI is doing something?
styrum Posted June 18, 2019
OK, OK, my bad, I left debugging enabled. This is not a commercial app. It is just some "food for thought" and something to play with, which you would otherwise not have had the time to put together yourself. But now you can find some (much less) time to play with it. The goal was to get people to play with it, find flaws, "unfairness", etc., so I am glad that happened so quickly. Here is a version that has debugging disabled, plus a "sequential" flavor of the whole thing. Some disclaimers right away:
1. Yes, I deliberately do not count iterations completed by the sender loops but not yet completed by the corresponding receivers by the time the senders are done, because I want to count only fully completed "transactions" when evaluating performance.
2. No, I don't claim that lossy and non-lossy methods are comparable one-to-one in a general context. But for the declared purpose of this particular experiment, updating an indicator in a UI VI where the user cares only about the latest value, it is fair to put them together.
Fastest update of indicator from subVI (3).zip
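For readers who want that counting convention in text form, here is a rough Python analogy (hypothetical names; it is not the attached LabVIEW code and, because of the GIL, not a meaningful benchmark in itself). The point it illustrates: the reported "speed" counts only iterations the receiver actually completed, so sender writes that were overwritten before being read do not count.

```python
# Hypothetical sketch of the counting convention, not the attached VIs.
import threading
import time

class LossyTag:
    """Single-slot, overwrite-on-write channel: only the latest value survives."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._seq = 0               # incremented by the writer on every update

    def write(self, value):
        with self._lock:
            self._value = value
            self._seq += 1

    def read(self):
        with self._lock:
            return self._seq, self._value

def benchmark(duration_s=1.0):
    tag = LossyTag()
    stop = threading.Event()
    sent = 0
    received = 0

    def sender():
        nonlocal sent
        while not stop.is_set():
            tag.write(sent)
            sent += 1

    def receiver():
        nonlocal received
        last_seq = 0
        while not stop.is_set():
            seq, _ = tag.read()
            if seq != last_seq:     # a genuinely new value arrived
                received += 1       # only fully completed "transactions" count
                last_seq = seq

    threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
    for t in threads:
        t.start()
    time.sleep(duration_s)
    stop.set()
    for t in threads:
        t.join()
    # "Speed" = receiver-side completions per second, not sender iterations.
    print(f"sent={sent}, received={received}, speed={received / duration_s:.0f}/s")

if __name__ == "__main__":
    benchmark()
```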
styrum Posted June 18, 2019
Set Control By Index added. Wow, it is fast! The "speed" calculation in the "sequential" version has been corrected to count the iterations of the receiver loops.
Fastest update of indicator from subVI (5).zip
styrum Posted June 18, 2019
9 hours ago, crossrulz said: "There also appears to be A LOT of interaction between things. If I disable the Value property node, the User Event gets into the same realm as the Notifier and Queue. Removing the channels makes them even more similar. Maybe the Queue Status check in every single VI is doing something?"
Yes, checking the watchdog queue takes a lot of (if not most of) the time in each sender iteration. Please check out the "sequential" version in the later posts, or try putting a Diagram Disable structure around those Preview Queue Element nodes in the sender VIs (you will then have to stop everything with the Abort button on the Main VI's toolbar).
smithd Posted June 19, 2019
Of the messaging options, plain queues should be the fastest message-oriented choice, because everything else (I think) is built on top of them; notifiers, channels, etc. use queues. All of these options are pretty quick.
Last I remember reading, user event queues rely on some of the same underlying bits, but the UI-oriented and pub-sub nature of the code makes them a bit slower. Still, they are generally fast enough for anything anyone ever uses them for. User events are completely nondeterministic (technically every option in this category is nondeterministic, but user events behave like garbage if you stress them). Property nodes obviously require locking the UI thread and suck, but control indices are sufficiently fast for most anything.
If you eliminate the update-oriented part -- just sharing scalar values, in other words -- then the fastest would be a global (requires a lock for write, but I think the read side doesn't), then a DVR in read-only mode, then a standard DVR or FGV implementation (both need to lock a mutex). These are all faster than the message-oriented variations, obviously, but that's because you get less than half the functionality.
styrum Posted June 19, 2019
The matching discussion on the NI forums: https://forums.ni.com/t5/LabVIEW/Eleven-Ways-to-Update-an-Indicator-from-within-a-subVI-Their/td-p/3938618
drjdpowell Posted June 19, 2019
A note on "messaging": a messaging system is one where different bits of information, "messages", come through and are handled one by one at the same point. Because different messages are mixed together, the communication cannot be lossy. Even if none of your messages represent must-not-be-missed commands, you would still have the problem of missing the latest update of an indicator because of an update to a different indicator. This is different from using multiple separate (possibly lossy) communication methods to update independent indicators (this kind of system often uses the terminology of "tags"). Because they are separate, the latest value of each "tag" is never lost. But this is not "messaging". Whether to use "messaging" or "tags" is another conversation.
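To make that distinction concrete in text form, here is a small hypothetical Python sketch (not LabVIEW and not any of the benchmarked primitives): a single lossy channel carrying mixed messages can drop the only update to one indicator, while one latest-value "tag" per indicator can never lose its newest value to traffic on another tag.

```python
# Hypothetical illustration: shared lossy message channel vs. per-indicator tags.
from collections import deque

# One shared lossy channel, capacity 3: oldest messages fall off when full.
shared_lossy = deque(maxlen=3)
shared_lossy.append(("temperature", 21.5))   # the only update to "temperature"
shared_lossy.append(("pressure", 1.01))
shared_lossy.append(("pressure", 1.02))
shared_lossy.append(("pressure", 1.03))      # pushes the temperature update out

print(list(shared_lossy))
# [('pressure', 1.01), ('pressure', 1.02), ('pressure', 1.03)]
# -> the latest (and only) temperature value is gone.

# Separate tags: one latest-value slot per indicator, so updates to "pressure"
# can never evict the latest "temperature".
tags = {}
for name, value in [("temperature", 21.5), ("pressure", 1.01),
                    ("pressure", 1.02), ("pressure", 1.03)]:
    tags[name] = value                        # lossy, but only within each tag

print(tags)   # {'temperature': 21.5, 'pressure': 1.03}
```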
ShaunR Posted June 19, 2019
3 hours ago, drjdpowell said: "A note on 'messaging': a messaging system is one where different bits of information, 'messages', come through and are handled one by one at the same point. ... Because they are separate, the latest value of each 'tag' is never lost. But this is not 'messaging'."
I disagree with this characterisation. Losslessness is not a criterion for whether something is a message-based system, and neither is central handling. Put simply, messages are just descriptors with or without data, and a "tag" is just a message without data.
drjdpowell Posted June 19, 2019
Substitute more intuitive terms, then. My point is that the OP is criticising "aggressively promoted frameworks", DQMH I guess, as if the point of such things were raw indicator update speed.
styrum Posted June 19, 2019
It is indeed amazing and sad how attractive and popular the straw-man argument is. I won't even say anything else.
ShaunR Posted June 19, 2019
4 hours ago, drjdpowell said: "Substitute more intuitive terms, then. My point is that the OP is criticising 'aggressively promoted frameworks', DQMH I guess, as if the point of such things were raw indicator update speed."
I can't substitute terms, because you defined a specific architecture as the generic "message", which is what I disagree with.
OP: Just for fun and giggles, turn on "Synchronous Display" on the indicators.
smithd Posted June 20, 2019
The terminology NI was using a few years ago was tag/stream/message, and I believe the descriptions are as follows:
- Messages have a non-fixed rate and mixed data types, and should generally be as lossless as possible with minimal latency, with the note that anything requiring confirmation of action must be request-response, which takes you down the rabbit hole of idempotence (i.e. if the response is lost and the request is reissued, does the customer get two orders?). Messages are usually 1:N (N workers for 1 producer) or N:1 (N clients for 1 process), but this isn't required.
- Streams have a fixed rate and a fixed type; latency generally isn't an issue but throughput is. Losslessness is a must. Usually 1:1.
- Tags are completely lossy, with a focus on minimal latency. The add-on to this would be an "update", which is basically a message-oriented tag (e.g. a notifier or single-element queue). Usually 1:N with 1 writer and N readers.
All three overlap, but I think these three concepts make sense.
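As a rough, non-authoritative sketch of how those three concepts differ (plain Python with illustrative names only, not any NI API): a tag keeps just the latest value, a stream is a lossless bounded FIFO whose writer blocks rather than drops, and a message queue delivers every mixed-type message to a single receive point.

```python
# Illustrative tag / stream / message primitives; names are assumptions, not an NI API.
import queue
import threading

class Tag:
    """Lossy latest-value: readers always see the newest write, history is dropped."""
    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial
    def write(self, value):
        with self._lock:
            self._value = value
    def read(self):
        with self._lock:
            return self._value

class Stream:
    """Lossless, fixed-type, bounded FIFO: writer blocks when full (throughput over latency)."""
    def __init__(self, capacity=1024):
        self._q = queue.Queue(maxsize=capacity)
    def write(self, sample):
        self._q.put(sample)        # blocks rather than dropping data
    def read(self):
        return self._q.get()

class Messages:
    """Lossless, mixed payloads, handled one by one at a single receive point."""
    def __init__(self):
        self._q = queue.Queue()    # unbounded; every message is delivered
    def send(self, topic, payload=None):
        self._q.put((topic, payload))
    def receive(self, timeout=None):
        return self._q.get(timeout=timeout)
```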
styrum Posted June 20, 2019
The NI definitions of tag, stream, and message/command are given, for example, in this cRIO guide (p. 29): http://www.ni.com/pdf/products/us/fullcriodevguide.pdf
ShaunR Posted June 20, 2019
7 hours ago, smithd said: "The terminology NI was using a few years ago was tag/stream/message, and I believe the descriptions are as follows: ..."
SOAP also defines a specific "message" pattern. These are all product-specific.