
Welcome to Lava!

The loops in both of your examples are connected with Boolean wires, which forces them to execute sequentially. This is certainly not what you want. Also, both examples are variations of the Producer/Consumer design pattern.

You probably want to store data losslessly, so a queue is fine. Just make sure that the queue does not eat up all of your memory (storing data should be faster than capturing new data). Maybe use TDMS files for efficient data storage. Use events if you have a command-like process with multiple sources (e.g. the UI loop could send a command to the data loop to change the output file).

Displaying data is very slow and resource intensive. You should think about how much data to display and how often to update. It is probably not a good idea to update 24 MB worth of data every second. A good rule of thumb is to update once every 100 ms (which is fast enough to give the user the impression of being responsive) and only show the minimum amount of data necessary.

In this case you could utilize the data queue and simply preview the next value every 100 ms, then decimate and display the data. Here is an example:
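In text form, that pattern might look like the following Python sketch (not LabVIEW; `peek_latest` and `decimate` are invented helper names standing in for Preview Queue Element and array decimation):

```python
import queue

def decimate(samples, factor):
    """Keep every `factor`-th sample; a cheap lossy reduction for display."""
    return samples[::factor]

def peek_latest(q):
    """Look at the newest queued block without consuming it, so the data loop
    still receives every element (rough analogue of Preview Queue Element)."""
    with q.mutex:                       # guard the Queue's underlying deque
        return q.queue[-1] if q.queue else None

# producer side: lossless enqueue of acquired blocks
data_q = queue.Queue()
data_q.put(list(range(1000)))

# UI side, every ~100 ms: peek, decimate, display
block = peek_latest(data_q)
if block is not None:
    print(len(decimate(block, 10)))     # 100 points instead of 1000
```

The UI loop never dequeues, so the lossless consumer is unaffected by however slowly the display runs.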

[image: example block diagram]


Thank you very much for your reply. The Boolean is wired to the stop terminal only to indicate termination. The data is being written to TDMS files. In reality the different loops are independent. Essentially what you are saying is that only one loop dequeues, and the other loops only examine the element and perform the necessary operations on it. That way only one queue is necessary. The data block is in a DVR, so there is only one copy of the data block. In the case of events, the producer triggers the event to indicate new data is present and the consumers process the data. Is there a drawback to this? Again, I thank you very much for your reply.

1 hour ago, Bhaskar Peesapati said:

Essentially what you are saying is that only one loop dequeues, and the other loops only examine the element and perform the necessary operations on it. That way only one queue is necessary.

That is correct. Since the UI loop can run at a different speed, there is no need to send it all data. It can simply look up the current value from the data queue at its own pace without any impact on the other loops.

1 hour ago, Bhaskar Peesapati said:

The data block is in a DVR, so there is only one copy of the data block.

How is a DVR useful in this scenario?

Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code.

1 hour ago, Bhaskar Peesapati said:

In the case of events, the producer triggers the event to indicate new data is present and the consumers process the data. Is there a drawback to this?

Events are not the right tool for continuous data streaming.

  • It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered.
  • Each Event Structure receives its own data copy for every event.
  • Each Event Structure must process every event (unless you want to fiddle with the event queue 😱).
  • If events are processed slower than the producer triggers them, the event queue will eventually use up all memory, which means that the producer must run slower than the slowest consumer, which is a no-go. You probably want your producer to run as fast as possible.

Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).
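The memory argument behind the last bullet can be shown in a few lines. This is a toy Python model of LabVIEW user events (the `UserEvent` class and its methods are invented for illustration): each registration owns its own queue, so a consumer that falls behind accumulates a private backlog:

```python
import queue

class UserEvent:
    """Toy model: each registration owns a queue; firing the event
    puts a copy of the payload into every registration."""
    def __init__(self):
        self._registrations = []

    def register(self):
        q = queue.Queue()
        self._registrations.append(q)
        return q

    def fire(self, payload):
        for q in self._registrations:   # one element per Event Structure
            q.put(payload)

evt = UserEvent()
fast, slow = evt.register(), evt.register()
for i in range(1000):
    evt.fire(i)
    fast.get()      # the fast consumer keeps up
# the slow consumer never read anything: 1000 elements are still pending
print(slow.qsize())  # 1000
```

Unless events are discarded somewhere, the slow registration grows without bound, which is why the producer would otherwise have to throttle to the slowest consumer.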

1 hour ago, LogMAN said:

That is correct. Since the UI loop can run at a different speed, there is no need to send it all data. It can simply look up the current value from the data queue at its own pace without any impact on the other loops.

How is a DVR useful in this scenario?

Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code.

Events are not the right tool for continuous data streaming.

  • It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered.
  • Each Event Structure receives its own data copy for every event.
  • Each Event Structure must process every event (unless you want to fiddle with the event queue 😱).
  • If events are processed slower than the producer triggers them, the event queue will eventually use up all memory, which means that the producer must run slower than the slowest consumer, which is a no-go. You probably want your producer to run as fast as possible.

Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).

I exclusively use events for messages and data,even for super high rate data. The trick is to have multiple events so that processes can listen to just those they care about.


Your reply makes sense. In the case of using queues, how can I ensure that the process which removes the element does not run before the other processes have worked on the current element? The TCP/IP process is like the display process. Thanks again for your insights.

[image: block diagram]

15 hours ago, Neil Pate said:

I exclusively use events for messages and data,even for super high rate data. The trick is to have multiple events so that processes can listen to just those they care about.

Doesn't that require the producer to know about its consumers?

14 hours ago, Bhaskar Peesapati said:

In the case of using queues, how can I ensure that the process which removes the element does not run before the other processes have worked on the current element? The TCP/IP process is like the display process.

You don't. Previewing queues is a lossy operation. If you want lossless data transfer to multiple concurrent consumers, you need multiple queues or a single user event that is used by multiple consumers.

If the data loop dequeues an element before the UI loop could get a copy, the UI loop will simply have to wait for the next element. This shouldn't matter as long as the producer adds new elements fast enough. Note that I'm assuming a sampling rate of >100 Hz with a UI update rate somewhere between 10 and 20 Hz. For anything slower, previewing queue elements is not an optimal solution.
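Sketched in Python (the names are invented; in LabVIEW this would be one queue per consumer loop), the multiple-queue option is just a fan-out: the producer enqueues into every consumer's queue, so each loop dequeues losslessly at its own pace:

```python
import queue

def make_fanout(n):
    """One queue per lossless consumer; publishing touches all of them."""
    qs = [queue.Queue() for _ in range(n)]
    def publish(block):
        for q in qs:
            q.put(block)            # each consumer gets its own element
    return qs, publish

(disk_q, tcp_q), publish = make_fanout(2)
publish([1, 2, 3])
print(disk_q.get(), tcp_q.get())    # both consumers see the same block
```

A slow consumer only backs up its own queue; the other consumers and the producer are unaffected until memory actually runs out.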

4 hours ago, LogMAN said:

Doesn't that require the producer to know about its consumers?

No, not at all. My producers just publish data onto their own (self-created and managed) User Event. Consumers can choose to register for this event if they care about the information being generated. The producer has absolutely no idea who or even how many are consuming the data.

5 hours ago, LogMAN said:

Doesn't that require the producer to know about its consumers?

You don't. Previewing queues is a lossy operation. If you want lossless data transfer to multiple concurrent consumers, you need multiple queues or a single user event that is used by multiple consumers.

If the data loop dequeues an element before the UI loop could get a copy, the UI loop will simply have to wait for the next element. This shouldn't matter as long as the producer adds new elements fast enough. Note that I'm assuming a sampling rate of >100 Hz with a UI update rate somewhere between 10 and 20 Hz. For anything slower, previewing queue elements is not an optimal solution.

From this discussion I understand this: my requirement is to process all data without loss, which means I need multiple queues. That means 12 MB * # of consumers is the data storage requirement. With one DVR that would be 12 MB of storage plus 1 byte to indicate to the consuming events that new data is available. It is true, as you mentioned, that if a consumer does not process data fast enough (like in a logging process) it could use up memory, which I need to guard against. Even though events are not designed for data processing, the data storage requirement does not increase with the number of consumers. All consumers will listen only for the 'NewDataAvailable' trigger, and all of the events can read from the same DVR. I appreciate any corrections to my thinking. Thanks

1 hour ago, Neil Pate said:

No, not at all. My producers just publish data onto their own (self-created and managed) User Event. Consumers can choose to register for this event if they care about the information being generated. The producer has absolutely no idea who or even how many are consuming the data.

Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part:

20 hours ago, Neil Pate said:

The trick is to have multiple events so that processes can listen to just those they care about.

If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄

37 minutes ago, Bhaskar Peesapati said:

All consumers will listen only for the 'NewDataAvailable' trigger, and all of the events can read from the same DVR.

This will force your consumers to execute sequentially, because only one of them gets to access the DVR at any given time, which is similar to connecting VIs with wires.
You could enable Allow Parallel Read-only Access so that all consumers can access it at the same time, but then there could be multiple data copies.

37 minutes ago, Bhaskar Peesapati said:

My requirement is that I process all data without loss. This means I need to have multiple queues. That means 12 MB * # of of consumers is the data storage requirement. With 1 DVR that will be 12 MB storage plus 1 byte to indicate to consuming events that new data is available.

Have you considered sequential processing?

Each consumer could pass the data to the next consumer when it is done. That way each consumer acts as a producer for the next consumer until there are no more consumers.
It won't change the amount of memory required, but at least the data isn't quadrupled and you can get rid of those DVRs (seriously, they will haunt you eventually).
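That daisy chain can be sketched in Python (names invented for illustration): each stage consumes from its inbox, does its work, and produces into the next stage's inbox, so each element flows through every consumer exactly once:

```python
import queue
import threading

def stage(inbox, outbox, work, done):
    """One consumer loop: process each element, then hand it on."""
    while True:
        item = inbox.get()
        if item is None:            # sentinel: shut down and propagate
            if outbox is not None:
                outbox.put(None)
            return
        done.append(work(item))
        if outbox is not None:
            outbox.put(item)        # this consumer produces for the next one

q1, q2 = queue.Queue(), queue.Queue()
logged, sent = [], []
t1 = threading.Thread(target=stage, args=(q1, q2, lambda x: ("log", x), logged))
t2 = threading.Thread(target=stage, args=(q2, None, lambda x: ("tcp", x), sent))
t1.start(); t2.start()
for i in range(3):
    q1.put(i)
q1.put(None)                        # end of data
t1.join(); t2.join()
print(logged, sent)
```

Every consumer sees every element losslessly, yet only one queue element per stage is in flight, so memory stays bounded by the slowest stage rather than by the number of consumers.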

7 hours ago, LogMAN said:

This will force your consumers to execute sequentially, because only one of them gets to access the DVR at any given time, which is similar to connecting VIs with wires.
You could enable Allow Parallel Read-only Access so that all consumers can access it at the same time, but then there could be multiple data copies.

Have you considered sequential processing?

Each consumer could pass the data to the next consumer when it is done. That way each consumer acts as a producer for the next consumer until there are no more consumers.
It won't change the amount of memory required, but at least the data isn't quadrupled and you can get rid of those DVRs (seriously, they will haunt you eventually).

Looks like you do not like to use DVRs. I read and write to the DVR as a class property in the following way only. Do you still think I might run into problems? If so, what do you suspect? The write (SET) happens in only one place.

[image: DVR read/write accessor]

13 hours ago, LogMAN said:

Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part:

If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄

My consumers always (by design) run faster than the producer. At some point any architecture is going to fall over even with the biggest buffer in the world if data is building up anywhere. User Events or queues or whatever, if you need lossless data it is being "built up" somewhere.

7 hours ago, Bhaskar Peesapati said:

Looks like you do not like to use DVRs. I read and write to the DVR as a class property in the following way only. Do you still think I might run into problems? If so, what do you suspect? The write (SET) happens in only one place.

[image: DVR read/write accessor]

The way you're using DVRs here, pulling the full data out of the in-place structure, forces a lot of copying. You have to do the work inside the event structure if you wish to prevent copying.

6 hours ago, Bhaskar Peesapati said:

Looks like you do not like to use DVRs. I read and write to the DVR as a class property in the following way only. Do you still think I might run into problems? If so, what do you suspect? The write (SET) happens in only one place.

[image: DVR read/write accessor]

I have only one project that uses multiple DVRs to keep large chunks of data in memory, which are accessed by multiple concurrent processes for various reasons. It works, but it is very difficult to follow the data flow without a chart that explains how the application works.

In many cases there are good alternatives that don't require DVRs and which are easier to maintain in the long run. The final decision is yours, of course. I'm not saying that they won't work; you should just be aware of the limitations and feel comfortable using and maintaining them. I certainly won't encourage you to use them until all other options are exhausted.

1 hour ago, Neil Pate said:

At some point any architecture is going to fall over even with the biggest buffer in the world if data is building up anywhere. User Events or queues or whatever, if you need lossless data it is being "built up" somewhere.

I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it comes across that way.

My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system and I don't know of a simple way to access a subset of data using events, when the producer didn't specifically account for that.

10 hours ago, LogMAN said:

I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it comes across that way.

My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system and I don't know of a simple way to access a subset of data using events, when the producer didn't specifically account for that.

No need to apologise, it did not come across like that at all.

There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the "data in" type event and then just ignore it if it's not sensible to process. Assuming your GUI draw routines are pretty fast, it's just about finding the sweet spot of updating the GUI at a sensible rate while being able to get back to processing (maybe ignoring!) the next incoming chunk.

That said, I normally just update the whole GUI! I try to aim for about a 10 Hz update rate, so things like DAQ or DMA FIFO reads chug along at 10 Hz, and this effectively forms a metronome for everything. I have done some work on a VST with a data rate around 100 MS/s for multiple channels, and I was able to pretty much plot that in close to real time. Totally unnecessary, yes, but possible.

10 hours ago, Neil Pate said:

No need to apologise, it did not come across like that at all.

There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the "data in" type event and then just ignore it if it's not sensible to process. Assuming your GUI draw routines are pretty fast, it's just about finding the sweet spot of updating the GUI at a sensible rate while being able to get back to processing (maybe ignoring!) the next incoming chunk.

That said, I normally just update the whole GUI! I try to aim for about a 10 Hz update rate, so things like DAQ or DMA FIFO reads chug along at 10 Hz, and this effectively forms a metronome for everything. I have done some work on a VST with a data rate around 100 MS/s for multiple channels, and I was able to pretty much plot that in close to real time. Totally unnecessary, yes, but possible.

My consumers also tend to update the whole GUI if it doesn't impact the process negatively (it rarely does). I was looking for a solution that doesn't require each consumer to receive its own copy, in order to save memory. But as @Bhaskar Peesapati already clarified, there are multiple consumers that need to work losslessly, which changes everything. Discarding events will certainly prevent the event queue from running out of memory. I actually have a project where I decided to use the Actor Framework to separate data processing from the UI, and which filters UI update messages to keep them at about 10 Hz. Same thing, but with a bunch of classes. I'm pretty sure there are not many ways to write more code for such a simple task 😅


RAM is virtually free these days. As much as I love and absolutely strive for efficiency, there is just no point in sweating over several MB of memory. There is no silver bullet; if I need to do multiple things with a piece of data, it is often so much easier to just make a copy and forget about it after that (so multiple queues, multiple consumers of a User Event, whatever).

It is not uncommon for a PC to have 32 GB of RAM, and even assuming we are using only 32-bit Windows, that still means nearly 3 GB of RAM available for your application, which is actually an insane amount.


I have to admit I didn't read every word of this thread...but...

In this type of scenario, I would normally use a Notifier, not a Queue. If the GUI loop can run fast enough to display all of that data, that's great. If not, some data is skipped. A lossy Notifier is perfect for this scenario.

 

7 hours ago, Reds said:

In this type of scenario, I would normally use a Notifier, not a Queue. If the GUI loop can run fast enough to display all of that data, that's great. If not, some data is skipped. A lossy Notifier is perfect for this scenario.

A Notifier is only usable if you have only one type of message coming in and your display doesn't include any history, like a chart.

Personally, I keep the data transfer lossless, but just make the actual front panel display update lossy. 


I was wondering how you make the front panel display update lossy. Currently, when I want to do this, I check the time of the event, and if the event is, let's say, >1 s old, I don't bother updating the display/indicator. Is that how you go about it, or is there another way? Is there a way to only get the latest event and ignore the rest? I'm asking this mainly from the perspective of using your Messenger Library toolkit.

 

Thanks,

Bruce 

1 hour ago, bmoyer said:

I was wondering how you make the front panel display update lossy. Currently, when I want to do this, I check the time of the event, and if the event is, let's say, >1 s old, I don't bother updating the display/indicator. Is that how you go about it, or is there another way? Is there a way to only get the latest event and ignore the rest? I'm asking this mainly from the perspective of using your Messenger Library toolkit.

You can use the Flush Event Queue function to clear out old events.
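In Python terms (a sketch; `flush_keep_latest` is a made-up helper mirroring what flushing the event queue achieves), the consumer discards stale updates and redraws only with the newest one:

```python
import queue

def flush_keep_latest(q):
    """Drain the queue and return only the newest element (None if empty),
    discarding anything the display loop was too slow to show."""
    latest = None
    while True:
        try:
            latest = q.get_nowait()
        except queue.Empty:
            return latest

updates = queue.Queue()
for v in range(5):
    updates.put(v)
print(flush_keep_latest(updates))   # 4 — only the most recent value is drawn
```

This keeps the display responsive at the cost of losing intermediate values, which is usually fine for indicators.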


+1 for flushing the event queue.

Here is another solution that involves unregistering and re-registering user events.
Whenever the event fires, the event registration is destroyed. At the same time, it sets the timeout for the timeout case, which re-registers the event and disables the timeout case again.

This could be useful in situations where the consumer runs much (orders of magnitude) slower than the producer, in which case it makes sense to reconstruct the queue every time the consumer desires a new value. I haven't done any benchmarks, nor used it in any real-world application so far, but it works.

[image: SelfBalancingUserEventStructure]

[image: SelfBalancingUserEventStructure, timeout case]

1 hour ago, bmoyer said:

I was wondering how you make the front panel display update lossy.  Currently, when I want to do this, I check the Time of the event, and if the event is, let's say >1s old, I don't bother updating the display/indicator.  If that how you go about it, or is there another way?  Is there a way to only get the latest event and ignore the rest?  I'm asking this mainly from the perspective of using your Messenger Library toolkit.

It's documented in the DEV Template. I use a feedback timeout that defaults to 0 ms from every case but the Timeout case (which feeds back -1). This makes the Timeout case execute exactly once after the event queue is empty. Use a "Display needs update" Boolean flag to record whether the display needs updating in the Timeout case.

[image: DEV Actor Template block diagram]

One can do this with Queues as well.

With this, one is able to look at all incoming data, but do expensive display operations only as often as possible (and as quickly as possible). So you could, for example, have information coming in 100 times a second, and display it using fancy techniques (like complex 2D pictures) that take 100 ms+. This is more responsive, and looks more robust, than the "drop old stuff" technique.
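The same zero-timeout idea can be sketched outside LabVIEW (Python, with invented names): every pending message is recorded cheaply, and the expensive redraw runs exactly once when the queue goes empty:

```python
import queue

def handle_pending(q, state):
    """Process every pending message, then run the 'timeout case' once."""
    while True:
        try:
            msg = q.get_nowait()        # feedback timeout of 0 ms
        except queue.Empty:
            break                       # queue empty -> timeout case fires once
        state["history"].append(msg)    # cheap lossless bookkeeping
        state["dirty"] = True           # defer the expensive redraw
    if state["dirty"]:                  # the one-shot display update
        state["redraws"] += 1
        state["dirty"] = False

state = {"history": [], "dirty": False, "redraws": 0}
q = queue.Queue()
for v in range(100):                    # burst of 100 incoming values
    q.put(v)
handle_pending(q, state)
print(len(state["history"]), state["redraws"])   # all 100 kept, 1 redraw
```

No data is dropped, yet the display cost is paid once per burst instead of once per message.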

 


Thanks for everyone's feedback. I wasn't aware of the Flush Event Queue, so, if needed, hopefully I'll remember to use it in the future.

As for that comment in the Timeout case, I guess I never fully understood it or saw an example of the concept being implemented, so I wasn't exactly sure what it meant.

So I'm guessing that I put the data in the Actor Internal Data cluster along with a needs-display-update Boolean (for each type of data that I want with a delayed display). When the code returns to the Timeout case, I read the various update Booleans and act on them there, or (probably a cleaner method) call the cases (in the "Follow-on Actions and Error Handling") that update those indicators/controls.

Is that about right?

