Bhaskar Peesapati Posted October 10, 2020

I have a design question. I am designing a system which transfers a large amount of data (about 12 MB every 500 ms) for displaying, saving, etc. Of the two choices presented below, which one will be the better choice? Thanks in advance for all the advice.
LogMAN Posted October 10, 2020

Welcome to Lava!

The loops in both of your examples are connected with Boolean wires, which forces them to execute sequentially. This is certainly not what you want.

Also, both examples are variations of the Producer/Consumer design pattern. You probably want to store data losslessly, so a queue is fine. Just make sure that the queue does not eat up all of your memory (storing data should be faster than capturing new data). Maybe use TDMS files for efficient data storage.

Use events if you have a command-like process with multiple sources (i.e. the UI loop could send a command to the data loop to change the output file).

Displaying data is very slow and resource-intensive. You should think about how much data to display and how often to update. It is probably not a good idea to update 24 MB worth of data every second. A good rule of thumb is to update once every 100 ms (which is fast enough to give the user the impression of being responsive) and only show the minimum amount of data necessary. In this case you could utilize the data queue and simply check for the next value every 100 ms, then decimate and display the data. Here is an example:
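In rough Python terms (the real example would be a LabVIEW diagram; `acquire_block`, the TDMS write, and the decimation factor below are all stand-ins, not anything from the original snippet):

```python
import queue
import threading
import time

data_queue = queue.Queue()   # lossless path: the storage loop sees every block
latest = [None]              # lossy snapshot for the display loop
stop = threading.Event()

def acquire_block():
    """Stand-in for the DAQ read: ~12 MB every 500 ms, per the original post."""
    time.sleep(0.5)
    return bytes(12_000_000)

def producer():
    while not stop.is_set():
        block = acquire_block()
        data_queue.put(block)    # lossless: queued for the storage loop
        latest[0] = block        # lossy: the UI only needs the newest block

def storage_loop():
    while not stop.is_set():
        try:
            block = data_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        # write `block` to a TDMS file here (omitted)

def ui_loop():
    while not stop.is_set():
        time.sleep(0.1)                     # ~100 ms updates feel responsive
        if latest[0] is not None:
            decimated = latest[0][::1000]   # display a reduced subset only
            # update the graph with `decimated` here (omitted)

for loop in (producer, storage_loop, ui_loop):
    threading.Thread(target=loop, daemon=True).start()
```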
Bhaskar Peesapati Posted October 10, 2020

Thank you very much for your reply. The Boolean is wired to the stop terminal only to indicate termination. The data is being written to TDMS files. In reality, the different loops are independent.

Essentially what you are saying is that only one loop dequeues and the other loops only examine the element and perform the necessary operations on it. That way only one queue is necessary. The data block is in a DVR, so there is only one copy of the data block.

In the case of events, the producer triggers the event to indicate new data is present and the consumers process the data. Is there a drawback to this?

Again, I thank you very much for your reply.
LogMAN Posted October 10, 2020

1 hour ago, Bhaskar Peesapati said: "Essentially what you are saying is that only one loop dequeues and the other loops only examine the element and perform the necessary operations on it. That way only one queue is necessary."

That is correct. Since the UI loop can run at a different speed, there is no need to send it all the data. It can simply look up the current value from the data queue at its own pace without any impact on the other loops.

1 hour ago, Bhaskar Peesapati said: "The data block is in a DVR, so there is only one copy of the data block."

How is a DVR useful in this scenario? Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code.

1 hour ago, Bhaskar Peesapati said: "In the case of events, the producer triggers the event to indicate new data is present and the consumers process the data. Is there a drawback to this?"

Events are not the right tool for continuous data streaming. It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered. Each Event Structure receives its own data copy for every event, and each Event Structure must process every event (unless you want to fiddle with the event queue 😱). If events are processed slower than the producer triggers them, the event queue will eventually use up all memory, which means that the producer must run slower than the slowest consumer, which is a no-go. You probably want your producer to run as fast as possible.

Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).
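To make the memory argument concrete, here is a small stand-alone Python sketch (purely illustrative: the unbounded queue plays the role of an event registration queue, the bounded one of a sized data queue; block sizes and rates are made up):

```python
import queue
import threading
import time

def producer(q, n_blocks):
    for _ in range(n_blocks):
        q.put(bytes(1_000_000))   # 1 MB blocks for the demo
        time.sleep(0.05)          # 20 blocks/s

def slow_consumer(q, n_blocks):
    for _ in range(n_blocks):
        q.get()
        time.sleep(0.2)           # 5 blocks/s: slower than the producer

# With queue.Queue() (unbounded, like an event registration queue) the
# producer never blocks, so the backlog and its memory grow without bound.
# With a bounded queue, put() blocks when full, throttling the producer
# to the slowest consumer's pace instead of exhausting memory.
q = queue.Queue(maxsize=8)
threading.Thread(target=slow_consumer, args=(q, 100), daemon=True).start()
producer(q, 100)
```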
Neil Pate Posted October 10, 2020

1 hour ago, LogMAN said: "Events are not the right tool for continuous data streaming. […]"

I exclusively use events for messages and data, even for super-high-rate data. The trick is to have multiple events so that processes can listen to just those they care about.
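A loose Python analogue of this publish/subscribe idea, with invented names (in LabVIEW terms, each delivery queue corresponds to a per-consumer event registration):

```python
import queue
from collections import defaultdict

class EventBus:
    """One owner, many optional listeners; topic names are invented here."""

    def __init__(self):
        self._subs = defaultdict(list)   # topic -> subscriber queues

    def register(self, topic):
        """Consumer side: obtain a private delivery queue for one topic."""
        q = queue.Queue()
        self._subs[topic].append(q)
        return q

    def fire(self, topic, data):
        """Producer side: publish without knowing who (if anyone) listens."""
        for q in self._subs[topic]:
            q.put(data)                  # each registration gets its own delivery

bus = EventBus()
raw_data = bus.register("raw-data")      # e.g. the logging process subscribes
bus.fire("raw-data", b"\x00" * 1024)     # fires whether or not anyone listens
bus.fire("status", "idle")               # no subscribers: silently dropped
print(raw_data.get())                    # the logging process receives its copy
```

The key point survives translation: the producer fires into topics, not at consumers.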
Bhaskar Peesapati Posted October 10, 2020

Your reply makes sense. In the case of using queues, how can I ensure that the process which is removing the element does not run before the other processes have worked on the current element? The TCP/IP process is like the display process. Thanks again for your insights.
LogMAN Posted October 11, 2020 (edited)

15 hours ago, Neil Pate said: "I exclusively use events for messages and data, even for super-high-rate data. The trick is to have multiple events so that processes can listen to just those they care about."

Doesn't that require the producer to know about its consumers?

14 hours ago, Bhaskar Peesapati said: "In the case of using queues, how can I ensure that the process which is removing the element does not run before the other processes have worked on the current element?"

You don't. Previewing queues is a lossy operation. If you want lossless data transfer to multiple concurrent consumers, you need multiple queues or a single user event that is used by multiple consumers.

If the data loop dequeues an element before the UI loop could get a copy, the UI loop will simply have to wait for the next element. This shouldn't matter as long as the producer adds new elements fast enough. Note that I'm assuming a sampling rate of >100 Hz with a UI update rate somewhere between 10 and 20 Hz. For anything slower, previewing queue elements is not an optimal solution.
Neil Pate Posted October 11, 2020 (edited)

4 hours ago, LogMAN said: "Doesn't that require the producer to know about its consumers?"

No, not at all. My producers just publish data onto their own (self-created and managed) User Event. Consumers can choose to register for this event if they care about the information being generated. The producer has absolutely no idea who or even how many are consuming the data.
Bhaskar Peesapati Posted October 11, 2020

5 hours ago, LogMAN said: "You don't. Previewing queues is a lossy operation. If you want lossless data transfer to multiple concurrent consumers, you need multiple queues or a single user event that is used by multiple consumers. […]"

From this discussion I understand this: my requirement is that I process all data without loss. This means I need to have multiple queues, so the data storage requirement is 12 MB times the number of consumers. With one DVR, that will be 12 MB of storage plus one byte to indicate to the consuming events that new data is available. It is true, as you mentioned, that if a consumer does not process data fast enough (like in a logging process) it could use up memory, which I need to guard against. Even though events are not designed for data processing, the data storage requirement does not increase with the number of consumers. All consumers will listen only for a 'NewDataAvailable' trigger, and all of the event handlers can read from the same DVR. I appreciate any corrections to my thinking. Thanks
LogMAN Posted October 11, 2020

1 hour ago, Neil Pate said: "No, not at all. My producers just publish data onto their own (self-created and managed) User Event. […]"

Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part:

20 hours ago, Neil Pate said: "The trick is to have multiple events so that processes can listen to just those they care about."

If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄
LogMAN Posted October 11, 2020 (edited)

37 minutes ago, Bhaskar Peesapati said: "All consumers will listen only for a 'NewDataAvailable' trigger, and all of the event handlers can read from the same DVR."

This will force your consumers to execute sequentially, because only one of them gets to access the DVR at any given time, which is similar to connecting VIs with wires. You could enable Allow Parallel Read-only Access, so all consumers can access it at the same time, but then there could be multiple data copies.

37 minutes ago, Bhaskar Peesapati said: "My requirement is that I process all data without loss. This means I need to have multiple queues, so the data storage requirement is 12 MB times the number of consumers. […]"

Have you considered sequential processing? Each consumer could pass the data to the next consumer when it is done. That way each consumer acts as a producer for the next consumer until there is no more consumer. It won't change the amount of memory required, but at least the data isn't quadrupled and you can get rid of those DVRs (seriously, they will haunt you eventually).
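A minimal Python sketch of that pipeline idea (the `work` callbacks are empty stand-ins for the TDMS and TCP/IP stages):

```python
import queue
import threading

def stage(inbox, outbox, work):
    """Consume from `inbox`, do `work`, then act as producer for `outbox`."""
    while True:
        block = inbox.get()
        if block is None:            # shutdown sentinel
            if outbox is not None:
                outbox.put(None)     # propagate shutdown downstream
            return
        work(block)                  # e.g. log to TDMS, send over TCP/IP
        if outbox is not None:
            outbox.put(block)        # forward the same reference: no copy

log_inbox, tcp_inbox = queue.Queue(), queue.Queue()
threading.Thread(target=stage,
                 args=(log_inbox, tcp_inbox, lambda b: None)).start()  # logging stage
threading.Thread(target=stage,
                 args=(tcp_inbox, None, lambda b: None)).start()       # TCP/IP stage

log_inbox.put(bytes(12_000_000))     # producer hands one 12 MB block over
log_inbox.put(None)                  # shut the pipeline down
```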
Bhaskar Peesapati Posted October 12, 2020

7 hours ago, LogMAN said: "This will force your consumers to execute sequentially, because only one of them gets to access the DVR at any given time […] you can get rid of those DVRs (seriously, they will haunt you eventually)."

Looks like you do not like to use DVRs. I read and write to the DVR as a property in a class in the following way only. Do you still think I might run into problems? If so, what do you suspect? The write or SET happens in only one place.
Neil Pate Posted October 12, 2020

13 hours ago, LogMAN said: "If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. […]"

My consumers always (by design) run faster than the producer. At some point any architecture is going to fall over, even with the biggest buffer in the world, if data is building up anywhere. User Events or queues or whatever, if you need lossless data it is being "built up" somewhere.
drjdpowell Posted October 12, 2020 (edited)

7 hours ago, Bhaskar Peesapati said: "Looks like you do not like to use DVRs. I read and write to the DVR as a property in a class in the following way only. […]"

The way you're using DVRs here, pulling the full data out of the in-place structure, forces a lot of copying. You have to do the work inside the structure if you wish to prevent copying.
LogMAN Posted October 12, 2020

6 hours ago, Bhaskar Peesapati said: "Looks like you do not like to use DVRs. […] Do you still think I might run into problems? If so, what do you suspect?"

I have only one project that uses multiple DVRs to keep large chunks of data in memory, which are accessed by multiple concurrent processes for various reasons. It works, but it is very difficult to follow the data flow without a chart that explains how the application works. In many cases there are good alternatives that don't require DVRs and which are easier to maintain in the long run. The final decision is yours, of course. I'm not saying that they won't work; you should just be aware of the limitations and feel comfortable using and maintaining them. For sure I'll not encourage you to use them until all other options are exhausted.

1 hour ago, Neil Pate said: "At some point any architecture is going to fall over, even with the biggest buffer in the world, if data is building up anywhere. User Events or queues or whatever, if you need lossless data it is being 'built up' somewhere."

I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it comes across that way. My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system, and I don't know of a simple way to access a subset of data using events when the producer didn't specifically account for that.
Neil Pate Posted October 12, 2020 (edited)

10 hours ago, LogMAN said: "I agree. To be clear, it is not my intention to argue against events for sending data between loops. […] My point is that the graphical user interface probably doesn't need lossless data […]"

No need to apologise, it did not come across like that at all.

There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the "data in" type event and then just ignore it if it's not sensible to process. Assuming your GUI draw routines are pretty fast, it's just about finding the sweet spot of updating the GUI at a sensible rate while being able to get back to processing (maybe ignoring!) the next incoming chunk.

That said, I normally just update the whole GUI! I try and aim for about a 10 Hz update rate, so things like DAQ or DMA FIFO reads chugging along at 10 Hz effectively form a metronome for everything. I have done some work on a VST with a data rate around 100 MS/s for multiple channels, and I was able to pretty much plot that in close to real time. Totally unnecessary, yes, but possible.
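Sketched in Python, that "sweet spot" logic can be as simple as this (the queue and the plotting step are stand-ins; every chunk is consumed so nothing backs up, but the expensive redraw runs at most ~10 Hz):

```python
import queue
import time

gui_queue = queue.Queue()   # fed by the producer (stand-in here)
REDRAW_PERIOD = 0.1         # ~10 Hz, the rate mentioned above

def gui_consumer():
    last_draw = 0.0
    while True:
        chunk = gui_queue.get()          # always consume, so no backlog builds
        now = time.monotonic()
        if now - last_draw >= REDRAW_PERIOD:
            # expensive plotting of `chunk` would go here (omitted)
            last_draw = now
        # otherwise this chunk is simply not drawn (only the display is lossy)
```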
LogMAN Posted October 13, 2020

10 hours ago, Neil Pate said: "There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the 'data in' type event and then just ignore it if it's not sensible to process. […]"

My consumers also tend to update the whole GUI if it doesn't impact the process negatively (it rarely does). I was looking for a solution that doesn't require each consumer to receive its own copy, in order to save memory. But as @Bhaskar Peesapati already clarified, there are multiple consumers that need to work losslessly, which changes everything. Discarding events will certainly prevent the event queue from running out of memory.

I actually have a project where I decided to use the Actor Framework to separate data processing from the UI, and which filters UI update messages to keep them at about 10 Hz. Same thing, but with a bunch of classes. I'm pretty sure there are not many ways to write more code for such a simple task 😅
Neil Pate Posted October 13, 2020

RAM is virtually free these days. As much as I love and absolutely strive for efficiency, there is just no point in sweating about several MB of memory. There is no silver bullet; if I need to do multiple things with a piece of data it is often so much easier to just make a copy and forget about it after that (so multiple queues, multiple consumers of a User Event, whatever). It is not uncommon for a PC to have 32 GB of RAM, and even assuming we are using only 32-bit Windows, that still means nearly 3 GB of RAM available for your application, which is actually an insane amount.
Reds Posted October 15, 2020

I have to admit I didn't read every word of this thread... but...

In this type of scenario, I would normally use a Notifier, not a Queue. If the GUI loop can run fast enough to display all of that data, that's great. If not, some data is skipped. A lossy Notifier is perfect for this scenario.
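For reference, a Notifier is a single-slot, overwrite-on-send mechanism; a rough Python analogue (names invented) might look like:

```python
import threading

class Notifier:
    """Single slot, lossy: send() overwrites; slow readers skip old values."""

    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._seq = 0

    def send(self, value):
        """Overwrite the stored value; anything unread is lost."""
        with self._cond:
            self._value = value
            self._seq += 1
            self._cond.notify_all()

    def wait(self, last_seq=0):
        """Block until a value newer than `last_seq` exists, then return it."""
        with self._cond:
            self._cond.wait_for(lambda: self._seq > last_seq)
            return self._value, self._seq

# GUI loop usage: value, seq = n.wait(); draw(value); value, seq = n.wait(seq)
```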
drjdpowell Posted October 16, 2020

7 hours ago, Reds said: "In this type of scenario, I would normally use a Notifier, not a Queue. […] A lossy Notifier is perfect for this scenario."

A Notifier is only usable if you have only one type of message coming in and your display doesn't include any history, like a chart. Personally, I keep the data transfer lossless, but just make the actual front panel display update lossy.
bmoyer Posted October 16, 2020 (edited)

I was wondering how you make the front panel display update lossy. Currently, when I want to do this, I check the time of the event, and if the event is, let's say, >1 s old, I don't bother updating the display/indicator. Is that how you go about it, or is there another way? Is there a way to only get the latest event and ignore the rest? I'm asking this mainly from the perspective of using your Messenger Library toolkit.

Thanks, Bruce
crossrulz Posted October 16, 2020

1 hour ago, bmoyer said: "Is there a way to only get the latest event and ignore the rest?"

You can use the Flush Event Queue function to clear out old events.
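The same move with an ordinary queue, sketched in Python: drain whatever has piled up and keep only the newest item.

```python
import queue

def flush_and_keep_latest(q):
    """Wait for at least one item, then discard any older backlog."""
    item = q.get()                  # block until something arrives
    while True:
        try:
            item = q.get_nowait()   # a newer item replaces the older one
        except queue.Empty:
            return item             # only the newest item survives
```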
LogMAN Posted October 16, 2020

+1 for flushing the event queue.

Here is another solution that involves unregistering and re-registering user events. Whenever the event fires, the event registration is destroyed. At the same time it sets the timeout for the timeout case, which will re-register the event and disable the timeout case again. This could be useful in situations where the consumer runs orders of magnitude slower than the producer, in which case it makes sense to reconstruct the queue every time the consumer desires a new value. I haven't done any benchmarks, nor used it in any real-world application so far, but it works.
drjdpowell Posted October 16, 2020

1 hour ago, bmoyer said: "I was wondering how you make the front panel display update lossy. […] Is there a way to only get the latest event and ignore the rest?"

It's documented in the DEV Template. I use a feedback timeout that defaults to 0 ms from every case but the Timeout case (which feeds back -1). This makes the Timeout case execute exactly once after the event queue is empty. Use a "display needs update" Boolean flag to record whether the display needs updating, and act on it in the Timeout case. One can do this with queues as well.

With this, one is able to look at all incoming data but do expensive display operations only as often as possible (but as quickly as possible). So you could, for example, have information coming in at 100 messages a second and display using fancy techniques (like complex 2D pictures) that take 100 ms or more. This is more responsive and looks more robust than the "drop old stuff" technique.
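A rough Python translation of that feedback-timeout trick (the inbox and the redraw are stand-ins; `None` plays the role of LabVIEW's -1 ms "wait forever"):

```python
import queue

inbox = queue.Queue()      # stand-in for the event registration queue
timeout = None             # None = wait forever (LabVIEW: -1 ms)
needs_update = False
latest = None

while True:
    try:
        msg = inbox.get(timeout=timeout)
    except queue.Empty:        # the "Timeout case": runs once the inbox drains
        if needs_update:
            pass               # expensive redraw of `latest` goes here (omitted)
        needs_update = False
        timeout = None         # feed back -1: wait forever again
        continue
    latest = msg               # cheap per-message bookkeeping only
    needs_update = True
    timeout = 0                # feed back 0 ms: drain the queue before drawing
```

Because handling any message sets the timeout to 0 ms, the `Empty` branch fires exactly once as soon as the backlog is drained, so the redraw always uses the newest data without ever blocking intake.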
bmoyer Posted October 16, 2020

Thanks for everyone's feedback. I wasn't aware of the Flush Event Queue, so if needed, hopefully I'll remember to use it in the future.

As for that comment in the Timeout case, I guess I never fully understood or saw an example of the concept being implemented, so I wasn't exactly sure what it meant. So I'm guessing that I put the data in the Actor Internal Data cluster along with a "needs display update" Boolean (for each type of data that I want with a delayed display). When the code returns to the Timeout case, read the various update Booleans and act on them there, or (probably a cleaner method) call the cases (in the "Follow-on Actions and Error Handling") that update those indicators/controls. Is that about right?