
Hi,

I was thinking about the lossy queue issue today that has been discussed on this forum earlier. I came up with a solution that I think should work for most practical use cases. The idea is to proxy the enqueue operation via a user event and handle the user event in a proxy loop or thread. As long as the proxy loop is guaranteed to run at a priority comparable to or higher than the producer loop, and the queue operations themselves are not the bottleneck in the system, this solution should work. The user event can be replaced with a second queue, but user events are more practical to use.

It's getting late at night, so I ask you, dear LAVA gurus: can you find any loopholes? The attached VI is for LabVIEW 8.0.

http://forums.lavag.org/index.php?act=attach&type=post&id=5697
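For readers without LabVIEW 8.0 at hand, here is a rough Python sketch of the pattern (names and sizes are made up for illustration; the attached VI uses a user event and native LabVIEW queues instead):

    import queue
    import threading

    unbounded = queue.Queue()           # stands in for the user event: posting never blocks
    bounded = queue.Queue(maxsize=10)   # the lossy queue the consumer actually reads

    def producer_post(item):
        """Producer side: never blocks, never fails -- like firing the user event."""
        unbounded.put(item)

    def proxy_loop():
        """Proxy thread: performs the potentially blocking lossy enqueue on the producer's behalf."""
        while True:
            item = unbounded.get()
            if item is None:                 # sentinel to shut the proxy down
                return
            if bounded.full():
                try:
                    bounded.get_nowait()     # drop the oldest element to make room
                except queue.Empty:
                    pass
            bounded.put_nowait(item)

    threading.Thread(target=proxy_loop, daemon=True).start()

The producer only ever touches the unbounded hand-off, so its timing does not depend on what the consumer is doing.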

Link to comment

QUOTE(Tomi Maila @ May 2 2007, 02:46 PM)

I was thinking about the lossy queue issue today that has been discussed on this forum earlier (http://forums.lavag.org/index.php?showtopic=3406&pid=16372). I came up with a solution that I think should work for most practical use cases. The idea is to proxy the enqueue operation via a user event and handle the user event in a proxy loop or thread.

Perhaps I'm being a bit thick, but what's the disadvantage of protecting a fixed-size queue with a semaphore? I can see that multiple Enqueue points might experience a slight delay when enqueuing.

Link to comment

QUOTE(eaolson @ May 3 2007, 09:28 AM)

Perhaps I'm being a bit thick, but what's the disadvantage of protecting a fixed-size queue with a semaphore? I can see that multiple Enqueue points might experience a slight delay when enqueuing.

Thinking about it that way, do you need to protect it at all? If you have a non-reentrant wrapper around your enqueue function, then you already have a sort of semaphore on the writing of the queue. If you want to protect both read and write, then make it a functional global wrapper.

Then use the same logic: if you go to enqueue and the queue is full, dequeue one element first. If you don't protect the read, then the worst case is a race condition over which queue entry you drop -- but it is a lossy queue, so it doesn't matter which one you drop.
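For illustration, here is that lossy-enqueue logic as a rough Python sketch (a stand-in for the non-reentrant LabVIEW subVI; the lock below plays the role that non-reentrancy plays in LabVIEW):

    import queue
    import threading

    _write_lock = threading.Lock()   # plays the role of the non-reentrant subVI: one writer at a time

    def lossy_enqueue(q: queue.Queue, item) -> None:
        """Enqueue 'item', dropping the oldest element if the queue is full.
        Only writers are serialized; readers call q.get() directly, unprotected."""
        with _write_lock:
            if q.full():
                try:
                    q.get_nowait()       # a reader may race us for this element,
                except queue.Empty:      # but the queue is lossy, so it doesn't matter
                    pass
            q.put_nowait(item)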

Or am I missing something here?

Link to comment

Or am I missing something here?

Lossy queues are usually needed when the queue-writing process needs to be deterministic to avoid a source buffer overflow, while the queue-consuming process is often nondeterministic. You need a lossy queue because the nondeterministic consumer is sometimes slow to consume the queue and the queue gets full. The problem with locking is that it makes the producer thread nondeterministic: you can no longer rely on always succeeding to enqueue your data, which may result in a source buffer overflow. A common use case is enqueuing the elements from your DAQ hardware buffer, which is of limited size. You need to empty that buffer deterministically to avoid overflow. If you lock your queue, you cannot guarantee this anymore.

EDIT: Read crelf's articles Deterministic Data Acquisition and Control with Real-Time and Deterministic Data Acquisition and Control with Real-Time - Part 2 to learn more about deterministic operations.

Tomi

Link to comment

QUOTE(Tomi Maila @ May 3 2007, 07:25 PM)

Lossy queues are usually needed when the queue-writing process needs to be deterministic to avoid a source buffer overflow, while the queue-consuming process is often nondeterministic. You need a lossy queue because the nondeterministic consumer is sometimes slow to consume the queue and the queue gets full. The problem with locking is that it makes the producer thread nondeterministic: you can no longer rely on always succeeding to enqueue your data, which may result in a source buffer overflow. A common use case is enqueuing the elements from your DAQ hardware buffer, which is of limited size. You need to empty that buffer deterministically to avoid overflow. If you lock your queue, you cannot guarantee this anymore.

Why not make the queue bigger and dequeue it as usual? And if the queue gets nearly full, then flush it and signal a warning to the user.

Eugen

Link to comment

QUOTE(Eugen Graf @ May 3 2007, 09:46 PM)

Why not make the queue bigger and dequeue it as usual? And if the queue gets nearly full, then flush it and signal a warning to the user.

Memory is sometimes limited; if it weren't, you wouldn't need a lossy queue. I don't understand the benefit of warning the user. What can the user do about it?

Tomi

Link to comment

QUOTE(Tomi Maila @ May 3 2007, 09:02 PM)

Memory is sometimes limited; if it weren't, you wouldn't need a lossy queue. I don't understand the benefit of warning the user. What can the user do about it?

He can buy a better PC (faster and with more memory), he can turn off other tasks and processes, he can sample data more slowly, or he can call you and you can recommend one of those to him.

I think it's wrong to drop anything without anybody knowing about it; if he post-processes this data, he will see different results than he expects.

Eugen

Link to comment

QUOTE(Tomi Maila @ May 3 2007, 10:25 AM)

I am not questioning the need for lossy queues. I am questioning the need to have some separate messaging mechanism to do a "lossy enqueue". Why signal someone else to do the logic of

  1. check if queue is full
  2. dequeue if full
  3. enqueue

when you can just do it in a subVI. The subVI only blocks multiple writers. How often do you have multiple writers to a single data queue? If that is the case, then you are right -- some messaging mechanism is necessary to decouple them. However, there is no need to lock the reading of the queue; that remains a standard read, so you maintain determinism.

http://forums.lavag.org/index.php?act=attach&type=post&id=5715

QUOTE(eaolson @ May 3 2007, 10:32 AM)

.

AQ is talking about multiple writers accessing the same *native* enqueue function, which is non-blocking. What I am suggesting is very similar to what Tomi suggested later in that same thread, but he mentioned wrapping all the queue functions into a subVI. That would block both the read and the write and cause the problems he was referring to (above quote).

David

Link to comment

QUOTE(Tomi Maila @ May 3 2007, 12:25 PM)

Lossy queues are usually needed when the queue-writing process needs to be deterministic to avoid a source buffer overflow, while the queue-consuming process is often nondeterministic.

Most of the solutions discussed in these threads seem to be of the (1) dequeue if full, (2) enqueue element sort. Since step (1) may or may not execute, these all seem guaranteed not to be deterministic. If determinism is important, you probably want an RT FIFO. Since these can overwrite data when the FIFO is full, they seem to act as lossy queues, though they don't work with all datatypes.
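As a text-language analogue of that overwrite-on-full behaviour, Python's collections.deque with a maxlen does something similar: appending to a full buffer silently discards the oldest element (this only illustrates the semantics, not how LabVIEW RT FIFOs are implemented):

    from collections import deque

    fifo = deque(maxlen=3)
    for sample in range(5):
        fifo.append(sample)   # never blocks; the oldest value is discarded when full

    print(list(fifo))         # -> [2, 3, 4]; samples 0 and 1 were lost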

Link to comment

It seems like your approach will work. I wrote a different version of your method using only queues and no user events. The wait function in the proxy loop can be removed without problem. I just kept it to be consistent with your implementation.

http://forums.lavag.org/index.php?act=attach&type=post&id=5718

http://forums.lavag.org/index.php?act=attach&type=post&id=5719 (LV8.0)

Let's not forget that an event structure is just a queue after all. I like queues over event structures because you can see the inner workings. Your producer event is basically an unbounded queue, which is what I've created in my example. In order for all of this to work, you need to make sure the proxy loop is much faster than the producer loop. Otherwise, your producer queue or user event queue will keep growing because the proxy cannot keep up with the dequeuing.

What I would like to see from NI is a feature/switch/setting in the dequeue function that lets us remove a range of elements at once instead of one at a time (return all elements). Then there would be no need for a proxy; the dequeue would simply remove however many elements are available. Edit: Well, now that I think of it, this is currently possible with the Flush Queue function. Perhaps that is the solution here. Not sure.
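In Python terms, that "take everything that is available right now" operation would look roughly like the sketch below (an analogue of Flush Queue with made-up names, not its actual implementation):

    import queue

    def flush(q: queue.Queue) -> list:
        """Remove and return every element currently in the queue, without blocking."""
        items = []
        while True:
            try:
                items.append(q.get_nowait())
            except queue.Empty:
                return items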

Link to comment

QUOTE(Michael_Aivaliotis @ May 4 2007, 09:30 AM)

What I would like to see from NI is a feature/switch/setting in the dequeue function that lets us remove a range of elements at once instead of one at a time (return all elements). Then there would be no need for a proxy; the dequeue would simply remove however many elements are available. Edit: Well, now that I think of it, this is currently possible with the Flush Queue function. Perhaps that is the solution here. Not sure.

I don't think Flush Queue would be a replacement for lossy queues. The problem is that if the consumer loop is nondeterministic, then the queue may still get full, no matter how many elements you are able to dequeue at once. Consider, for example, that your consumer runs in the UI thread and the UI thread gets busy with an external DLL call or a heavy UI refresh. As a result, your queue can get full because your Flush Queue doesn't get called in time. The producer is then unable to enqueue elements, and the determinism of the producer loop is lost as it becomes entangled with the consumer loop.

Link to comment

QUOTE(Tomi Maila @ May 4 2007, 12:08 AM)

I don't think Flush Queue would be a replacement for lossy queues. The problem is that if the consumer loop is nondeterministic, then the queue may still get full, no matter how many elements you are able to dequeue at once. Consider, for example, that your consumer runs in the UI thread and the UI thread gets busy with an external DLL call or a heavy UI refresh. As a result, your queue can get full because your Flush Queue doesn't get called in time. The producer is then unable to enqueue elements, and the determinism of the producer loop is lost as it becomes entangled with the consumer loop.
I agree that the situation you describe is bad; however, it is a simplification of what happens in a properly architected system. Knowing that you are consuming data, you would not put your consumer code in a situation where it might hang due to UI interactions. A dedicated parallel process would probably be instantiated for this. It's not that important that your consumer be fast, but that it always consumes at a steady rate without stopping.

If you leave the queue unbounded (or set a very high limit), then you will notice the queue size stabilizes at an acceptable value. This would not work with a fixed small queue size of 10. I modified the example to use Flush Queue and even set the producer to 1 ms, and the system kept up fine. I agree, however, that when a limited queue size is required, this would not work. On the other hand, letting your queue size grow or fluctuate is not good for determinism either.

All of this really depends on the speeds involved. If you have a fast producer loop with a really slow consumer, then the proxy design seems like overkill, since you will be missing so much data that it's probably better to use a notifier mechanism instead.
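For anyone unfamiliar with notifiers, here is a rough Python sketch of the "latest value wins" semantics being referred to (illustrative only -- it shows the overwrite behaviour, not the blocking wait a real LabVIEW notifier also provides):

    from collections import deque

    class Notifier:
        """One-slot mailbox: a new notification simply overwrites the previous one."""
        def __init__(self):
            self._slot = deque(maxlen=1)

        def send(self, value):
            self._slot.append(value)      # never blocks; the older value is dropped

        def latest(self, default=None):
            return self._slot[-1] if self._slot else default

    n = Notifier()
    for sample in range(1000):
        n.send(sample)                    # fast producer
    print(n.latest())                     # slow consumer only sees the newest value -> 999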

Overall, your suggested design should work, but I would probably use RT FIFOs where LabVIEW RT and determinism are concerned.

Link to comment

QUOTE(Michael_Aivaliotis @ May 4 2007, 01:15 AM)

I agree that RT FIFOs are the best option for the determinism that has been discussed.

Regarding the other methods discussed here to implement a lossy queue, I want to hear why it is necessary to have a proxy. I ran some benchmarks on my machine to see how long it took to enqueue an element in the producer loop, with the following results. 'Lossy enqueue' refers to the code seen in the proxy loop of Michael's code -- Enqueue, Dequeue if full, Enqueue.

  1. Enqueue to proxy queue with no size limit -- 1 microsecond
  2. Lossy enqueue with no subVI (queue full) -- 15 microseconds
  3. Lossy enqueue with no subVI (queue not full) -- 1 microsecond
  4. Lossy enqueue with subVI (queue full) -- 18 microseconds
  5. Lossy enqueue with subVI (queue not full) -- 2 microseconds

The lossy enqueue method with no subVI suffers from problems if you have multiple writers to the same queue, see here.

It seems to me that if you are having problems handling a jitter of 15 microseconds, then you have a high-speed application that could benefit from the determinism of an RT system and RT FIFOs. But for the majority of applications wanting a lossy queue, why go through the bother of having a proxy? Just make a subVI that does a lossy enqueue and use it directly in the producer loop, as in my post above.
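For anyone who wants to repeat that kind of comparison in a text language, a rough Python timing sketch follows. It only contrasts the "queue not full" and "queue full" paths of a lossy enqueue; the absolute numbers will not match the LabVIEW figures above.

    import queue
    import timeit

    def lossy_enqueue(q, item):
        if q.full():
            try:
                q.get_nowait()
            except queue.Empty:
                pass
        q.put_nowait(item)

    not_full = queue.Queue(maxsize=1_000_000)   # never fills during the test
    full = queue.Queue(maxsize=10)
    for i in range(10):
        full.put_nowait(i)                      # starts (and stays) full

    runs = 100_000
    t_not_full = timeit.timeit(lambda: lossy_enqueue(not_full, 0), number=runs)
    t_full = timeit.timeit(lambda: lossy_enqueue(full, 0), number=runs)
    print(f"queue not full: {t_not_full / runs * 1e6:.2f} us per enqueue")
    print(f"queue full:     {t_full / runs * 1e6:.2f} us per enqueue")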

David

Link to comment
