Mads Posted March 16, 2021

Does anyone here know how LabVIEW decides in which order multiple instances waiting to enqueue to the same size-limited queue get to enqueue? 😵

If, for example, a consumer has a queue of length 1 (in this case it previews, processes, and only then dequeues, to ensure no producer is fooled into thinking the consumer has started on its delivery just because it was allowed to enqueue one element) and multiple producers try to enqueue to this consumer, I have (intuitively / naively) assumed that if producer A started waiting to enqueue at t = x, and other producers start trying to enqueue at t > x, producer A would always be the first allowed to enqueue. This does not seem to be the case (?). Instead, producers that begin to wait to enqueue while others are already waiting sometimes get to enqueue before them. This causes the waiting time for some producers to grow unpredictably, because a variable number of producers cut in line.

The phenomenon occurs in a rather complex piece of code I have, so I wrote a simulator to see if I could recreate the issue - but so far my simulator seems to adhere to my previous assumption: each producer ends up waiting the same amount of time because they keep their place in the enqueuing order. That might just be a weakness in the simulation, though... or I have so far misinterpreted what is really going on. If my assumption about priority was wrong all along, that would actually settle the case faster (it then becomes a question of how to give producers such a priority in practice 🤨).
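For reference, here is roughly what the simulator does, sketched as a Python analogue since G diagrams can't be pasted as text. This is only a sketch of the pattern, not LabVIEW's actual scheduler, and Python's bounded queue has its own fairness behaviour that may or may not match LabVIEW's: several producers block on a size-1 queue while one consumer services one request per second, and each producer logs how long its round took. In this sketch the consumer replies only after processing instead of using LabVIEW's preview-then-dequeue, which has the same effect from the producers' point of view; all names are illustrative.

```python
# Rough Python analogue of the simulator (not LabVIEW's scheduler): several
# producers block on a size-1 queue while one consumer services one request
# per second, and each producer logs how long its round took.
import queue
import threading
import time

work_queue = queue.Queue(maxsize=1)   # bounded queue, like the size-1 LabVIEW queue
N_PRODUCERS = 5
ROUNDS = 3

def producer(pid):
    for _ in range(ROUNDS):
        reply = queue.Queue(maxsize=1)          # per-request reply channel
        t0 = time.monotonic()
        work_queue.put((pid, reply))            # blocks while the queue is full
        reply.get()                             # wait for the consumer's answer
        print(f"producer {pid}: round took {time.monotonic() - t0:.1f} s")

def consumer():
    for _ in range(N_PRODUCERS * ROUNDS):
        pid, reply = work_queue.get()           # take the next request
        time.sleep(1.0)                         # simulate 1 s of processing
        reply.put(pid)                          # answer the producer

threads = [threading.Thread(target=producer, args=(i,)) for i in range(N_PRODUCERS)]
threads.append(threading.Thread(target=consumer))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the waiting writers are serviced in arrival order, every producer should settle at roughly a 5-second round; if they are not, some rounds will spike well above that.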
Mads Posted March 16, 2021

If the enqueue-and-wait-for-a-reply function that all producers sharing the same consumer call is put in a non-reentrant VI, the execution *is* scheduled according to the order in which the producers call it. So that is one solution.

Assuming that the execution of enqueue calls (or rather their access to the bounded queue) would be stacked in the same way, ordered by the time of the call, seems a bit less obvious now that I think about it, but the level of queue jumping is still surprising. If for example there are 5 producers, 4 will at all times be waiting to enqueue (while the fifth in this case has finished enqueuing and is waiting for the consumer to return a result). In my test case, where each exchange between a producer and the consumer takes 1 second, a fully ordered execution should consistently complete a round in 5 seconds (which it does if I use the non-reentrant VI solution). - Instead, due to all the seemingly random enqueue-queue (!) jumping, the period of processing for any given producer can spike as high as 15 seconds 😱! That means it has been bypassed by the others for several full rounds before it got first in line.
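To make the workaround concrete (again as a rough Python analogue, not LabVIEW API): funneling every "enqueue and wait for reply" through a single FIFO call queue serviced by one dispatcher behaves much like the non-reentrant VI does here - calls complete in arrival order, at the cost of serializing all exchanges, even ones aimed at different consumers. The names and structure below are purely illustrative.

```python
# Rough analogue of the non-reentrant-VI workaround: all producers submit
# their call to one FIFO queue, and a single dispatcher performs the actual
# enqueue and waits for the reply, so calls complete strictly in call order.
import queue
import threading

call_queue = queue.Queue()   # unbounded FIFO: submitting a call never blocks

def enqueue_and_wait(consumer_queue, payload):
    """Analogue of the serialized 'enqueue and wait for reply' call."""
    done = threading.Event()
    result = {}
    call_queue.put((consumer_queue, payload, done, result))
    done.wait()                                # sleep until the call is serviced
    return result["reply"]

def dispatcher():
    """Single instance, so exchanges happen strictly in arrival order."""
    while True:
        consumer_queue, payload, done, result = call_queue.get()
        reply = queue.Queue(maxsize=1)
        consumer_queue.put((payload, reply))   # the only place that enqueues
        result["reply"] = reply.get()          # wait for the consumer's answer
        done.set()

threading.Thread(target=dispatcher, daemon=True).start()
```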
ShaunR Posted March 16, 2021

7 minutes ago, Mads said: If the enqueue and wait for a reply function of all producers sharing the same consumer is put in a non-reentrant VI, the execution *is* scheduled according to the order in which the producers call it. [...]

Are you using Shared Reentrant? Try using Preallocate.
Mads Posted March 17, 2021

11 hours ago, ShaunR said: Are you using Shared Reentrant? Try using Preallocate.

The original producers acquire a reference to the consumer queue and call enqueue in a preallocated reentrant VI... But when multiple copies of these are waiting in parallel to enqueue, the time at which they started to wait does not decide when they get to do the enqueue. So as a test I made that VI non-reentrant (which does not work in the actual system, since producers enqueuing to a different consumer should not have to wait for the same VI, but it is fine just for the test) - and that made everything run according to the time of the call.

I guess this comes down to the internals of the enqueue function; with multiple enqueuers waiting, I thought the queuing system would keep track of when the various callers had first requested access and then, when the queue has room for another element, grant access to the earliest request. Instead it looks as if there is something like a polling mechanism in the background (running while the call hangs in its wait-to-enqueue state) that makes it random which of the waiting enqueuers wins the race...
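If the built-in enqueue really does wake its waiting writers in an arbitrary order, one way to impose arrival-order fairness per consumer queue yourself is a ticket scheme: every producer draws a ticket when it decides to enqueue and only performs the bounded put when its number comes up, so nobody can cut in line however the underlying queue wakes its waiters. Sketched in Python below since it is the mechanism rather than the LabVIEW API that matters; the class and names are purely illustrative.

```python
# Ticket-based fairness wrapper around a bounded queue: writers are serviced
# strictly in the order they asked to write, independent of how the
# underlying queue orders its blocked putters.
import queue
import threading

class FairBoundedQueue:
    def __init__(self, maxsize):
        self._q = queue.Queue(maxsize=maxsize)
        self._cond = threading.Condition()
        self._next_ticket = 0    # next ticket number to hand out
        self._now_serving = 0    # ticket currently allowed to enqueue

    def put(self, item):
        with self._cond:
            my_ticket = self._next_ticket
            self._next_ticket += 1
            # block until it is this caller's turn
            self._cond.wait_for(lambda: self._now_serving == my_ticket)
        self._q.put(item)        # may still block until the queue has room
        with self._cond:
            self._now_serving += 1
            self._cond.notify_all()

    def get(self):
        return self._q.get()
```

The trade-off is that the producer at the head of the line holds up everyone behind it until its put completes, which is exactly the priority behaviour asked about above, and unlike the non-reentrant VI workaround it only serializes access to one consumer's queue rather than all exchanges.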
Yair Posted March 17, 2021

I haven't checked the behavior now, but I do remember this discussion from some years ago - https://forums.ni.com/t5/LabVIEW/Queues-Enqueue-Dequeue-Element-Scheduling/m-p/2367134#M737149

Note that in the post I linked to, AQ does say that "the writers are serviced in the order they put in their request to write, so there's never a starvation" and that only the dequeues could be starved, but maybe this behaves differently in limited-size queues, or maybe it differs between LabVIEW versions?
Mads Posted March 17, 2021

39 minutes ago, Yair said: I haven't checked the behavior now, but I did remember this discussion from some years ago - https://forums.ni.com/t5/LabVIEW/Queues-Enqueue-Dequeue-Element-Scheduling/m-p/2367134#M737149 [...]

Nice discussion, thanks for the link. The quoted statement does seem to contradict my observations, yes. I have not yet checked whether the misbehavior is absent in any of my earlier LabVIEW installations... I am working in LabVIEW 2020 SP1 on Windows at the moment.
Mads Posted April 8, 2021

Just to tie off this thread: the reason for the glitch was that the wiring allowed for a minuscule race condition 😒. One minor adjustment was needed to ensure everything was *only* decided by the enqueue scheduling. So no worries about that anymore 😀