
Should a queue reference be obtained each time, or kept in a shift register?



I am working on a cRIO-based application in which I am using some single-element queues (SEQs), and these SEQs are used within different parallel processes. Is there any difference, in terms of processing, between keeping an SEQ's reference in a shift register and obtaining the reference each time it is used (by name, with the "Obtain Queue" function)?

Thanks for any comments.


Yes, there is a difference. By saving the queue refnum and re-using it, you avoid the overhead of doing a name-based look-up every iteration. Depending on the frequency of your loop this may or may not be significant.

 

Regardless, obtaining a refnum before a loop, using that refnum on each iteration, and finally releasing it after you're done is pretty standard practice. You might also consider passing the refnum directly into the subVI so you never have to obtain it by name in the first place.
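LabVIEW diagrams can't be shown inline here, but the trade-off can be sketched as a rough Python analogy, with a dict standing in for LabVIEW's internal name-to-queue registry (the queue name and helper below are invented for illustration):

```python
import queue

# Tiny registry standing in for LabVIEW's named-queue store.
_registry = {}

def obtain_queue(name):
    """Analogy of "Obtain Queue" by name: look up (or create) a queue."""
    if name not in _registry:
        _registry[name] = queue.Queue()
    return _registry[name]

# Pattern 1: look the queue up by name on EVERY iteration (extra cost).
for i in range(3):
    obtain_queue("data").put(i)

# Pattern 2: obtain once, keep the reference (the shift-register style).
q = obtain_queue("data")
for i in range(3):
    q.put(i)

print(q.qsize())  # both loops fed the same named queue -> 6
```

The work done per iteration is identical except for the extra name lookup in pattern 1, which is exactly the overhead being discussed.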


Obtain Queue using a name is planted in my brain as a serious performance bottleneck. If you compare a simple enqueue in a loop, the difference between looking up once and looking up on each iteration can be the difference between a few milliseconds and more than one second (on the order of 20,000 iterations).

 

If you were to place an Obtain Queue inside a loop with a constant name, I would think it could be recognized as a loop invariant and optimized for you. You know, have your cake (no long bending wires) and eat it too (super fast). No such luck.

 

What I do instead, if I need the by-name function, is construct a look-up table using variant attributes. With a constant name the variant lookup code is optimized and is essentially as fast as using a single Obtain. With a control inside the loop for the name (which breaks the loop invariant), the attribute lookup is still much faster than Obtain Queue. It is a way to get the elegance of by-name lookup with much less of a performance hit than obtaining by name. What do you lose? Primarily the validity checks, which you usually do not care about when you are trying to shove data around quickly. I figure the enqueue/dequeue will complain soon enough, so I do not worry about catching the error upstream from there.
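Since variant attributes can't be shown in text form, here is a rough Python analogy of the lookup-table idea: a plain dict maps names to already-obtained references, so a by-name access is a single hash lookup with no validity checking (the names and helper are invented for illustration):

```python
import queue

# Analogy of the variant-attribute lookup table: names map to
# references that were obtained once, up front.
lookup = {"status": queue.Queue(), "data": queue.Queue()}

def enqueue_by_name(name, item):
    # One hash lookup, no refnum validity check -- the trade-off
    # described above. A missing entry fails loudly at the enqueue,
    # which is where the error gets caught.
    lookup[name].put(item)

enqueue_by_name("data", 42)
print(lookup["data"].get())  # 42
```

As in the LabVIEW version, the error handling is deferred: a bad name raises at the point of use rather than being caught by an upstream validity check.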

 

(These are vague recollections from LV11 or LV12; maybe it got a lot better in LV13.)

Interesting, do you use a single variant with multiple attributes, one for each reference?

That's how I've done it. Remember that a variant attribute can be any data type, so you aren't limited to storing one kind of data; it can be a look-up table for anything you feel like storing.

 

WORM uses this technique. I don't use WORM itself, but I use the same concepts in my applications.


Thanks guys for your comments!

 

I like to store my refs (queue, event, etc.) in LV2-style VIs (functional globals). That way you do not need to litter reference wires in shift registers all over your diagrams.

 

You can use the First Call? primitive to obtain the ref the very first time it is called, which is nice because then you do not need to care who called it first (parallel loops). See picture; in the false case the ref is just wired straight through.
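The lazy-initialization idea behind this pattern can be sketched as a Python analogy, with a module-level variable playing the role of the functional global's uninitialized shift register (note this checks for a missing reference on each call, which is closer to testing for an invalid refnum than to LabVIEW's actual First Call? primitive):

```python
import queue

# Analogy of a functional global that creates the reference on first use.
_ref = None

def get_ref():
    global _ref
    if _ref is None:          # "first call" case: create the queue
        _ref = queue.Queue()
    return _ref               # otherwise: wire the ref straight through

a = get_ref()
b = get_ref()
print(a is b)  # True: every caller gets the same reference
```

Because the creation happens inside the access point, parallel callers don't need to coordinate over who initializes the reference.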

 

 

 

Interesting, do you use a single variant with multiple attributes, one for each reference?

 

I like the first call primitive solution.

 

 

Obtain Queue using a name is planted in my brain as a serious performance bottleneck. If you compare a simple enqueue in a loop, the difference between looking up once and looking up on each iteration can be the difference between a few milliseconds and more than one second (on the order of 20,000 iterations).

 

If you were to place an Obtain Queue inside a loop with a constant name, I would think it could be recognized as a loop invariant and optimized for you. You know, have your cake (no long bending wires) and eat it too (super fast). No such luck.

 

What I do instead, if I need the by-name function, is construct a look-up table using variant attributes. With a constant name the variant lookup code is optimized and is essentially as fast as using a single Obtain. With a control inside the loop for the name (which breaks the loop invariant), the attribute lookup is still much faster than Obtain Queue. It is a way to get the elegance of by-name lookup with much less of a performance hit than obtaining by name. What do you lose? Primarily the validity checks, which you usually do not care about when you are trying to shove data around quickly. I figure the enqueue/dequeue will complain soon enough, so I do not worry about catching the error upstream from there.

 

(These are vague recollections from LV11 or LV12; maybe it got a lot better in LV13.)

 

I did the variant-based look-up in the following way (after getting advice here from Aristos Queue), but the problem I see is that data reads can insert a delay for data writes, which I don't have in the case of SEQs.

 

[attached images]

You can use the First Call? primitive to obtain the ref the very first time it is called, which is nice because then you do not need to care who called it first (parallel loops). See picture; in the false case the ref is just wired straight through.

A minor warning about that technique. A reference is destroyed when the top-level VI under which it was created goes idle, but the “First Call?” primitive is only reset when the subVI it is contained in stops being reserved for execution. If you are working with multiple top-level VIs (such as when using dynamically launched VIs), it is therefore possible for the reference to be invalidated without the First Call being reset.

A minor warning about that technique. A reference is destroyed when the top-level VI under which it was created goes idle, but the “First Call?” primitive is only reset when the subVI it is contained in stops being reserved for execution. If you are working with multiple top-level VIs (such as when using dynamically launched VIs), it is therefore possible for the reference to be invalidated without the First Call being reset.

Agreed. You are much better off testing for an invalid refnum instead. Then the reference is created on the first call and, if it ever becomes invalid, recreated.


Agreed. You are much better off testing for an invalid refnum instead. Then the reference is created on the first call and, if it ever becomes invalid, recreated.

Even better than testing for an invalid refnum is actually trying to use the reference.  Recreate it if you get an error.  There is some post somewhere by AQ that points out the race condition in testing for refnum validity.

 

Here’s an example of a “refnum created and maintained inside a non-reentrant VI” from one of my projects:

[attached image: Reference retry.png]

Even better than testing for an invalid refnum is actually trying to use the reference.  Recreate it if you get an error.  There is some post somewhere by AQ that points out the race condition in testing for refnum validity.

 

Here’s an example of a “refnum created and maintained inside a non-reentrant VI” from one of my projects:

[attached image: Reference retry.png]

Nope. It's far too slow. 

I like to store my refs (queue, event, etc.) in LV2-style VIs (functional globals). That way you do not need to litter reference wires in shift registers all over your diagrams.

 

What bothers me about this is that it seems to defeat parallelism somewhat. Two loops that really ought to just be sharing data via the queue now have to depend on a non-reentrant VI to get it. Say we have a non-empty queue, with A as the producer loop and B as the consumer. If loop A is calling the LV2 global to get the queue in order to enqueue data, B has to wait until the LV2 global finishes executing before it can dequeue data that is already available.
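The concern can be modeled with a Python sketch in which a lock stands in for the VI's non-reentrancy (the lock models the serialized access point, not the queue itself, which supports parallel access; names are invented):

```python
import queue
import threading

# The lock models a non-reentrant "get my queue" VI: only one caller
# can be inside the access point at a time.
fgv_lock = threading.Lock()
_q = queue.Queue()

def get_queue_via_fgv():
    with fgv_lock:              # producer and consumer serialize here
        return _q

# Every access pays the lock, even though the queue itself is
# perfectly happy with concurrent put/get.
get_queue_via_fgv().put("x")
item = get_queue_via_fgv().get()
print(item)  # x
```

The serialization window is only as long as the access point itself, which is why the hit is usually negligible in practice, as noted below.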

 

I'm sure the performance hit is negligible, but it's the principle of the thing that bugs me. I like to pass queues in through the front panel, in a cluster if need be.


I have some architectures where different resources (COM ports, for example) are shared by handlers that are created dynamically, one to manage each resource. Functions that need access to a given resource get it by contacting the relevant handler through its (single) input queue, and the message protocol requires them to provide a return queue for the reply. For historical reasons the return queue reference is not an actual reference, just a name generated by a given set of rules, so the handlers need to acquire and close return queues for every transaction. Typically each handler does this 10-20 times per second, and there are typically 10-100 handlers running in one and the same application. The continuous acquisition of references has never caused any problems at these rates, and these are applications that (have to) run 24/7 on both PCs and PACs.
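A minimal Python sketch of this handler-with-return-queue protocol (message shapes and names are invented for illustration; the original uses named LabVIEW queues acquired and released per transaction):

```python
import queue
import threading

def handler(inbox):
    """One resource handler: serve requests from a single input queue."""
    while True:
        msg = inbox.get()
        if msg is None:                  # shutdown sentinel
            break
        command, reply_q = msg
        reply_q.put(f"done: {command}")  # reply via the caller's queue

inbox = queue.Queue()
t = threading.Thread(target=handler, args=(inbox,))
t.start()

reply = queue.Queue()                    # per-transaction return queue
inbox.put(("read COM1", reply))
result = reply.get()
print(result)  # done: read COM1

inbox.put(None)
t.join()
```

Each transaction carries its own reply queue, so the handler never needs to know its clients ahead of time, which matches the by-name, per-transaction acquisition described above.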

If you were to place an Obtain Queue inside a loop with a constant name, I would think it could be recognized as a loop invariant and optimized for you. You know, have your cake (no long bending wires) and eat it too (super fast). No such luck.

 

I wouldn't count on that! While it is theoretically possible, the implementation of many LabVIEW nodes is on a different level than the DFIR and LLVM passes that perform dead-code elimination and loop-invariant optimizations. The LabVIEW compiler does decompose loops and other structures, as well as common functions like some array functions, into DFIR graphs, but other nodes, such as file I/O (Open, Read, Write, Close) and most likely also Obtain Queue, are basically just calls into precompiled C++ functions inside the LabVIEW kernel, and DFIR and LLVM cannot optimize at that level.

