manigreatus Posted September 26, 2013

I am working on a cRIO-based application in which I am using some single-element queues (SEQs), and these SEQs are used within different parallel processes. Is there any difference, in terms of processing, between keeping an SEQ's reference in a shift register and obtaining the reference (using a name and the "Obtain Queue" function) each time it is used? Thanks for any comments.
mje Posted September 26, 2013

Yes, there is a difference. By saving the queue refnum and reusing it you avoid the overhead of a name-based lookup on every iteration. Depending on the frequency of your loop this may or may not be significant. Regardless, obtaining a refnum before a loop, using that same refnum on each iteration, and finally releasing it after you're done is pretty standard practice. You might also consider passing the refnum directly into the subVI, so it never has to obtain one in the first place.
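A rough text sketch of the difference (Python stands in for the block diagram here, since LabVIEW code can't be pasted as text; the registry dict, the obtain_queue helper, and the queue name are illustrative, not how LabVIEW actually implements named queues):

    import queue

    _registry = {}  # stand-in for LabVIEW's table of named queues

    def obtain_queue(name):
        # "Obtain Queue" analog: a name-based lookup that creates the queue if needed
        return _registry.setdefault(name, queue.Queue())

    # Lookup on every iteration: pays the name-based lookup 20,000 times
    for i in range(20000):
        obtain_queue("data").put(i)

    # Obtain once before the loop, then reuse the cached reference inside it
    q = obtain_queue("data")
    for i in range(20000):
        q.put(i)

The second loop is the "obtain before, use inside, release after" pattern described above.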
Darin Posted September 26, 2013

Obtain Queue using a name is planted in my brain as a serious performance bottleneck. If you compare a simple enqueue in a loop, the difference between looking the queue up once and looking it up on each iteration can be the difference between a few msec and more than one second (on the order of 20,000 iterations). If you were to place an Obtain Queue inside a loop with a constant name, you would think it could be recognized as a loop invariant and optimized for you. You know, have your cake (no long bending wires) and eat it too (super fast). No such luck.

What I do instead, if I need name-based lookup, is construct a lookup table using variant attributes. With a constant name the variant lookup code is optimized and is essentially as fast as using a single Obtain Queue. With a control inside the loop for the name (which breaks the loop invariant), the attribute lookup is still much faster than Obtain Queue. It is a way to get the elegance of by-name lookup with much less of a performance hit than obtaining by name. What do you lose? Primarily the validity checks, which you usually do not care about when you are trying to shove data around quickly. I figure the enqueue/dequeue will complain soon enough, so I do not worry about catching the error upstream from there. (These are vague recollections from LV11 or LV12; maybe it got a lot better in LV13.)
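A loose analog of that variant-attribute lookup table, with a plain dict playing the role of the variant's attributes (the names and the obtain_queue helper are invented for the sketch):

    import queue

    _registry = {}  # stand-in for LabVIEW's table of named queues

    def obtain_queue(name):
        return _registry.setdefault(name, queue.Queue())

    # Build the lookup table once: each name maps to an already-obtained reference
    lookup = {name: obtain_queue(name) for name in ("temps", "pressures", "status")}

    # In the hot loop, a cheap dict lookup replaces the full Obtain Queue call.
    # No validity check happens here; a dead reference would only complain at
    # enqueue/dequeue time, which is the trade-off described above.
    for i in range(1000):
        lookup["temps"].put(i)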
Mark Smith Posted September 27, 2013

It may be obvious and you may already be doing it, but make sure you release the queue refs you obtain by name, or you can create a small but sometimes consequential memory leak. It's bitten me in the past.

Mark
hooovahh Posted September 27, 2013

Interesting, do you use a single variant with multiple attributes, one for each reference? That's how I've done it. Remember, a variant attribute can be any data type, so you aren't limited to storing one kind of data; it can be a lookup table for anything you feel like storing. WORM uses this technique. I don't use WORM itself, but I use the same concepts in my applications.
manigreatus Posted September 30, 2013 (Author)

Thanks guys for your comments!

Quoting an earlier reply: "I like to store my refs (queue, event, etc.) in LV2-style VIs (functional globals). That way you do not need to litter reference wires in shift registers all over your diagrams. You can use the First Call? primitive to obtain the ref the very first time it is called, which is nice because then you do not need to care who called it first (parallel loops). See the picture; in the false case the ref is just wired straight through."

I like the First Call? primitive solution.

Quoting Darin's post above on Obtain Queue by name being a bottleneck and the variant-attribute lookup table: I did the variant-based lookup in the following way (after getting advice here from Aristos Queue), but the problem I see is that data reads can delay data writes, which I don't have in the case of SEQs.
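For readers who haven't seen the LV2-style pattern quoted above, here is a very loose text analog (the non-reentrant VI becomes a function with persistent state, the lock mimics the VI's built-in serialization of callers, and all names are illustrative):

    import queue
    import threading

    _lock = threading.Lock()   # a non-reentrant VI serializes its callers much like this
    _shared_ref = None         # plays the role of the uninitialized shift register

    def get_shared_queue():
        global _shared_ref
        with _lock:
            if _shared_ref is None:        # the "First Call?" (true) case: create the ref once
                _shared_ref = queue.Queue()
            return _shared_ref             # the false case: pass the existing ref straight through

    # Parallel loops can all call get_shared_queue(); whichever calls first creates the queue.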
drjdpowell Posted September 30, 2013

(Quoting the First Call? suggestion above.) A minor warning about that technique: a reference is destroyed when the top-level VI under which it was created goes idle, while the "First Call?" primitive is reset only when the subVI it is contained in stops being reserved for execution. If you are working with multiple top-level VIs (such as when using dynamically launched VIs), it is possible to invalidate the reference without resetting the first call.
ShaunR Posted September 30, 2013

Agreed. You are much better off testing for an invalid refnum instead. Then the reference will be created on the first call and, if it ever becomes invalid, recreated.
drjdpowell Posted September 30, 2013

Even better than testing for an invalid refnum is actually trying to use the reference and recreating it if you get an error. There is a post somewhere by AQ that points out the race condition in testing for refnum validity. Here's an example of a refnum created and maintained inside a non-reentrant VI, from one of my projects (attachment: Reference retry.png).
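In text form, the "just use it and recreate on failure" idea looks roughly like this (the QueueRef class and helper are invented for the sketch; the point is that a validity check can pass and the reference can still die before the enqueue, whereas catching the failure at the enqueue itself leaves no such window):

    import queue

    class QueueRef:
        """Toy stand-in for a LabVIEW queue refnum that can become invalid."""
        def __init__(self):
            self._q = queue.Queue()
            self.valid = True
        def enqueue(self, item):
            if not self.valid:
                raise RuntimeError("reference is invalid")  # analog of the enqueue error
            self._q.put(item)

    _ref = None

    def enqueue_with_retry(item):
        # Try the reference first; only obtain a new one if the use itself fails.
        global _ref
        try:
            _ref.enqueue(item)
        except (AttributeError, RuntimeError):
            _ref = QueueRef()   # recreate, then retry once
            _ref.enqueue(item)

    enqueue_with_retry("hello")  # the first call creates the reference as a side effect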
ShaunR Posted September 30, 2013

(Quoting drjdpowell's try-and-recreate suggestion.) Nope. It's far too slow.
drjdpowell Posted September 30, 2013

You mean it is too slow when the reference is invalid? When it is valid, this method takes less time than testing for an invalid refnum and then using it.
MarkCG Posted September 30, 2013

(Quoting the suggestion to store refs in LV2-style VIs / functional globals.) What bothers me about this is that it seems to defeat parallelism somewhat. Two loops that really ought to just be sharing data via the queue now have to depend on a non-reentrant VI to get it. Say we have a non-empty queue, with A as the producer loop and B as the consumer. If loop A is calling the LV2 global to get the queue in order to enqueue data, B has to wait until the LV2 global finishes executing before it can dequeue data that is already available. I'm sure the performance hit is negligible, but it's the principle of the thing that bugs me. I like to pass queues in through the front panel, in a cluster if need be.
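A sketch of the "pass the queue in directly" preference (Python threads stand in for the two parallel loops; the producer/consumer names and the sentinel value are made up for illustration):

    import queue
    import threading

    def producer(q):
        # Loop A: the queue arrives as a parameter, so enqueuing never waits on a
        # shared, non-reentrant accessor that the consumer might be calling too.
        for i in range(5):
            q.put(i)
        q.put(None)  # sentinel telling the consumer to stop

    def consumer(q):
        while True:
            item = q.get()
            if item is None:
                break
            print("got", item)

    shared = queue.Queue()  # created once by the caller and handed to both loops
    a = threading.Thread(target=producer, args=(shared,))
    b = threading.Thread(target=consumer, args=(shared,))
    a.start(); b.start()
    a.join(); b.join()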
Mads Posted October 2, 2013

I have some architectures where different resources (COM ports, for example) are shared by handlers that are created dynamically, each managing a single resource. Functions that need access to a given resource get it by contacting the relevant handler through its (single) input queue, and the message protocol requires them to provide a return queue for the reply. For historical reasons the return queue reference is not an actual reference, just a name generated by a given set of rules, so the handlers need to acquire and close return queues for every transaction. Typically each handler will do this 10-20 times per second, and there are typically 10-100 handlers running in one and the same application. The continuous acquisition of references has never caused any problems at these rates, and these applications (have to) run 24/7 on both PCs and PACs.
Rolf Kalbermatter Posted October 2, 2013

(Quoting Darin's remark that Obtain Queue with a constant name could, in principle, be recognized as a loop invariant.) I wouldn't count on that! While it is theoretically possible, the implementation of many LabVIEW nodes sits on a different level than the DFIR and LLVM algorithms that perform dead-code elimination and loop-invariant optimizations. The LabVIEW compiler does decompose loops and other structures, as well as common functions like some array functions, into DFIR graphs, but other nodes such as File I/O (Open, Read, Write, Close) and most likely also Obtain Queue are essentially just calls into precompiled C++ functions inside the LabVIEW kernel, and DFIR and LLVM cannot optimize at that level.
Darin Posted October 2, 2013

So, as I said, "No such luck." And the reason is much simpler: the reference could change while the name remains the same, so there is no guarantee of a loop invariant.