Grampa_of_Oliva_n_Eden Posted April 9, 2007

For those not familiar with LV2-style Functional Globals or Action Engines, see this Nugget to get started: http://forums.ni.com/ni/board/message?boar...8&jump=true

For those of you who know them, please feel free to let me know what I missed.

Thank you,
Ben
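For readers coming from text-based languages: a functional global is essentially a non-reentrant subVI whose state lives in an uninitialized shift register of a single-iteration loop, and which exposes that state through a set of actions. A very rough Python sketch of the idea (the action names and the array payload are invented purely for illustration; the closure stands in for the shift register, and the subVI's non-reentrancy is what serializes access in real LabVIEW):

```python
# Rough Python analogue of an LV2-style functional global / action engine.
# In LabVIEW the state lives in an uninitialized shift register of a
# single-iteration loop inside a non-reentrant subVI; here a closure plays
# that role.

def make_action_engine():
    state = []                         # the "shift register"

    def engine(action, data=None):
        nonlocal state
        if action == "set":
            state = list(data)         # replace the stored array
        elif action == "add":
            state.append(data)         # operate on the data where it lives
        elif action == "get":
            pass                       # just return the current state
        return list(state)

    return engine

buffer = make_action_engine()
buffer("set", [1, 2, 3])
buffer("add", 4)
print(buffer("get"))                   # [1, 2, 3, 4]
```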
ragglefrock Posted April 19, 2007

QUOTE(Jim Kring @ Apr 7 2007, 10:21 AM): I didn't notice anyone mention this yet, so I figured that I would throw it out there: I believe that the While Loop, instead of the For Loop, was chosen for LV2-style globals because the While Loop version required fewer editing steps to create. You might be thinking, "but it takes just as many editing steps to wire a FALSE to the While Loop's return selector as it takes to wire a numeric constant to a For Loop's count (N) terminal". Yes, that's true now, but back in the good ol' days (LV <= 5.1, if I vaguely recall) you didn't have to wire anything to a While Loop's return selector, and it would default to FALSE if unwired (in the "Continue if TRUE" mode, as there was no "Stop if TRUE" setting for the return selector back then). This means that if the return selector was unwired, a While Loop would execute one, and only one, time -- perfect for an LV2-style global.

Another editing step that takes longer with For Loop FGs is disabling the auto-indexing that is enabled by default on the tunnels. It's generally not the desired behavior with FGs. That's the reason I've stayed away from them, personally.
Michael Aivaliotis Posted April 19, 2007

QUOTE(JFM @ Apr 5 2007, 08:14 AM): Tomi, I think you are safe. If I remember correctly, bsvingen did some testing of this in another thread (that I cannot currently find). The results, as I recall them, were that the LV2 global was faster than using queues.

Hmm, I think we should place some bets here. LAVA could use some donations. My money goes on the queue.
Mellroth Posted April 19, 2007

QUOTE(Michael_Aivaliotis @ Apr 18 2007, 09:36 AM): Hmm, I think we should place some bets here. LAVA could use some donations. My money goes on the queue.

I haven't really tested the performance of queues vs. FGs (at least not since queues became primitives). As long as the queue is only replacing a simple array store, I think the queue might be as fast as an FG. But when the functional global contains more than one array of data, my guess is that the FG would be faster. Anyway, a really interesting topic.

/J
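For context, the queue-based alternative being compared here is the single-element-queue pattern: the data lives in a queue of size one, and whoever needs it dequeues it, works on it, and enqueues it back. A rough Python analogue (purely illustrative, not LabVIEW semantics):

```python
# Single-element queue used as a shared data store, the pattern typically
# benchmarked against an LV2-style functional global. Dequeuing gives a
# caller exclusive access to the data until it is enqueued again.
import queue

store = queue.Queue(maxsize=1)   # "obtain queue", size 1
store.put([1, 2, 3])             # initial enqueue

def append_value(q, value):
    data = q.get()               # dequeue: other callers now block
    data.append(value)           # modify the data
    q.put(data)                  # enqueue it back, releasing it

append_value(store, 4)
data = store.get(); print(data); store.put(data)   # [1, 2, 3, 4]
```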
ned Posted May 16, 2007

QUOTE(Aristos Queue @ Mar 28 2007, 10:30 PM): So I asked Jeff K (father of LV, deep knowledge of diagram optimization). He says that we can indeed constant-fold a While Loop with a constant wired to the Stop terminal. It's the only time we can constant-fold the contents of a While Loop, but we can do it. He says that there's no advantage to using one over the other.

Slightly off-topic but related: if you can constant-fold a single-iteration While Loop, does this affect buffer allocations at tunnels into and out of those loops? I frequently pass a cluster into a single-iteration While Loop, replace an element, and then pass the cluster back out again. I've started using shift registers in place of tunnels in this situation to avoid buffer allocations. Will LabVIEW optimize this for me even if I use a tunnel, because it knows the loop iterates exactly once?
ragglefrock Posted May 17, 2007

QUOTE(ned @ May 15 2007, 10:34 AM): Slightly off-topic but related: if you can constant-fold a single-iteration While Loop, does this affect buffer allocations at tunnels into and out of those loops? I frequently pass a cluster into a single-iteration While Loop, replace an element, and then pass the cluster back out again. I've started using shift registers in place of tunnels in this situation to avoid buffer allocations. Will LabVIEW optimize this for me even if I use a tunnel, because it knows the loop iterates exactly once?

I'm not sure whether LabVIEW folding the loop has any effect on buffer allocations. Even if it does, my guess is that LabVIEW would still have more difficulty determining which input and output tunnels go together if you don't use shift registers. For simple algorithms it might be trivial, but when the data traverses multiple cases of a case structure, the path might not be clear. At best LabVIEW might be able to find the in-place path through the loop, but using shift registers is a big hint to LabVIEW. Stick with shift registers.

I was surprised to learn that LabVIEW can operate in place across various other tunnel pairings, such as matched input and output auto-indexing tunnels! I never knew that and am very glad to know it, since this is the quickest way to operate on array elements. You still run the risk that LabVIEW won't recognize the path through the loop, though, so complex algorithms might even benefit from a single-iteration loop with shift registers inside the For Loop. Don't quote me on that, as I've never seen it in practice. Just a thought.
Ton Plomp Posted May 17, 2007

QUOTE(ragglefrock @ May 16 2007, 06:21 AM): You still run the risk that LabVIEW won't recognize the path through the loop,

I believe I've seen somewhere that you can force the compiler to reuse the same memory by adding a case structure inside the loop and having at least one case that simply connects the tunnels straight through.

Ton
i2dx Posted May 17, 2007

QUOTE(Jim Kring @ Apr 7 2007, 05:21 PM): I didn't notice anyone mention this yet, so I figured that I would throw it out there: I believe that the While Loop, instead of the For Loop, was chosen for LV2-style globals, because the While Loop version required fewer editing steps to create.

Another point: the border of a While Loop is easier to hit with the right mouse button than the border of a For Loop, so it's easier to create a shift register. Even though the *invisible* click area of the For Loop's border is the same size as the While Loop's, I always end up trying to hit that thin line. That's why I mostly don't create shift registers in a For Loop by hand; instead I drag the wires through and choose "Replace with Shift Register" on the tunnels, which is more convenient.
MPC Posted September 8, 2007

I was curious and created my own benchmark - I'm getting quite different results with LV 8.5:

While Loop: baseline
Floating: ~same
For Loop: ~2.5x faster writing / ~4x faster reading

(I did use a different method with the For Loops - 0 iterations = read, 1 iteration = write - but testing with a nested case structure was still faster than the other methods.)

A quick check with LV 8.2 shows only a 1.2 - 1.3x performance increase with For Loops.
Jim Kring Posted September 8, 2007

I don't think anyone has mentioned this yet, here... In LabVIEW 8.5, you can implement a Functional Global without any loop: Functional Globals in LabVIEW 8.5 - No Loop, No Joke
Rolf Kalbermatter Posted September 8, 2007

QUOTE(Jim Kring @ Sep 7 2007, 10:22 AM): I don't think anyone has mentioned this yet, here... In LabVIEW 8.5, you can implement a Functional Global without any loop: Functional Globals in LabVIEW 8.5 - No Loop, No Joke (http://thinkinging.com/2007/09/07/functional-globals-in-labview-85-no-loop-no-joke/)

I just learned that yesterday at the local LabVIEW Day here in the Netherlands, presented by Jeff Washington. His example had a loop, but it was about pipelined execution, and boy, I can tell you that although I'm excited about this feature, it does take some getting used to. Basically, with this node you sort of have to forget a few things about data flow and wire dependency.

And yes, Jeff mentioned that the original Feedback Node was implemented by an intern; they had assumed he had chosen to implement it simply as a folded shift register, but that turned out not to be the case, and that is why it was much slower than a shift register. In 8.5, however, Jeff claimed that the Feedback Node should, in every aspect we as users could possibly measure, behave exactly like a shift register. Probably there is also already an NI patent pending for it :-)

Rolf Kalbermatter
Ton Plomp Posted September 8, 2007

To support Rolf's comment, here's a similar piece of code: I heard a 'whoa' from several people who are high up on the LV list (well, the screenshot didn't use Express VIs, so that might be it as well).

Ton
Rolf Kalbermatter Posted September 8, 2007

QUOTE(tcplomp @ Sep 7 2007, 02:38 PM): To support Rolf's comment, here's a similar piece of code: I heard a 'whoa' from several people who are high up on the LV list (well, the screenshot didn't use Express VIs, so that might be it as well).

And just to show what the pre-LabVIEW 8.5 version of this code would look like: http://forums.lavag.org/index.php?act=attach&type=post&id=6885

Forget about the unwired loop termination! ;-) That is not the point here. And if anyone wonders why one would write a VI in such a way: it is called pipelined execution, and it has advantages on multi-core or multi-CPU machines, as LabVIEW will simply distribute the different blocks onto different CPUs/cores where possible. On single-core systems it has no real disadvantage in terms of execution speed, but this construct does take a memory hit, because the shift registers hold roughly double the data between iterations compared to what a straight linear execution would need.

Rolf Kalbermatter
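To make the pipelining idea concrete for readers who think in text-based code, here is a rough Python sketch of the same structure. The stage functions and data are invented for illustration; the carried value plays the role of the shift register, and since Python's GIL limits real CPU parallelism, the point is the dataflow structure rather than the speedup:

```python
# Sketch of pipelined execution: within one "iteration", stage1 works on the
# new sample while stage2 works on the value carried over from the previous
# iteration, at the cost of storing that extra carried value.

from concurrent.futures import ThreadPoolExecutor

def stage1(x):          # e.g. acquire / generate
    return x * 2

def stage2(y):          # e.g. filter / analyze
    return y + 1

def pipelined(samples):
    results = []
    carried = None                      # plays the role of the shift register
    with ThreadPoolExecutor(max_workers=2) as pool:
        for x in samples:
            f1 = pool.submit(stage1, x)
            f2 = pool.submit(stage2, carried) if carried is not None else None
            if f2 is not None:
                results.append(f2.result())
            carried = f1.result()       # store for the next iteration (memory cost)
        results.append(stage2(carried)) # flush the last carried value
    return results

print(pipelined([1, 2, 3, 4]))          # [3, 5, 7, 9]
```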
silmaril Posted September 9, 2007

QUOTE(rolfk @ Sep 7 2007, 09:53 PM): And just to show what the pre-LabVIEW 8.5 version of this code would look like:

Those two images are a really nice example that teaches me to use the traditional approach with shift registers rather than the unowned Feedback Node. With the traditional version, I recognize the pipelining at once. Even if the new version does in fact do the same thing, it still looks like sequential code unless you look twice. I don't think it's very intuitive.

The unowned Feedback Node still has one advantage: if you use it to build a functional global, you can now have a defined initial value without the need to write to the FGV at least once! :thumbup: As far as I can see, this can't be done using the "old" shift registers.
Yair Posted September 9, 2007

QUOTE(silmaril @ Sep 8 2007, 05:00 PM): As far as I can see, this can't be done using the "old" shift registers.

You can do that by using the First Call? primitive. If it outputs T, you use your initial value. http://forums.lavag.org/index.php?act=attach&type=post&id=6895
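A minimal Python sketch of that First Call? pattern, with made-up action names: the closed-over dictionary stands in for the uninitialized shift register and the boolean flag stands in for the First Call? primitive.

```python
# Functional global with a defined initial value, applied on the first call
# only, without requiring the caller to perform an explicit "write" first.

def make_functional_global(initial_value=0):
    state = {"value": None, "first_call": True}

    def fg(action, data=None):
        if state["first_call"]:             # First Call? returns TRUE exactly once
            state["value"] = initial_value  # use the defined initial value
            state["first_call"] = False
        if action == "write":
            state["value"] = data
        return state["value"]               # "read" just returns the stored value

    return fg

counter = make_functional_global(initial_value=42)
print(counter("read"))        # 42 -- defined initial value without a prior write
print(counter("write", 7))    # 7
print(counter("read"))        # 7
```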
silmaril Posted September 11, 2007

QUOTE(yen @ Sep 8 2007, 09:36 PM): You can do that by using the First Call? primitive. If it outputs T, you use your initial value.

No! That's way too simple! That doesn't count!!!!!!! ...OK, maybe you've got a point here.