
LV2-style globals: Ever wondered whether to use While Loops or For



QUOTE(Jim Kring @ Apr 7 2007, 10:21 AM)

I didn't notice anyone mention this yet, so I figured that I would throw it out there:

I believe that the While Loop, instead of the For Loop, was chosen for LV2-style globals because the While Loop version required fewer editing steps to create. You might be thinking, "but it takes just as many editing steps to wire a FALSE to the While Loop's return selector as it takes to wire a numeric constant to a For Loop's count (N) terminal". Yes, that's true now, but back in the good ol' days (LV <= 5.1, if I vaguely recall) you didn't have to wire anything to a While Loop's return selector, and it would default to FALSE if unwired (in the "Continue if TRUE" mode, as there was no "Stop if TRUE" setting for the return selector back then). This means that if the return selector was unwired, a While Loop would execute one, and only one, time -- perfect for an LV2-style global ;)

Another editing step that takes longer with For Loop FGs is disabling the auto-indexing that is on by default; it's generally not the desired behavior with FGs. That's the reason I've personally stayed away from them.
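For readers coming from text-based languages: an LV2-style (functional) global is persistent state hidden behind a subVI call, conceptually similar to a closure over mutable state. A minimal Python sketch of the idea (the function names here are mine, not LabVIEW terminology):

```python
# Hypothetical Python analogue of an LV2-style (functional) global.
# The closed-over list plays the role of the uninitialized shift register,
# and each call corresponds to one pass through the single-iteration loop.
def make_functional_global():
    state = [None]  # persists across calls, like the shift register

    def fg(action, value=None):
        if action == "write":
            state[0] = value
        return state[0]  # a "read" simply returns the stored value

    return fg

counter = make_functional_global()
counter("write", 42)
assert counter("read") == 42
```

The list `state` survives between calls just as an uninitialized shift register survives between calls of the VI; the single-iteration loop corresponds to one invocation of the function.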


QUOTE(JFM @ Apr 5 2007, 08:14 AM)

Tomi, I think you are safe :)

If I remember correctly, bsvingen did some testing of this in another thread (that I can not currently find).

The result, as I recall them, was that the LV2 global was faster than using queues.

Hmm, I think we should place some bets here. LAVA could use some donations ;) .

My money goes on the queue.


QUOTE(Michael_Aivaliotis @ Apr 18 2007, 09:36 AM)

Hmm, I think we should place some bets here. LAVA could use some donations ;) .

My money goes on the queue.

I haven't really tested performance of queues vs. FGs (at least not since queues became primitives).

As long as the queue is only replacing a single array in memory, I think the queue might be as fast as an FG. But when the functional global contains more than one array of data, my guess is that the FG would be faster.

Anyway, a really interesting topic.

/J
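As a rough, non-LabVIEW illustration of the kind of comparison being debated, here is a Python sketch of a micro-benchmark pitting a closure-based store against a single-element queue used as shared storage. The timings say nothing about LabVIEW's primitives; only the access pattern is the point:

```python
import time
from queue import Queue

# Closure-based store, standing in for a functional global (names are mine).
def make_store():
    state = [0]
    def store(action, value=None):
        if action == "write":
            state[0] = value
        return state[0]
    return store

store = make_store()
q = Queue(maxsize=1)  # single-element queue used as a shared variable
q.put(0)

N = 100_000

t0 = time.perf_counter()
for i in range(N):
    store("write", i)
    _ = store("read")
t_fg = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(N):
    _ = q.get()   # write: dequeue the old value...
    q.put(i)      # ...and enqueue the new one
    v = q.get()   # read: dequeue the value...
    q.put(v)      # ...and immediately put it back
t_q = time.perf_counter() - t0

print(f"closure store: {t_fg:.4f}s  queue: {t_q:.4f}s")
```

On CPython the relative numbers depend on interpreter overhead, so this only demonstrates the two patterns, not which LabVIEW construct wins.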


QUOTE(Aristos Queue @ Mar 28 2007, 10:30 PM)

So I asked Jeff K (father of LV, deep knowledge of diagram optimization). He says that we can indeed constant-fold a While Loop with a constant wired to the Stop terminal. It's the only time we can constant fold the contents of a while loop, but we can do it. He says that there's no advantage to using one over the other.

Slightly off-topic but related: if you can constant-fold a single-iteration while loop, does this affect buffer allocations at tunnels into and out of those loops? I frequently pass a cluster into a single-iteration while loop, replace an element, and then pass the cluster back out again. I've started using shift registers in place of tunnels in this situation to avoid buffer allocations. Will LabVIEW optimize this for me even if I use a tunnel, because it knows the loop iterates exactly once?


QUOTE(ned @ May 15 2007, 10:34 AM)

Slightly off-topic but related: if you can constant-fold a single-iteration while loop, does this affect buffer allocations at tunnels into and out of those loops? I frequently pass a cluster into a single-iteration while loop, replace an element, and then pass the cluster back out again. I've started using shift registers in place of tunnels in this situation to avoid buffer allocations. Will LabVIEW optimize this for me even if I use a tunnel, because it knows the loop iterates exactly once?

I'm not sure if LabVIEW folding the loop has any effect on buffer allocations. Even if it does, my guess is that LabVIEW would still have more difficulty determining which two tunnels go together in this case if you don't use shift registers. For simple algorithms it might be trivial, but when traversing multiple cases of a case structure, the path might not be clear. At best LabVIEW might be able to find the inplace path through the loop, but using shift registers is a big hint to LabVIEW. Stick with shift registers.

I was surprised to learn that LabVIEW can inplace various other tunnel forms, such as paired input and output auto-indexing tunnels! I never knew that and am very glad to know, since this is the quickest way to operate on array elements. You still run the risk that LabVIEW won't recognize the path through the loop, though, so complex algorithms might even benefit from a single-iteration while loop with shift registers inside the for loop. Don't quote me on that, as I've never seen it in practice. Just a thought :)


QUOTE(ragglefrock @ May 16 2007, 06:21 AM)

You still run the risk that LabVIEW won't recognize the path through the loop,

I believe I've seen somewhere that you can force the compiler to use the same memory by adding a case structure inside the loop and have at least one case that just connects the tunnels.

Ton


QUOTE(Jim Kring @ Apr 7 2007, 05:21 PM)

I didn't notice anyone mention this yet, so I figured that I would throw it out there:

I believe that the While Loop, instead of the For Loop, was chosen for LV2-style globals, because the While Loop version required fewer editing steps to create.

Another point: the border of a While Loop is easier to hit with the right mouse button than the border of a For Loop, so it's easier to create a shift register. Even if the *invisible size* of the For Loop's border equals the border size of the While Loop, I'm always trying to hit that small line.

e.g. I mostly do not create shift registers in a For Loop by hand, but drag the wires and choose "Replace with Shift Register" on the tunnels, which is more convenient


I was curious and created my own benchmark - I'm getting quite different results with LV 8.5:

While Loop: Baseline

Floating: ~same

For Loop: ~2.5× faster writing / ~4× faster reading

(I did use a different method with the For Loop: 0 iterations = read, 1 iteration = write. Testing with a nested case structure was still faster than the other methods.)

A quick check with LV 8.2 shows only a 1.2-1.3× performance increase with For Loops.
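The 0-iterations-read / 1-iteration-write trick works because a For Loop with N = 0 never executes its body, so a shift register simply passes its stored value through unchanged. A hypothetical Python analogue of that control flow (variable names are mine):

```python
# Sketch of the For-Loop read/write trick described above.
state = [0]  # stands in for the uninitialized shift register

def fg_for_loop(iterations, new_value=None):
    for _ in range(iterations):   # N = 0: body skipped -> pure read
        state[0] = new_value      # N = 1: body runs once -> write
    return state[0]               # the "shift register" value flows out

fg_for_loop(1, 99)            # write
assert fg_for_loop(0) == 99   # read leaves the state untouched
```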


QUOTE(Jim Kring @ Sep 7 2007, 10:22 AM)

Just learned that yesterday at the local LabVIEW day here in the Netherlands, presented by Jeff Washington.

His example had a loop but was about pipelined execution, and boy, I can tell you that although I'm excited about this feature, it does take some getting used to. Basically, with this node you sort of have to forget a few things about data flow and wire dependency.

And yes, Jeff mentioned that the original Feedback Node was implemented by an intern. They had assumed he had implemented it simply as a folded shift register, but that turned out not to be the case, which is why it was much slower than a shift register. In 8.5, however, Jeff claimed that the Feedback Node should behave exactly like a shift register in every aspect we as users could possibly measure.

Probably there is also already an NI patent pending for it :-)

Rolf Kalbermatter


QUOTE(tcplomp @ Sep 7 2007, 02:38 PM)

And just to show what the pre LabVIEW 8.5 version of this code would look like:

http://forums.lavag.org/index.php?act=attach&type=post&id=6885

Forget about the unwired loop termination! ;-) That is not the point here.

And if anyone wonders why one would write a VI in such a way: it is called pipelined execution and has advantages on multi-core or multi-CPU machines, as LabVIEW will simply distribute the different blocks onto different CPUs/cores where possible. On single-core systems it has no real disadvantage in terms of execution speed, but this construct does take a memory hit, because the shift registers store double the data between iterations compared to what a linear execution would need.

Rolf Kalbermatter
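For the curious, the pipelining idea Rolf describes can be sketched in sequential Python (purely illustrative; in LabVIEW the parallelism falls out of dataflow automatically). The held value plays the role of the shift register, so stage 1 of element i and stage 2 of element i-1 have no data dependency and could run on separate cores:

```python
# Two-stage pipeline sketch. Stage functions are invented for illustration.
def stage1(x):
    return x * 2

def stage2(x):
    return x + 1

def pipelined(inputs):
    results = []
    held = None  # "shift register": carries stage1's output to the next iteration
    for x in inputs:
        s1 = stage1(x)                    # works on the current element
        if held is not None:
            results.append(stage2(held))  # works on the PREVIOUS element
        held = s1
    if held is not None:
        results.append(stage2(held))      # flush the last value from the pipeline
    return results

assert pipelined([1, 2, 3]) == [3, 5, 7]
```

The memory cost Rolf mentions is visible here too: `held` keeps an extra copy of intermediate data alive between iterations that a straight `stage2(stage1(x))` chain would not need.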


QUOTE(rolfk @ Sep 7 2007, 09:53 PM)

And just to show what the pre LabVIEW 8.5 version of this code would look like:

Those two images are a really nice example that teaches me to use the traditional approach with shift registers rather than the unowned Feedback Node.

With the traditional version, I recognize the pipelining at once. Even if the new version does in fact do the same thing, it still looks like sequential code if you don't look twice.

I don't think it's very intuitive.

The unowned feedback node still has one advantage: If you use it to build a functional global, you can now have a defined initial value without the need to write to the FGV at least once! :thumbup:

As far as I can see, this can't be done using the "old" shift registers.
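The point about a defined initial value maps, in a textual analogue, to initializing the stored state at construction time rather than on first write. A tiny Python sketch of the same idea (names are mine):

```python
# Functional global with a defined initial value, mirroring a Feedback Node
# that carries an initializer terminal.
def make_fgv(initial):
    state = [initial]  # defined before any write ever happens

    def fgv(action, value=None):
        if action == "write":
            state[0] = value
        return state[0]

    return fgv

fgv = make_fgv(initial=0)
assert fgv("read") == 0  # valid read before the first write
```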

