crelf

"Required" terminals are more efficient?


I just heard an interesting tidbit at a local LabVIEW User Group meeting: one of the attendees said that he'd heard an NI presenter at a recent NI Developer Days session comment that changing the status of a connector pane terminal from "Optional" or "Recommended" to "Required" made it more efficient (I'm not sure if that meant memory or speed or both). Anyone heard of this one?


QUOTE(crelf @ Jun 7 2007, 04:30 PM)

I just heard an interesting tidbit at a local LabVIEW User Group meeting: one of the attendees said that he'd heard an NI presenter at a recent NI Developer Days session comment that changing the status of a connector pane terminal from "Optional" or "Recommended" to "Required" made it more efficient (I'm not sure if that meant memory or speed or both). Anyone heard of this one?

No.

I wonder if they confused that with the "controls on icon connector should be on the root" tidbit.

Paraphrasing: "Talk is cheap, show me the benchmarks."

Ben


It may very well be true. In theory, required inputs can be optimized better when the VI is compiled. For a non-required input, the VI needs to create a memory buffer for that particular input; for a required input, the VI can more easily reuse the caller's buffer. This benefit would be lost if LabVIEW performed optimizations that depended on the block diagram environment into which a VI is dropped, but it doesn't do that.


QUOTE(Tomi Maila @ Jun 7 2007, 11:11 PM)

if LabVIEW performed optimizations that depended on the block diagram environment into which a VI is dropped, but it doesn't do that.

Tomi,

could you explain what you mean here?

'block diagram environment'?

Ton

QUOTE(Tomi Maila @ Jun 7 2007, 02:11 PM)
It may very well be true. In theory, required inputs can be optimized better when the VI is compiled. For a non-required input, the VI needs to create a memory buffer for that particular input; for a required input, the VI can more easily reuse the caller's buffer.
This doesn't make sense to me. A required input just indicates to the user that they need to connect the wire. The only difference I see is that in one case the wire is connected to the input terminal and in the other case it is not. Perhaps this makes a difference, however the "required" flag by itself does not carry any valuable info for the compiler. Of course, there may be some black magic that only NI knows but at face value, I don't see it.


QUOTE(Michael_Aivaliotis @ Jun 7 2007, 02:38 PM)

This doesn't make sense to me. A required input just indicates to the user that they need to connect the wire. The only difference I see is that in one case the wire is connected to the input terminal and in the other case it is not. Perhaps this makes a difference, however the "required" flag by itself does not carry any valuable info for the compiler. Of course, there may be some black magic that only NI knows but at face value, I don't see it.

If a required input/connector is not wired then you get a broken wire -- meaning, among other things, that the LV code can't be "compiled". I think Tomi got it right -- without a required input/connector a memory buffer is allocated. How much "real world" impact that makes in a particular project would depend on a number of factors obviously but, if the project were large enough, and the computing resource limited enough, it could be discernible...

At least that's the theory, or my story (so far) and I'm sticking with it.


QUOTE(Val Brown @ Jun 7 2007, 03:20 PM)

If a required input/connector is not wired then you get a broken wire -- meaning, among other things, that the LV code can't be "compiled". I think Tomi got it right -- without a required input/connector a memory buffer is allocated.

Well, I think that's stating the obvious. If you DO have the wire connected however, and the input is marked as required, then what? Does it make a difference?


QUOTE(Ben @ Jun 8 2007, 06:54 AM)

What does that mean?

QUOTE(Tomi Maila @ Jun 8 2007, 07:11 AM)

In theory, required inputs can be optimized better when the VI is compiled. For a non-required input, the VI needs to create a memory buffer for that particular input; for a required input, the VI can more easily reuse the caller's buffer.

Can you flesh this out a little more Tomi? I *think* I know what you're trying to say, but it's not completely clear...

QUOTE(crelf @ Jun 8 2007, 06:30 AM)

...NI Developer Days session comment that changing the status of a connector pane terminal from "Optional" or "Recommended" to "Required" made it more efficient...

Anyone from NI care to comment on whether this was actually said in an NI Developer Days session?


QUOTE(crelf @ Jun 7 2007, 04:30 PM)

I just heard an interesting tidbit at a local LabVIEW User Group meeting: one of the attendees said that he'd heard an NI presenter at a recent NI Developer Days session comment that changing the status of a connector pane terminal from "Optional" or "Recommended" to "Required" made it more efficient (I'm not sure if that meant memory or speed or both). Anyone heard of this one?

This looks like it's false and it doesn't make any difference...

I just created a quick VI that contains a subVI. I originally set the connector on the subVI as optional... then built the application as an EXE. I then used a Win32 disassembler to convert the EXE into an ASM file. Then I took the subVI, marked the connector as required, built, and disassembled again. I did a compare between the two disassembled files. No difference: the files were identical.

I did this several times, and also tried dropping the subVI several times onto the BD of the top-level VI. In all instances, changing the connector pane between optional and required made no difference in the EXE produced.


QUOTE(Jeff Plotzke @ Jun 8 2007, 11:53 AM)

This looks like it's false and it doesn't make any difference...

Nice job Jeff - just as I expected. I wonder why the people I spoke to today said they'd all heard the NI employee say it? I hope it was just some sort of misunderstanding...


Some coworkers and I went to the Developer Education Day here in Phoenix. I didn't stay for the "Performance Optimization for Embedded LabVIEW Applications" discussion, but they did say that required inputs can reduce memory allocations. There isn't much info in the slides, but you can see them here:

ftp://ftp.ni.com/pub/events/labview_dev_e...ced_labview.pdf

I get the impression that this applies to all versions of LabVIEW, not just embedded. See slide #30.

I think what they were saying (based on an email from a coworker who did see the presentation) is that LabVIEW needs to set aside memory for the default values of recommended/optional inputs. It might not need to do this for required inputs if the "inplaceness" algorithm determines that a copy doesn't need to be made. If so, the subVI can use the same memory that was allocated in the calling VI.
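A rough C analogy of that idea (my own sketch, not NI's actual implementation; the array size and function names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define N 4  /* hypothetical array-input size, for illustration only */

/* If the compiler knows an input is always wired (required), the subVI
 * can operate on the caller's buffer in place: no allocation, no copy. */
static void inc_inplace(uint32_t buf[N]) {
    for (int i = 0; i < N; i++)
        buf[i] += 1;
}

/* If the input might be unwired (recommended/optional), the subVI must
 * keep a separate buffer: fill it from the input when one is wired, or
 * with the terminal's default value (0 here) when it is not. */
static void inc_copy(const uint32_t *maybe_in, uint32_t out[N]) {
    if (maybe_in != NULL)
        memcpy(out, maybe_in, N * sizeof out[0]);  /* extra copy */
    else
        memset(out, 0, N * sizeof out[0]);         /* default value */
    for (int i = 0; i < N; i++)
        out[i] += 1;
}
```

In the first case the caller's memory is reused; in the second, memory for the default values has to be set aside whether or not the input ends up wired, unless (as the slides suggest) the inplaceness analysis can prove the copy unnecessary.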

I'm thinking Jeff's results could be one of three things:

1) The "inplaceness" algorithm determined that his subVI needed a copy, in which case memory has to be set aside regardless of required/recommended/optional settings.

2) When building an executable, LabVIEW does additional optimization and gets rid of any memory allocations for default values of inputs that are wired.

3) The slides above only apply to embedded.

Of course, it could be that the slides are wrong. Or maybe they apply to versions of LV that aren't released and the presenter didn't realize that. It wouldn't be the first time that an NI employee said something only to come back and say something like, "Sorry, I'm using an alpha copy of LV9. What I said doesn't apply right now."

Pat


QUOTE(Jeff Plotzke @ Jun 7 2007, 06:53 PM)

I just created a quick VI that contains a subVI. I originally set the connector on the subVI as optional... then built the application as an EXE. I then used a Win32 disassembler to convert the EXE into an ASM file. Then I took the subVI, marked the connector as required, built, and disassembled again. I did a compare between the two disassembled files. No difference: the files were identical.

Uh oh... it sounds like you just violated your license agreement :P

QUOTE

Restrictions. You may not: (i) reverse engineer, decompile, or disassemble the SOFTWARE


QUOTE(Ben @ Jun 7 2007, 10:54 PM)

QUOTE(crelf @ Jun 8 2007, 03:14 AM)

What does that mean?

I think Ben means that if you want to make good (and fast) use of in-place work on data that goes in and out (like a +1 operation), it is best to put the terminals on the bare block diagram, meaning not inside any structure (like an error-testing case structure).

Ton


I decided to test this issue. I wrote two identical VIs from scratch: one with a required connector and the other with a recommended connector. Both VIs compute +1 for a U32.

I made a test VI that runs both of these VIs in two parallel loops and measures which loop runs faster. Then I made another test VI that is identical to the first, but with the required subVI replaced by the recommended subVI and vice versa. Both test VIs show that the VI with the required input runs faster than the VI with the recommended input. I considered the possibility that LV schedules VIs according to their names, so I switched the names of the VIs; the required VI was still faster. The speed advantage for this task was about 1%.

Let's see what could happen under the hood of LabVIEW. If we have a subVI with a required connection, the LabVIEW compiler knows that there is always an input buffer connected to the required terminal. On the other hand, for a VI with a recommended or optional terminal, LabVIEW doesn't know at compile time whether a buffer is connected to the input terminal, so it needs to insert a snippet of code to allow for both options.

Let's speculate what happens when LabVIEW calls a VI with required input.

- The caller pushes the input buffer pointer onto the stack

- The caller jumps to the entry point of the subVI

- The subVI pops the input buffer pointer from the stack

- The value of the input buffer is incremented by one inside the subVI

- The subVI returns

Let's also speculate what happens when LabVIEW calls a VI with recommended input.

- The caller pushes the input buffer pointer onto the stack

- The caller jumps to the entry point of the subVI

- The subVI pops the input buffer pointer from the stack

- The subVI checks if the input buffer pointer is a NULL pointer

- If it's a NULL pointer, LV creates a new buffer for the evaluation

- The value of the chosen buffer is incremented by one inside the subVI

- The subVI returns

Actually, the recommended case can be even worse if the in-placeness algorithm cannot optimize for reuse of the input buffer. Even when there is enough information available to reuse the input buffer, that doesn't mean LV actually does so. In that case, memory will be copied once or even twice each time the subVI is called.

Tomi


QUOTE(Jim Kring @ Jun 8 2007, 02:02 PM)

Doesn't "SOFTWARE" mean LabVIEW itself? Are the VIs we create part of that?

QUOTE(tcplomp @ Jun 8 2007, 02:07 PM)

I think Ben means that if you want to make good (and fast) use of in-place work on data that goes in and out (like a +1 operation), it is best to put the terminals on the bare block diagram, meaning not inside any structure (like an error-testing case structure).

I see - yep, that makes sense...

QUOTE(Tomi Maila @ Jun 8 2007, 05:33 PM)

That makes sense to me.

QUOTE(yen @ Jun 8 2007, 06:43 PM)

See Greg's (!) explanations in this thread.

That's a most-excellent thread - thanks for the link! :thumbup:


QUOTE(crelf @ Jun 7 2007, 08:30 PM)

That's me. I'm back, for more trouble. :shifty:

QUOTE(lavezza @ Jun 8 2007, 04:02 AM)

Some coworkers and I went to the Developer Education Day here in Phoenix. I didn't stay for the "Performance Optimization for Embedded LabVIEW Applications" discussion, but they did say that required inputs can reduce memory allocations. [...]

Exactly what I was told.

QUOTE(yen @ Jun 8 2007, 08:43 AM)

See Greg's (!) explanations in this thread.

Now you've got me curious how the required-terminal and inside/outside-of-structure variations interact. Can you get good performance with a terminal inside a structure as long as it's a required terminal on the connector pane? Or do you have to do it right both ways?

At this rate, I'll need about two weeks to go back and fix all my subVIs.


QUOTE(torekp @ Jun 8 2007, 10:51 AM)

...

At this rate, I'll need to take about 2 weeks to go back and fix all my subvi's.

The good news is that Darren's VI Analyzer will find all of the VIs that need to be changed.

Ben


QUOTE(Tomi Maila @ Jun 8 2007, 02:33 AM)

Both of these test VIs show that VI with required input runs faster than the VI with recommended input. I considered the option that LV schedules VIs according to their name so I switched the names of the VIs. Still the required VI was faster. The speed advantage for this task was about 1%.

When I run your VI, I get better performance for the R case in Test.vi, but better performance for the O case in Test_inverse.vi. I wondered if it had something to do with the fact that the two loops were running in parallel, so I put them sequentially. In this case, I get faster performance in the O case 5 times out of 6. The spread is smaller in my case, about 5 ms or 0.1%.


QUOTE(Jim Kring @ Jun 7 2007, 11:02 PM)

Sorry, I'm lazy, but does the SOFTWARE also include executables created with LabVIEW, or just LabVIEW itself?

QUOTE(Jeff Plotzke @ Jun 7 2007, 08:53 PM)

This looks like it's false and it doesn't make any difference...

I just created a quick VI that contains a subVI. I originally set the connector on the subVI as optional... then built the application as an EXE. I then used a Win32 disassembler to convert the EXE into an ASM file. Then I took the subVI, marked the connector as required, built, and disassembled again. I did a compare between the two disassembled files. No difference: the files were identical.

I did this several times and tried also with dropping down the subVI several times on the BD of the top level VI. Still in all instances, changing the connector pane between optional or required made no difference in the EXE produced.

Your test most probably shows nothing. The only code your disassembler can see is the startup stub that loads the LabVIEW runtime system and then passes the reference to the internal LLB to it. The machine code located in the VIs inside that LLB can only be located and invoked by LabVIEW. There is no disassembler that could possibly know how to find the LLB, let alone the VIs inside it or the LabVIEW-generated machine code in each VI.

Basically every LabVIEW executable of a given LabVIEW version will give you exactly the same assembly code.

Rolf Kalbermatter


QUOTE(rolfk @ Jun 10 2007, 06:07 PM)

I assumed that "SOFTWARE" meant LabVIEW (and other included NI software)... not executables created with LabVIEW, but I'll have to read the license to find out for sure...

QUOTE(rolfk @ Jun 10 2007, 06:07 PM)

Your test most probably shows nothing. The only code your disassembler can see is the startup stub that loads the LabVIEW runtime system and then passes the reference to the internal LLB to it. The machine code located in the VIs inside that LLB can only be located and invoked by LabVIEW. There is no disassembler that could possibly know how to find the LLB, let alone the VIs inside it or the LabVIEW-generated machine code in each VI.

Basically every LabVIEW executable of a given LabVIEW version will give you exactly the same assembly code.

You're absolutely right -- I just created a considerably different VI from what I tested with originally... and *cough* *cough* *cough* -- The exact same disassembled code was created.

That's interesting... So, does this mean that a built EXE actually contains some intermediate language (in the LLB) that's interpreted by the runtime engine or does LV actually generate the machine code while it builds?


QUOTE(Jeff Plotzke @ Jun 10 2007, 05:24 PM)

That's interesting... So, does this mean that a built EXE actually contains some intermediate language (in the LLB) that's interpreted by the runtime engine or does LV actually generate the machine code while it builds?

No. I'm not sure why Rolf thinks the assembly code isn't in there somewhere. The assembly code for all the VIs is definitely in the EXE. The initial code starts the EXE and then begins executing "clumps" of assembly as they become available to the thread scheduler (the top-level VI schedules its initial clumps to run; as each clump's inputs become available, it joins the list of clumps that can be executed). LV doesn't have a call stack for subroutines -- just the scheduler picking up a clump, executing it, then scheduling the next batch of clumps. Anyway, the long and short of it is that there is assembly code in there for the VIs. I don't know why you're not seeing it.
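As a toy model of that scheduling idea (my own C sketch, nothing like NI's actual data structures; every name here is invented):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CLUMPS 8

/* Toy dataflow scheduler: a clump becomes runnable once all of its
 * inputs have been produced; running it may make downstream clumps
 * runnable in turn. There is no call stack, only the run queue. */
typedef struct Clump {
    int pending_inputs;        /* inputs not yet available */
    int (*run)(void);          /* stands in for the clump's compiled code */
    struct Clump *enables[2];  /* downstream clumps fed by this one */
    int n_enables;
} Clump;

static Clump *run_queue[MAX_CLUMPS];
static int q_head, q_tail;

static void schedule(Clump *c) {
    run_queue[q_tail++] = c;   /* no bounds check: toy code */
}

/* Drain the queue: execute each runnable clump, then wake any
 * downstream clump whose last missing input it just supplied.
 * Returns the result of the last clump executed. */
static int run_all(void) {
    int result = 0;
    while (q_head < q_tail) {
        Clump *c = run_queue[q_head++];
        result = c->run();
        for (int i = 0; i < c->n_enables; i++)
            if (--c->enables[i]->pending_inputs == 0)
                schedule(c->enables[i]);
    }
    return result;
}

/* Example "clump bodies": two producers feeding one consumer. */
static int a_val, b_val;
static int produce_a(void) { return a_val = 20; }
static int produce_b(void) { return b_val = 22; }
static int consume(void)   { return a_val + b_val; }
```

With the two producer clumps scheduled initially and the consumer waiting on both, the consumer only enters the queue after both producers have run, much as described above.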

This is probably the sort of information that you're not supposed to know -- because you're not supposed to be reverse engineering the assembly according to the license agreement -- but I figured I'd tell you because I don't want some rumor to start that LV is somehow an intermediate interpreted language like Java. That sort of nonsense would just contribute to perceptions -- right or wrong -- of LV as not a real language and too slow for real work. LabVIEW is a compiled language.


QUOTE(Aristos Queue @ Jun 11 2007, 01:58 PM)

The assembly code for all the VIs is definitely in the EXE.

Thanks Aristos - do you, or anyone else you can find at NI, have anything to add about the "required" vs "recommended" terminal debate? I think, without inside knowledge, the rest of us are just guessing at this point...

