Using LV Class vs Passing Cluster



In the development environment, whenever you say File >> Open and load a top-level VI, the LC and FP load into memory. If the VI was last saved in a bad or broken state, or if it turns out to be bad/broken after it finishes loading its dependencies, then the BD also loads into memory.

This actually means that if I break a deep VI (e.g. from my reuse lib), I'll have a much bigger 'load' when I open Main.vi than when everything is fine?

To be more specific, if I load a VI using VI Server (not intending to run it), I should see a big performance penalty if a subVI is broken/bad?

Felix


I'll point out that this discussion reminds me of a common practice I follow while coding in LabVIEW: close VIs after editing them. More than once I have worked on a VI that is called in a tight loop, only to watch the performance of the calling VI get destroyed by leaving the front panel open. As a habit, I close all but the top-level VI whenever I'm testing the execution of something (except when I need to probe values).

This actually means that if I break a deep VI (e.g. from my reuse lib), I'll have a much bigger 'load' when I open Main.vi than when everything is fine?

To be more specific, if I load a VI using VI Server (not intending to run it), I should see a big performance penalty if a subVI is broken/bad?

Not in the runtime engine (because no BD there), but yes in the dev env. Of course, my first thought is, "How often are you loading broken VIs as part of a running app?" Most of the time, it seems to me, this wouldn't be a big deal. I mean, if you're loading a broken VI, my guess is that it probably isn't a part of the hierarchy you're trying to run, so there are few callers of it that would also be broken. If it is something like a template VI, you're going to need the block diagram anyway to fill in whatever is missing with scripting.

Thanks Stephen! That really helps fill in the missing pieces. :star: (I've never seen this information in a white paper. Do you know if one exists?)

Say you're editing a typedef cluster and all the dependent LC blocks are loaded, but their FPs/BDs are not. When the typedef's changes are applied, LV finds all the dependent VIs, loads their block diagrams, and makes the necessary edits transparently to the user (i.e. without opening the block diagram windows). Since the dependent VIs are now dirty, their FPs are also loaded into memory, forcing data copies for those FP controls when the app is executed, even though the FP windows aren't open, right?

Does this behavior change when separating compiled code in LV10? My initial thought is that LV ought to be able to recompile the dependent VI's source code and remove the FP/BD from memory without marking it dirty, simply waiting until the user opens that VI before giving it a dirty dot. But that could lead to confusing behavior when multiple typedef changes are applied sequentially (like what happens when a VI's LC block isn't loaded when changes are applied), so I'm guessing no.

The one open question I still don't have an answer to is: why does the FP have to be loaded when the BD is loaded? My previous explanation got blown away and now I'm really curious.

I created a flow chart to graphically represent what you described. If it's accurate enough I'll throw it up on the wiki. (OpenOffice source file attached in case anyone wants to edit it.)

[Attached image: flow chart of when a VI's front panel and block diagram load into memory]

LoadingLabviewVI.zip


Not in the runtime engine (because no BD there), but yes in the dev env. Of course, my first thought is, "How often are you loading broken VIs as part of a running app?" Most of the time, it seems to me, this wouldn't be a big deal. I mean, if you're loading a broken VI, my guess is that it probably isn't a part of the hierarchy you're trying to run, so there are few callers of it that would also be broken. If it is something like a template VI, you're going to need the block diagram anyway to fill in whatever is missing with scripting.

I'm unsure how frequently this happens. Just knowing about this load penalty will enforce a 'never-save-a-broken-build' rule and drastically reduce the issue.

But I had two scenarios 'theoretically' in mind:

1. Running a scripting VI, such as a VI Analyzer test, on the top-level VI while somewhere down the hierarchy there is still unfinished work (assume it's another developer's job to write the driver and you are just working on the top-level GUI).

2. Having a changed ConPane in the reuse lib (new version) and propagating this change to an older project, so all kinds of relinking issues.

1. & 2. That's what I actually had in mind, a kind of 'merging' of projects and which strategy to use (bottom-up, top-down). I'm not far enough with the project to detail if it (this behavior) would affect that idea. But I already see that performance is an issue for the general architecture, so I'm concerned.

Felix

(I've never seen this information in a white paper. Do you know if one exists?)
No idea.
Say you're editing a typedef cluster and all the dependent LC blocks are loaded, but their FPs/BDs are not. When the typedef's changes are applied, LV finds all the dependent VIs, loads their block diagrams, and makes the necessary edits transparently to the user (i.e. without opening the block diagram windows). Since the dependent VIs are now dirty, their FPs are also loaded into memory, forcing data copies for those FP controls when the app is executed, even though the FP windows aren't open, right?
Yes. All that is correct.

There's another variant -- the FP could be the thing hosting the typedef, and when the typedef changes, LV will load both the FP and the BD into memory in order to update the FP. The reason the BD loads is because the VI has to recompile to deal with the typedef change.
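
Not real LabVIEW code, obviously, but here is a rough Python sketch of the propagation behaviour described above. Every name in it (VI, apply_typedef_change, and so on) is invented purely for illustration: when the typedef changes, each dependent VI gets its BD loaded for the recompile, is marked dirty, and therefore ends up with its FP in memory as well.

```python
# Rough, hypothetical model of the typedef propagation described above.
# None of these names exist in any LabVIEW API; they just mirror the prose.

class VI:
    def __init__(self, name, typedef_on_fp=False):
        self.name = name
        self.typedef_on_fp = typedef_on_fp  # does the FP host the typedef?
        self.fp_loaded = False              # front panel in memory?
        self.bd_loaded = False              # block diagram in memory?
        self.dirty = False                  # unsaved edits ("dirty dot")

def apply_typedef_change(dependent_vis):
    """Apply a typedef edit to every VI that uses the typedef."""
    for vi in dependent_vis:
        vi.bd_loaded = True      # the BD is needed for the recompile
        if vi.typedef_on_fp:
            vi.fp_loaded = True  # the FP hosts the typedef, so it is edited too
        vi.dirty = True          # the edit marks the VI dirty...
        vi.fp_loaded = True      # ...and a dirty VI keeps its FP in memory

callers = [VI("Caller.vi"), VI("Panel.vi", typedef_on_fp=True)]
apply_typedef_change(callers)
assert all(vi.fp_loaded and vi.bd_loaded and vi.dirty for vi in callers)
```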

Does this behavior change when separating compiled code in LV10? My initial thought is that LV ought to be able to recompile the dependent VI's source code and remove the FP/BD from memory without marking it dirty, simply waiting until the user opens that VI before giving it a dirty dot. But that could lead to confusing behavior when multiple typedef changes are applied sequentially (like what happens when a VI's LC block isn't loaded when changes are applied), so I'm guessing no.
I know that it does change. I do not know how dramatic the change is. Your analysis sounds completely plausible, but I haven't really stayed up-to-date on the source-obj splitting.
The one open question I still don't have an answer to is: why does the FP have to be loaded when the BD is loaded? My previous explanation got blown away and now I'm really curious.
Up until LV 2009, the default values of controls were stored as part of the front panel, so any time the VI wanted to recompile, the panel was actually necessary to generate the code. Nowadays the default values have been moved out of the panel, and we are inching closer to the day when the recompile could happen with just the diagram, without the panel, but there are 25 years' worth of code that assumes "if I have the diagram then I know the panel is in memory, so I don't have to test for NULL". It's a low-priority refactoring that is ooching forward.
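
To put that "don't have to test for NULL" assumption in concrete terms, here is a hypothetical Python sketch; none of these names come from the actual LabVIEW code base, they only illustrate the kind of call site that has to be audited before the panel can stay unloaded.

```python
# Hypothetical illustration of the legacy invariant described above; all of
# these names are invented for the sketch.

class Panel:
    def control_defaults(self):
        return {"numeric": 0.0}

class VI:
    def __init__(self, diagram, panel=None, saved_defaults=None):
        self.diagram = diagram                # block diagram (needed to compile)
        self.panel = panel                    # front panel, or None if not loaded
        self.saved_defaults = saved_defaults  # post-2009: defaults stored outside the FP

def recompile_old(vi):
    # Old-style call site: "if I have the diagram, the panel is in memory",
    # so the panel is dereferenced with no NULL/None check.
    return vi.panel.control_defaults()

def recompile_new(vi):
    # What each such call site must become before the panel can stay unloaded.
    return vi.panel.control_defaults() if vi.panel else vi.saved_defaults

print(recompile_new(VI(diagram="...", saved_defaults={"numeric": 0.0})))
```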
I created a flow chart to graphically represent what you described. If it's accurate enough I'll throw it up on the wiki.
On the "reasons for FP being in memory", add "VI is tagged dirty". Same for BD.

Do put a note on it that it is LV2010 specific and does not cover the case where source and obj files are saved separately.
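
For anyone who can't open the flow chart, here is my attempt to summarise the same rules as a short Python sketch. Again, the function names and flags are made up for illustration, and this only describes LV2010 with source and compiled code not separated.

```python
# Hypothetical summary of the loading rules discussed in this thread
# (LabVIEW 2010, compiled code NOT separated from the source). All names invented.

def bd_in_memory(vi):
    """Reasons the block diagram ends up in memory."""
    return (vi["bd_window_open"]
            or vi["broken_or_bad"]        # a broken VI needs a recompile, so the BD loads
            or vi["edited_during_load"]
            or vi["tagged_dirty"])

def fp_in_memory(vi):
    """Reasons the front panel ends up in memory."""
    return (vi["fp_window_open"]
            or vi["edited_during_load"]
            or vi["tagged_dirty"]
            or bd_in_memory(vi))          # today, a loaded BD drags the FP in too

vi = {"fp_window_open": False, "bd_window_open": False, "broken_or_bad": True,
      "edited_during_load": False, "tagged_dirty": False}
print(fp_in_memory(vi), bd_in_memory(vi))   # True True: a broken VI pulls in both
```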


On the "reasons for FP being in memory", add "VI is tagged dirty". Same for BD.

I'm assuming "edited during load" and "tagged dirty" are interchangeable. Are there cases where they are not?

Do put a note on it that it is LV2010 specific and does not cover the case where source and obj files are saved separately.

Will do. Thanks for the feedback.


So I thought I'd reopen this discussion, as I searched the forum for using classes as clusters. I'm working on a state machine which does the typical thing of passing constants, inside a cluster, around on a shift register. I've read that mimicking this behaviour using classes is a good way to start getting into OOP. I've broken up all the constants into categories that I think make sense (such as FPGA initialisation constants and Messenger Queue Refs). What is the best way of passing these classes on one wire, similar to the way the data cluster is one wire? Conceptually, it's nice to break the constants up into areas of responsibility and initialise them using methods and such, but I don't want to pass a class wire for every set of constants. I also don't want to group all my constants into one class, as it seems yucky to me. What's the best solution?

Sure, you can't see it update in this case, but it is still allocated memory, e.g. to give you the option to cut and paste the data, etc...
Yes, but why? I mean, I can see why FP controls that change their appearance need a copy of their data, but if the control doesn't change appearance, why does the FP control need any data at all (other than type information, obviously)? Actually, that might be an interesting idea for debugging dynamic dispatch calls - if the FP controls and indicators changed appearance depending on the runtime type of object they were being passed, then there'd be a good reason to pay for getting a copy of the data...

And I suspect -- but COULD be wrong -- that this is because LV is "really" byval (meaning dataflow optimized) and not byref, so inplaceness counts in many, many ways...


So I thought I'd reopen this discussion, as I searched the forum for using classes as clusters. I'm working on a state machine which does the typical thing of passing constants, inside a cluster, around on a shift register. I've read that mimicking this behaviour using classes is a good way to start getting into OOP. I've broken up all the constants into categories that I think make sense (such as FPGA initialisation constants and Messenger Queue Refs). What is the best way of passing these classes on one wire, similar to the way the data cluster is one wire? Conceptually, it's nice to break the constants up into areas of responsibility and initialise them using methods and such, but I don't want to pass a class wire for every set of constants. I also don't want to group all my constants into one class, as it seems yucky to me. What's the best solution?

AlexA,

If you want a single or reduced set of wires, you could make a cluster of the classes you have identified, or even a class with the set of classes you identified as its private data. I find the latter a bit overkill, as you will need lots of accessor (read/write) VIs. So I would vote for making a cluster of classes.
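
As a rough text-language analogue of the "cluster of classes" idea, here is a hypothetical Python sketch. FPGAInitConstants, MessengerQueueRefs and StateMachineData are invented names standing in for your categories; the aggregate is just a lightweight container that rides the single wire, while each state only touches the piece it is responsible for.

```python
# Hypothetical Python analogue of the "cluster of classes" suggestion:
# group the per-responsibility objects in one lightweight container and pass
# that single value around, instead of one big flat cluster of constants.
from dataclasses import dataclass, field

@dataclass
class FPGAInitConstants:              # one area of responsibility
    clock_rate_hz: int = 40_000_000
    fifo_depth: int = 1024

@dataclass
class MessengerQueueRefs:             # another area of responsibility
    command_queue: object = None
    response_queue: object = None

@dataclass
class StateMachineData:               # the "cluster of classes": the single wire
    fpga: FPGAInitConstants = field(default_factory=FPGAInitConstants)
    queues: MessengerQueueRefs = field(default_factory=MessengerQueueRefs)

def init_queues(data: StateMachineData) -> StateMachineData:
    """A state that only touches the piece it is responsible for."""
    data.queues.command_queue = []    # stand-in for obtaining a real queue ref
    return data

state = init_queues(StateMachineData())
```

The container itself needs no accessors of its own, which is exactly the overkill you avoid by not wrapping everything in one more class.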

Kurt

