KoBe Posted April 21, 2010

Hi people, it's me again. I'm still working on XNodes in LabVIEW 8.6. I have XNodes that include other XNodes; the hierarchy is four levels deep, and it works. Adaptation of inputs works for the whole hierarchy when I change an input type of the XNode at the top of the hierarchy.

Problem: bad edit-time performance. Even if I don't work on the XNode directly but only on the same block diagram, the LabVIEW environment gets slower and slower depending on the depth of the hierarchy. My guess (I haven't debugged it yet) is that this happens because the child XNodes are updated every time their parent XNode is updated. In my case that means 100% processor load for 10 to 15 seconds, and RAM consumption rises from the normal 200 MB to above 600 MB, dropping back to 200 MB after those 15 seconds. Time and memory consumption will vary with the scripted routines.

My preferred solution: update an XNode (and all its sub-XNodes) only when I work on the XNode directly, not when I merely edit the same diagram without changing anything that influences the XNode. Or: update only on double click.

Questions: How can I prevent ability VIs from running? Which ability VI is called when I change something in my block diagram? How could an ability VI recognize whether a change on the diagram containing the XNode affects the XNode or not?

Thanks, and looking forward to your input :-)
gb119 Posted April 22, 2010

This sounds like the XNodes are being rescripted on each type propagation, and I guess that could get very slow with XNodes embedding XNodes. Do you get any improvement in behaviour if you keep track of the types of your inputs and only run the GenerateCode ability when the type actually changes (rather than whenever the inputs are valid)?
KoBe Posted April 22, 2010

I'm not sure I get your point. Example: placing my XNode on the diagram triggers the AdaptToInputs ability VI of an XNode three hierarchical levels lower about ten times or more; I checked that with a breakpoint. AdaptToInputs checks whether the input type is the same or different; if different, it runs GenerateCode, otherwise it does nothing. Is that what you meant?

But it seems the type checking itself causes a lot of the trouble. For example, with arrays of clusters or any insane combination of data types I first have to generate default data for both the new input type and the existing type, and then compare those values for equality. The check involves VIs like Variant To Flattened String, some ugly self-made code that generates default data from the type descriptor, then Flattened String To Variant again, some loops, and so on. How could I compare just the data types more easily, or with less computational effort? I don't even know whether that is the only cause of the bad performance.
gb119 Posted April 22, 2010

Yes, that was exactly what I meant. I hadn't gotten round to trying this before, but the development version of my unbundle-and-unindex XNode now checks for the input type changing like this: [attached snippet]. It seems to work right (given about 30 seconds of testing!).

I'd be interested in knowing the most efficient way to compare variant types too; I end up doing this at various points in my code, often on critical paths.
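Since XNode abilities are LabVIEW VIs, the gate both posts describe can only be modelled here in text. The following Python sketch is purely illustrative (the function name, the state dictionary, and the byte-string type descriptors are all stand-ins for the XNode's state and for the type string that Variant To Flattened String returns): cache the flattened type descriptor of each input terminal and request GenerateCode only when a descriptor actually changes. Comparing type strings directly also sidesteps the expensive round trip of generating and flattening default data for both types.

```python
def adapt_to_inputs(state: dict, input_types: dict) -> bool:
    """Model of an AdaptToInputs ability that triggers code generation
    only when an input's type actually changes.

    state       -- persists between calls (stands in for the XNode state)
    input_types -- terminal name -> flattened type descriptor (bytes),
                   i.e. only the type string, not the data string
    Returns True when GenerateCode should run.
    """
    if state.get("types") == input_types:
        return False                     # same types: skip GenerateCode
    state["types"] = dict(input_types)   # remember the new types
    return True                          # request one GenerateCode pass
```

With a gate like this, the ten-plus type propagations that fire AdaptToInputs on every placement collapse into a single GenerateCode run per real type change.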
KoBe Posted April 22, 2010

I found out that the performance problem is due to Open VI Reference, which opens a template VI. This template VI contains the XNode's code; information like the icon image is also taken from it. The moment I open a template containing other XNodes, those all get initialized (I think), and that takes time.

I call Open VI Reference in GenerateCode, but also in the Image ability to get the current icon from the template for my XNode. The icon is also used in Help, and Help is called every time I move the mouse over the XNode on my diagram. I would therefore like to extract the icon image not via the VI reference's Get VI Icon as Image Data but directly from the binary data of my template .vit, to reduce the number of Open VI Reference calls. Do you know how to find the bytes containing the icon information of a VI / VIT?

Bye
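Another way to cut down the repeated Open VI Reference calls from the Image and Help abilities, independent of how the icon is ultimately read, is to memoize the icon per template path and invalidate the cache only when the file on disk changes. A minimal Python sketch of that idea (the `load_icon_from_template` callable is hypothetical and stands in for whatever expensive load is actually performed):

```python
import os

_icon_cache = {}   # template path -> (mtime, icon image data)

def get_icon(template_path, load_icon_from_template):
    """Return the cached icon for a template, reloading only when the
    file's modification time changes. `load_icon_from_template` stands
    in for the expensive Open VI Reference / icon extraction call."""
    mtime = os.path.getmtime(template_path)
    cached = _icon_cache.get(template_path)
    if cached and cached[0] == mtime:
        return cached[1]                       # cache hit: no VI load
    icon = load_icon_from_template(template_path)
    _icon_cache[template_path] = (mtime, icon)
    return icon
```

In LabVIEW terms the cache would live in the XNode's state or in a functional global, so mouse-over Help calls stop paying the template-load cost every time.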
Rolf Kalbermatter Posted April 22, 2010

That is really not so easy. An icon in fact consists of up to four resources: an icon-group resource, plus an icon resource for each of the three possible bit depths (1, 4, and 8 bits). For the principle, you'd best check an old Inside Macintosh volume, since LabVIEW binary files are more or less Macintosh OS 9 style resource files, with some minor modifications and of course many non-Macintosh resource types in them.

But since LabVIEW 8.2 there is a private application method to extract the icon of a VI without loading it: Get VI Icon.vi [attached]
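For anyone who still wants to go the raw-bytes route, the classic Mac OS resource-map layout that Inside Macintosh documents can be sketched as follows. This parses the textbook format only; LabVIEW's files are a modified variant (they start with an "RSRC" signature, among other differences), so treat this as the classic-Mac baseline rather than a working VIT parser:

```python
import struct

def list_resource_types(buf: bytes):
    """Walk a classic Mac OS resource map (per Inside Macintosh) and
    return {type_code: number_of_resources}. Icon data would live in
    types such as 'ICN#', 'icl4' and 'icl8'."""
    # File header: data offset, map offset, data length, map length
    data_off, map_off, _dlen, _mlen = struct.unpack_from(">4I", buf, 0)
    # Map header: 16 reserved + 4 next-handle + 2 file-ref + 2 attrs,
    # then 16-bit offsets (from map start) to the type and name lists
    type_list_off, _name_list_off = struct.unpack_from(">2H", buf, map_off + 24)
    tl = map_off + type_list_off
    (num_types_m1,) = struct.unpack_from(">H", buf, tl)  # stored as count-1
    types = {}
    for i in range(num_types_m1 + 1):
        entry = tl + 2 + 8 * i
        code, count_m1, _ref_off = struct.unpack_from(">4sHH", buf, entry)
        types[code.decode("mac_roman")] = count_m1 + 1
    return types
```

Each type-list entry also carries an offset to a reference list whose entries hold the resource ID and a 3-byte offset into the data section, which is where the actual icon bitmap bytes would be found.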
KoBe Posted April 22, 2010

Thanks Rolf, I was searching for exactly such a function. Performance has now improved drastically: adapting the inputs to a data type takes less than 2 seconds, as does placing the node on a diagram, and the mouse-over problem is gone. Thank you rolfk and gb119 for your help. I will clean up my VIs now :-)
gb119 Posted April 22, 2010

Is there a reason why you are using Open VI Reference rather than a static VI reference for the template? I'd have thought a static VI reference would mean you do all the loading of the templates when the XNode library first enters memory.
KoBe Posted April 23, 2010 (edited)

Yes: I want to reuse the same ability VI (Image.vi) for all my XNodes. By searching for a *.vit in the current VI's (Image.vi's) folder, I load the icon of the template. Each XNode has its own folder containing one or more templates, the ability VIs, and the XNode itself. That lets me reuse my own existing VIs (with their icons) as templates for an XNode, and the icon is loaded automatically; I don't even have to adapt the ability VI, because it always searches in the right place (its own folder) for the icon.

To create an XNode I normally:

1. Write or reuse a template
2. Copy the XNode and ability VIs into a new folder
3. Rename the XNode and save all ability VIs
4. Open the XNode with the XNode Manager and adjust version, icon and name
5. Adjust GetTerms3.vi
6. Adjust AdaptToInputs.vi

...and normally that's it. :-)

Additionally, for user-friendliness: the Help ability should have the same icon as the XNode, since that icon is displayed in the menu entry of your palette. You can provide a short help text by typing the description of your XNode into the description of the Help ability VI.

Edited April 23, 2010 by KoBe
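The folder-convention lookup that makes Image.vi reusable can be sketched in a few lines of Python (illustrative only; in LabVIEW the ability VI would get its own path from a VI-server property rather than a function argument):

```python
from pathlib import Path

def find_template(ability_vi_path):
    """Locate the XNode's template next to the ability VI itself, so the
    same Image.vi works unchanged when copied into any XNode folder.
    Returns the first *.vit in that folder, or None if there is none."""
    folder = Path(ability_vi_path).parent
    matches = sorted(folder.glob("*.vit"))
    return matches[0] if matches else None
```

Because the lookup is relative to the ability VI's own location, copying the folder and renaming the XNode (steps 2 and 3 above) needs no edits to Image.vi at all.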
gb119 Posted March 31, 2011

Interestingly, the palette showing the Help ability's icon is only true in LabVIEW >= 2009; in earlier versions the library icon was used in the palette. Oh well, that's what one gets for messing with unreleased features...