
LabVIEW's response time during editing becomes so long


Recommended Posts

Posted

I don't particularly like how that option is worded, but that's my understanding of what it's supposed to mean.

Damn, I was way off with my interpretation.

  • 11 months later...
Posted

Every time I have seen this problem, the computer did not have enough RAM and the swap file was heavily used. Upgrading the system RAM solved the problem. If you cannot upgrade the system RAM and have Windows 7, ReadyBoost can be used to augment it. I currently have 8 GB of RAM (I was running out at 6 GB).

  • 14 years later...
Posted
11 hours ago, Thomas Robertson said:

Just wanted to chime in on this zombie thread and say I have all of these problems and it's driving me crazy. LV2023, roughly 18,000 VIs in the project.

My projects can be on that order of size, and editing can be a real pain. I pointed out the Quick Drop (QD) responsiveness difficulties to Darren and he suggested looking for and removing circular dependencies in libraries and classes. I think it helped, but not by much. Going to PPLs isn't really an option since so many of the VIs are in reuse packages, and those packages are intended to be used across multiple targets, Windows and RT. This has a cascading effect: anything that links to them then needs to be a PPL built for that specific target, and the functions palette has to be target-specific to pull in the right edition of the dependencies. AQ mentioned a few techniques for target-specific VI loading, but I couldn't get them to work properly for the full project.

Posted (edited)

Hi

NI has always tried to 'optimize' the compiler so the code runs faster.

In LabVIEW 2009 they introduced a version where the compiler would do extra work to try to inline whatever could be inlined.

2009 was a catastrophe, with the compiler running out of memory on my complex code, and NI only saved their reputation by introducing the hybrid compiler in 2010 SP1. Overall, smooth sailing thereafter up to and including 2018 SP1.

NI changed something in 2015, but its effect could be ignored if the token EnableLegacyCompilerFallback=TRUE was included in the LabVIEW.ini file.
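
For anyone digging for it later: the token goes into the LabVIEW.ini file that sits next to the LabVIEW executable, under the standard [LabVIEW] section. A minimal sketch is below; your file will already contain many other tokens, and the token only has an effect in versions that still support the legacy compiler fallback.

[LabVIEW]
EnableLegacyCompilerFallback=TRUE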

In LabVIEW 2019 NI again decided to do something new. They ditched the hybrid compiler. It was too complex to maintain, they argued. 

2019 reminded me somewhat of the 2009 version, except that the compiler no longer ran out of memory; instead, editing code was very slow and sometimes LabVIEW simply crashed. NI improved things in the following versions, but it has yet to be snappy (read: usable) with my complex code.

Regards

 

 

Edited by Softball
clarification
Posted
58 minutes ago, Softball said:

Hi

NI has always tried to 'optimize' the compiler so the code runs faster.

In LabVIEW 2009 they introduced a version where the compiler would do extra work to try to inline whatever could be inlined.

2009 was a catastrophe, with the compiler running out of memory on my complex code, and NI only saved their reputation by introducing the hybrid compiler in 2010 SP1. Overall, smooth sailing thereafter up to and including 2018 SP1.

NI changed something in 2015, but its effect could be ignored if the token EnableLegacyCompilerFallback=TRUE was included in the LabVIEW.ini file.

In LabVIEW 2019 NI again decided to do something new. They ditched the hybrid compiler. It was too complex to maintain, they argued. 

2019 reminded me somewhat of the 2009 version, except that the compiler no longer ran out of memory; instead, editing code was very slow and sometimes LabVIEW simply crashed. NI improved things in the following versions, but it has yet to be snappy (read: usable) with my complex code.

Regards

 

I still use 2009 - by far the best version. Fast, stable and quick to compile. 2011 was the worst and 2012 not much better. If they had implemented a lenient JSON primitive instead of the strict one we got, I would have upgraded to 2013.

Posted

Hi again

The topic has been discussed in the past and is even mentioned in the manual:

Discussion (even AQ was active in that thread back in 2016):

https://forums.ni.com/t5/LabVIEW/Compiler-optimisations-and-IPE/m-p/3302614

Manual:

https://www.ni.com/docs/en-US/bundle/labview/page/choosing-between-editor-responsiveness-and-vi-execution-speed.html

 

My story about a slow editing experience, and what was done about it over time, may help others either avoid creating large systems or stick with the LabVIEW versions that can handle them.

Regards

Posted

After consulting some co-workers, I reverted recent changes to my project and now everything is fast again. They reported similar observations when they had similar problems. Our theory is that as things change, something in the project gets out of whack (maybe its sense of dependencies, I don't know) and gets saved into the project file. Anyway, I'm back to normal-speed editing.

As to breaking apart the 18K VI project, we've discussed it for a decade now. We have plans to try to decouple one part of the code from the rest, because it seems to be a major culprit for why everything relies on everything else. But this decoupling effort feels like trying to boil the ocean. We're dealing with a 30-year-old code base, and a lot of the communication and subVI calling was defined long before there were libraries and public/private concepts. If we can ever get things more isolated we might consider PPLs. I feel like those had issues with cross-platform compatibility (Windows/Mac), though.

Posted
7 minutes ago, Thomas Robertson said:

As to breaking apart the 18K VI project, we've discussed it for a decade now. We have plans to try to decouple one part of the code from the rest, because it seems to be a major culprit for why everything relies on everything else. But this decoupling effort feels like trying to boil the ocean.

Yes, decoupling is a long process. I have been trying to do that with the Icon Editor. I'm not trying right now, as other priorities with that project are more important. Do the process slowly and make deliberate efforts; you will eventually see the benefits.

Posted
19 hours ago, crossrulz said:

Yes, decoupling is a long process. I have been trying to do that with the Icon Editor. I'm not trying right now, as other priorities with that project are more important. Do the process slowly and make deliberate efforts; you will eventually see the benefits.

Ooooh. What have you been doing with the icon editor?

Posted (edited)
21 hours ago, Thomas Robertson said:

After consulting some co-workers, I reverted recent changes to my project and now everything is fast again. They reported similar observations when they had similar problems. Our theory is that as things change, something in the project gets out of whack (maybe its sense of dependencies, I don't know) and gets saved into the project file. Anyway, I'm back to normal-speed editing.

One thing I have seen in the past wreaking real havoc with the LabVIEW editor and/or compiler is circular dependencies. They are very easy to end up with, even in moderately sized projects, if you use globals, and for large projects they are practically unavoidable without a proper design that avoids globals almost entirely, except in very carefully chosen places. The LabVIEW editor/precompiler does pretty much a full pass over the internal data graph for every edit operation. With circular dependencies the graph becomes effectively infinite in length, and while the system has checks in place to detect such circular references and abort the parsing at some point, it seems unable to do that safely on the first occurrence without missing some paths, so it goes on longer than is usually necessary.

The first sign usually shows up as a frequent inability to build the project without obscure errors, especially for real-time targets. Things stay OK much longer for builds on Windows, but drop the project code onto a real-time target and builds and/or deploys will cause all kinds of hard-to-explain errors.
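
LabVIEW itself won't list the cycles for you, but if you can export a plain caller -> callee list from the project (for example with VI Server scripting, or by parsing the library and class files), a simple depth-first search will point out the circular chains. Below is a minimal sketch in Python; the input file format and all names are made up for illustration, one "Caller.vi -> Callee.vi" pair per line.

# find_cycles.py - report circular dependency chains in a caller -> callee list.
# Expected input, one edge per line:  Foo.lvclass:Do.vi -> Bar.lvlib:Helper.vi
from collections import defaultdict
import sys

def load_edges(path):
    graph = defaultdict(list)
    with open(path) as f:
        for line in f:
            if "->" in line:
                caller, callee = (s.strip() for s in line.split("->", 1))
                graph[caller].append(callee)
    return graph

def find_cycles(graph):
    cycles = []
    state = {}   # node -> "visiting" while on the stack, "done" afterwards
    stack = []

    def visit(node):
        state[node] = "visiting"
        stack.append(node)
        for nxt in graph.get(node, []):
            if state.get(nxt) == "visiting":
                # Back edge: everything from nxt to the top of the stack is a cycle.
                cycles.append(stack[stack.index(nxt):] + [nxt])
            elif nxt not in state:
                visit(nxt)
        stack.pop()
        state[node] = "done"

    for node in list(graph):   # very deep call chains may need a higher recursion limit
        if node not in state:
            visit(node)
    return cycles

if __name__ == "__main__":
    for cycle in find_cycles(load_edges(sys.argv[1])):
        print(" -> ".join(cycle))

Any output at all is a hint of where to start untangling; an empty output means the exported part of the dependency graph is acyclic.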

Quote

As to breaking apart the 18K VI project, we've discussed it for a decade now. We have plans to try to decouple one part of the code from the rest, because it seems to be a major culprit for why everything relies on everything else. But this decoupling effort feels like trying to boil the ocean. We're dealing with a 30-year-old code base, and a lot of the communication and subVI calling was defined long before there were libraries and public/private concepts. If we can ever get things more isolated we might consider PPLs. I feel like those had issues with cross-platform compatibility (Windows/Mac), though.

An 18k VI project! That's definitely a project that has grown into a mega pronto dinosaur monster. I can't imagine even considering creating such a beast. My biggest projects were probably somewhere around 5000 VIs, and that was already getting very painful to work on. It eventually caused me to modularize it, with parts moved into real-time targets. The cost of the additional hardware was actually smaller than the time lost trying to keep getting the monster to build and work, even though NI real-time hardware is anything but cheap.

But in the long-ago past I inherited a project that consisted of maybe only 100 VIs. However, it had a main VI that was something like 15 MB in size (the other VIs were mostly just simple accessors to drivers and ... shudder ... several dozen global variables). The main VI was one huge loop with sequence structures inside case structures, inside loops, inside more sequence structures, inside even more case structures and loops, and this continued for a few more levels like that. Not one shift register; everything was put in globals and written and read back hundreds of times. Editing that VI was a painful exercise: select a wire or node, wait 5 seconds, move the wire or node, wait 5 seconds ... . I have no idea how the original developer ever got it to that point without going insane, but more likely he was insane to begin with. 😀

I was busy for several days just getting the diagram cleaned up a bit: adding some shift registers to manage the actual data more efficiently, identifying common code constructs that appeared over and over all over the place and putting them into subVIs, and getting everything to a state that was reasonably workable before I could really go and refactor that application. In the end I had maybe 500 or so VIs and a main VI that was well below 1 MB, with a proper state machine and almost no sequence structures anymore. And it ran reliably, and when you pushed the stop button you did not have to wait half an eternity before the application was able to detect that. The biggest irony was that the application actually was working with an enum state of some 100 or more states, maintained in a global, and in almost every sequence frame there was a case structure with one or a few cases for a specific state and a default that did pretty much nothing. It was a state machine turned inside out and then put into a cascade of sequences!
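
Block diagrams don't translate to text, but the shape of that refactoring is easy to show: instead of every sequence frame re-reading a global state enum and deciding whether it has anything to do, one loop dispatches on the state and carries its data in shift registers. A rough Python sketch of the "proper" shape, with invented state names, purely for illustration:

from enum import Enum, auto

class State(Enum):
    INIT = auto()
    ACQUIRE = auto()
    ANALYZE = auto()
    STOP = auto()

def read_instrument():
    return 0.0            # placeholder for the real acquisition

def done(data):
    return len(data) >= 10

def run():
    state = State.INIT    # in LabVIEW this lives in a shift register, not a global
    data = []             # ditto for the accumulated data
    while state is not State.STOP:
        if state is State.INIT:          # the single dispatch point = the case structure
            data = []
            state = State.ACQUIRE
        elif state is State.ACQUIRE:
            data.append(read_instrument())
            state = State.ANALYZE
        elif state is State.ANALYZE:
            state = State.STOP if done(data) else State.ACQUIRE
    return data

The inside-out version spreads those branches across dozens of sequence frames, each one polling the global; pulling them back into a single dispatch loop is essentially what shrank that 15 MB VI.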

Edited by Rolf Kalbermatter
Posted
39 minutes ago, Rolf Kalbermatter said:

An 18k VI project! That's definitely a project that has grown into a mega pronto dinosaur monster.

You may say that, but ECL alone is about 1400 VIs. If each DLL export is a VI, then realising the entire export table of some DLLs can create hundreds of VIs alone.

However, I think the OP's project is probably OOP. Inheritance and composition balloon the number of VIs exponentially, especially if you stick to strict OOP principles.

Posted (edited)
53 minutes ago, ShaunR said:

However, I think the OP's project is probably OOP. Inheritance and composition balloon the number of VIs exponentially, especially if you stick to strict OOP principles.

Well, if you stick to strict OOP principles, modularizing it through a plugin mechanism or similar should be fairly easy to do!

It takes a bit of time to create the necessary plugin mechanisms, and it usually takes at least three iterations before you end up with something that really works, but that is still orders of magnitude easier than waiting for an 18k VI project to load every time and falling asleep between edit operations.
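
For anyone who hasn't built one: in LabVIEW such a plugin mechanism usually means dynamically loading classes from a plugin folder and calling them through a common parent class. The same shape in a Python sketch, where the folder name, the create()/run() interface and the module layout are all invented for illustration:

# plugin_host.py - minimal dynamic-loading sketch: the host only knows the
# plugin folder and the interface, never the concrete implementations.
import importlib.util
import pathlib

PLUGIN_DIR = pathlib.Path("plugins")   # hypothetical folder of *.py plugin files

def load_plugins():
    plugins = []
    for path in sorted(PLUGIN_DIR.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Each plugin exposes a create() factory returning an object with a
        # run() method -- the equivalent of a common parent class in LabVIEW.
        plugins.append(module.create())
    return plugins

if __name__ == "__main__":
    for plugin in load_plugins():
        plugin.run()

The main application then only loads the host and the interface; the bulky implementations stay out of memory, and out of the editor's dependency graph, until they are actually needed.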

Quote

You may say that, but ECL alone is about 1400 VIs. If each DLL export is a VI, then realising the entire export table of some DLLs can create hundreds of VIs alone.

That's one more reason why I usually have a wrapper shared library that adapts the original shared library's interface impedance to the LabVIEW Call Library interface impedance. 😀

Edited by Rolf Kalbermatter
