
Hello,

I am using the variantconfig.llb for saving/loading configuration files.
Unfortunately, these configuration files have become very large: 10,000 lines or more, with hundreds of sections.

Loading these files takes 15-30 seconds. The workflow:

- Get all section names with "Get Section Names.vi" (this is very fast).
- A for-loop calls "Read Section Cluster_ogtk.vi" on each section name (the Profiler shows that this is where most of the time is spent). Depending on the section name, I select the cluster type (see the sketch below).
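
In text form, that loop is roughly the following sketch (Python, with configparser standing in for the OpenG variant config VIs; the section-name test and the "cluster" conversion are hypothetical):

import configparser

def parse_motor(items):
    # Hypothetical "cluster type": turn one section's strings into typed fields.
    return {"speed": float(items.get("speed", "0")), "enabled": items.get("enabled") == "TRUE"}

config = configparser.ConfigParser()
config.read("settings.ini")                  # the whole file is parsed here

settings = {}
for section in config.sections():            # ~ "Get Section Names.vi" (fast)
    items = dict(config.items(section))      # ~ per-section read, the slow step in my case
    if section.startswith("Motor"):          # hypothetical: pick the cluster type by name
        settings[section] = parse_motor(items)
    else:
        settings[section] = items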

 

Do you have any hints on how I can speed up the process?

Thanks!

profile.PNG


It depends on the structure of your data. The biggest hit reported by most users is when reading an array of clusters.

http://forums.ni.com/t5/LabVIEW/Saving-Clusters-to-a-Config-File-Using-Variants/td-p/1586500

There is a VI collection called Read/Write Anything from Moore Good Ideas that might help.

http://www.mooregoodideas.com/readwrite-anything-vis/

With such a large data file, you may want to consider switching to a different format, like XML or maybe even a database.

  • Buy an SSD?
  • Only load the sections that are not the same as the LabVIEW in-memory defaults?
  • Only load what you need when you need it (just-in-time config)? (a sketch of this idea follows the list)
  • Split out into multiple files? (diminishing returns)
  • Refactor to use smaller configurations?
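
A minimal sketch of the just-in-time idea, assuming a dict-style cache in front of the parsed file (Python; the file name and section names are hypothetical):

import configparser

class LazyConfig:
    # Parse the file once, but only convert a section the first time it is asked for.
    def __init__(self, path):
        self._ini = configparser.ConfigParser()
        self._ini.read(path)
        self._cache = {}

    def section(self, name):
        if name not in self._cache:          # convert on first use only
            self._cache[name] = dict(self._ini.items(name))
        return self._cache[name]

cfg = LazyConfig("settings.ini")
# Only the sections the application actually touches get converted, e.g.:
# motor = cfg.section("Motor 1")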

Let's face it: 15-30 seconds to load a 10,000-line INI file is pretty damned good for a convenience function where you are probably stuffing complex clusters into it. What is your target time?


OpenG was written before proper recursion existed in LabVIEW, and uses (slow) VI-Server calls to dive into clusters.  MGI is more recent and uses proper recursive calls, so should be significantly faster.  

Personally, I use JSON for configuration files.
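
For illustration, a nested configuration maps directly onto JSON objects and lists, and the whole file is written and read in one call each (a Python sketch; the field names are hypothetical):

import json

defaults = {
    "Motor 1": {"speed": 100.0, "enabled": True, "limits": [0.0, 250.0]},
    "Motor 2": {"speed": 80.0, "enabled": False, "limits": [0.0, 250.0]},
}

with open("settings.json", "w") as f:
    json.dump(defaults, f, indent=2)         # one call writes the whole structure

with open("settings.json") as f:
    settings = json.load(f)                  # one call reads it back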


Like I said in the thread Philip linked to (there should be some actual numbers in one of my older threads on LAVA, but I don't feel like looking it up), in my testing the main slowdown in the OpenG VIs came from the underlying NI VIs, and switching to the MGI VIs gave a performance difference of ~10-20X.

The basic functional difference between the two libraries is that the NI INI VIs let the OpenG library handle existing files and modify items within them, whereas the MGI VIs simply overwrite the entire file. At least, that was the case ~8-9 years ago, when I did this. Since then many things have changed (like the internal structure of the NI INI VIs).


Yup, another vote for MGI. I believe it does read/write the whole file, so some trickery might be needed to make it as modular as the OpenG calls.

The reason it is slow is that the OpenG functions call the NI INI functions under the hood, and each line in the text file becomes a call to the NI function, whereas MGI wrote their own INI parsing code, so reading and writing is just one call to a file I/O primitive. Basically, if your file is around 5 KB or so, either will work fine, but if you have clusters of arrays of clusters and your output file is larger than that, you'll end up with slowdowns. Another option I used in the past was to flatten large structures into binary blobs. This means the OpenG INI write only needs one call, because the binary section is just one line. Of course, the downside is that the data type won't be human readable in the file, and changes to the data type will cause errors, whereas the plain INI representation does a decent job of preserving the data across such changes.
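
A rough sketch of that flatten-to-blob idea, with Python's pickle and base64 standing in for LabVIEW's Flatten To String (the section, key, and structure are hypothetical):

import base64, configparser, pickle

# Hypothetical large structure: an array of "clusters".
channels = [{"name": "ch%d" % i, "gain": 1.0, "offset": 0.0} for i in range(500)]

config = configparser.ConfigParser()
config["Channels"] = {
    # One opaque line instead of hundreds of keys.
    "blob": base64.b64encode(pickle.dumps(channels)).decode("ascii"),
}
with open("settings.ini", "w") as f:
    config.write(f)                          # the whole section is a single write

# Reading back is one key lookup plus one unflatten, but the section is no
# longer human readable and breaks if the data type changes.
config.read("settings.ini")
restored = pickle.loads(base64.b64decode(config["Channels"]["blob"]))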


If I remember correctly, the NI INI code reads the whole file when you open it, but it keeps all the values in a single array (no variant-attribute lookup), so random access is really slow. I've seen similar performance issues with the OpenG, MGI, and JSON libraries (on different scales, of course), and I've also seen it with the LabVIEW XML and JSON functions. I personally think the only real right answer is:

Quote

Let's face it: 15-30 seconds to load a 10,000-line INI file is pretty damned good for a convenience function where you are probably stuffing complex clusters into it.
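
(To illustrate the random-access point above with a sketch, and only as a guess at the behaviour rather than the actual NI implementation: looking a key up in a flat list of pairs is a linear scan, while a dictionary, the rough equivalent of a variant-attribute lookup, is close to constant time.)

# Flat list of (key, value) pairs: every random access scans from the start.
pairs = [("key%d" % i, i) for i in range(10000)]

def lookup_list(key):
    for k, v in pairs:                       # O(n) per lookup
        if k == key:
            return v
    return None

# Dictionary (variant-attribute style): hashed lookup, roughly O(1).
table = dict(pairs)

def lookup_dict(key):
    return table.get(key)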

 

 



  • Similar Content

    • By Taylorh140
      I have never gotten the performance that I desire out of the 2D picture control. I always think that it should be cheaper than using controls since they don't have to handle user inputs and click events etc. But they always seem to be slower.
      I was wondering if any of the wizards out there had any 2d picture control performance tips that could help me out?
      Some things that come to mind as far as questions go:
      Is the conversion to/from a pixmap and a picture costly?
      Why does the picture control behave poorly when in a shift register?
      What do the "Erase first" settings cost performance-wise?
      Anything you can think of that is a bad idea with picture controls?
      Anything you can think of that is generally a good idea with picture controls?
    • By madskg
      I'm using a LabVIEW shared library (DLL) to communicate between a C# program (made by another company) and a LabVIEW executable (which means different processes) on the same PC. Currently I'm using network-published shared variables to communicate between the LabVIEW DLL and the LabVIEW program (both made by me), which works well except for the performance.
       
      Each time the DLL is called it needs to connect to the shared variable, which takes between 50 and 300 ms. When it is connected, the data transfer is instant. I have tried to use the PSP "Open Variable Connection In Background", which is a bit faster, because it doesn't wait to verify the connection. But it still adds some overhead.
       
      I have also tried to use notifiers from this example: https://lavag.org/topic/10408-communication-between-projects/ . Opening connection and sending the notifier takes 50 - 100 ms.
       
      I guess both the notifier and the shared variables are "slow" because they use network communication, even though both programs are running on the same PC (localhost).
       
      Do any of you know of a faster method of communicating between a program that runs continuously (connection open constantly) and one that is only executed when new data is ready (connection "re"-opened on every instance)?
       
      Thanks in advance.
       
      Best Regards
      Mads
       
       
       
       
       
       
    • By Manudelavega
      Before making the switch from LV2011 to LV2014, I ran the exact same test with the two versions (2011 and 2014) of my application. I recorded the CPU usage and discovered a huge deterioration in LV2014.
       
      Is anybody aware of any change between LV2011 and LV2014 that could impact performance like this?
       
      I should mention that the unit on the Y-scale is %CPU and the X-scale is MM:SS

    • By bigjoepops
      I am trying to create a code section that will take a 1D array and create a moving average array.  Sorry if this is a bad description.  I want to take x elements of the input array, average them, and put that average in the first element of a new array.  Then take the next x elements, average them, and put them as the second element of the new array.  I want this done until the array is empty.
       
      I have two possible ways to do it, but neither runs as fast as I would like. I want to see if anyone knows of a faster way to do this averaging (a rough sketch of the averaging I described follows below).
       
      Thanks
      Joe
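
      For reference, the block average described above, dropping any trailing partial group, is roughly this numpy sketch (the chunk size and data are just examples):

import numpy as np

def block_average(data, x):
    # Average consecutive, non-overlapping groups of x elements.
    data = np.asarray(data, dtype=float)
    n = (len(data) // x) * x                 # keep only whole groups of x
    return data[:n].reshape(-1, x).mean(axis=1)

# Groups of 4: [1, 2, 3, 4] -> 2.5 and [5, 6, 7, 8] -> 6.5
print(block_average([1, 2, 3, 4, 5, 6, 7, 8], 4))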


    • By Oakromulo
      After two years "leeching" content every now and then from the Lava community I think it's time to contribute a little bit.
      Right now, I'm working on a project that involves lots of data mining operations through a neurofuzzy controller to predict future values from some inputs. For this reason, the code needs to be as optimized as possible. With that idea in mind I've tried to implement the same controller using both a Formula Node structure and Standard 1D Array Operators inside an inlined SubVI.
      Well... the results were impressive to me. I had thought the SubVI with the Formula Node would perform a little better than the one with standard array operators; in fact, it was quite the opposite. The inlined SubVI was consistently around 26% faster.
      Inlined Std SubVI

      Formula Node SubVI

      evalSugenoFnode.vi
      evalSugenoInline.vi
      perfComp.vi
      PerfCompProject.zip