
Performance on different CPUs



Hi LAVA,

I need your help!

I've recently started updating a system from LV8.6 to 2011SP1, and have ended up confused. I deploy the system as executables on two different machines running Ubuntu Linux: one a laptop with a single processor, the other a panel PC with two processors.

What happens is that on the first, single-processor computer I see a dramatic fall in CPU usage, while the other shows a dramatic rise. The computers do not have LV installed, only the run-time engines (RTEs).

Machine1 (1* Intel® Celeron® CPU 900 @ 2.20GHz):

CPU% with LV8.6: 63%

CPU% with 2011SP1: 39%

Machine2 (2* Genuine Intel® CPU N270 @ 1.60GHz):

CPU% with LV8.6: 40%

CPU% with 2011SP1: 102%

On the second machine the maximum CPU is 200% since it has two CPUs. The load seems to be pretty even between the CPUs.

Why is this happening, and what should I do to get the CPU usage down on machine2 (the one being shipped to customers)?

/Martin


The simple answer is that clock speed is not the only contributing factor to performance.

In either case, it appears your application needs some attention with regard to efficient programming techniques. My first step would be to run the program in your development environment and use the Profiler (Tools -> Profile -> Performance and Memory) to find which parts of the program are consuming the most resources.

~Dan


More fundamentally, you might want to add some code to profile intensive sections of the software. You don't include any information about its specific function, so I can't recommend anything to you off-hand. You should have intimate knowledge of what your process will be doing at the times when you're grabbing these figures, so analyzing those routines should tell you what some potential sources might be.
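
For what it's worth, the kind of ad-hoc instrumentation meant here is just timestamping around a suspect section and averaging; in LabVIEW you would typically wrap the section with Tick Count (ms) readings. Below is a rough C sketch of the same pattern (process_frame is only a placeholder, not anything from your application):

#include <stdio.h>
#include <time.h>

/* Monotonic timestamp in milliseconds. */
static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

/* Placeholder for one pass of the suspect routine. */
static void process_frame(void)
{
    /* ... the code you want to time ... */
}

int main(void)
{
    const int runs = 1000;
    double total = 0.0;
    for (int i = 0; i < runs; i++) {
        double t0 = now_ms();
        process_frame();
        total += now_ms() - t0;
    }
    printf("average per pass: %.3f ms over %d runs\n", total / runs, runs);
    return 0;
}

Comparing the per-pass cost of the same section under 8.6 and 2011SP1 should show whether one subsystem accounts for most of the increase.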

I wouldn't really expect the change in CPU usage you see between versions, but it could be that some nodes/VIs you're depending on have changed, so reviewing your code (especially having jumped 3.5 major versions) would be prudent.

One thing to do would be to swap back to the source code and use the built-in profiler to see which VIs are racking up the most clock time. (Dan already suggested this.)


My 2¢:

1: In LabVIEW 2011, in the build specification's Advanced page, turn off SSE2.

2: In LabVIEW 2011, in the build specification's Advanced page, check "Use LabVIEW 8.6 file layout".

3: Try other combinations of 1 & 2.

The other thing I noted is that your second CPU is an N270, which I believe is a netbook Atom. It may handle math and double/single-precision operations differently. I can't see why that would change from 8.6 to LV2011, but it may have something to do with SSE2 optimization; looking at Wikipedia, both the Celeron and the Atom support it.
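
If it helps, you can confirm SSE2 on the actual machines rather than on Wikipedia: grep sse2 /proc/cpuinfo on Linux, or a tiny check program. A minimal sketch using the GCC/Clang feature-detection builtins (assumes an x86 build with one of those compilers):

/* Report whether the CPU we are running on supports SSE2 (GCC/Clang, x86). */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();   /* initialise the compiler's CPU feature data */
    if (__builtin_cpu_supports("sse2"))
        printf("SSE2: supported\n");
    else
        printf("SSE2: not supported\n");
    return 0;
}

Both chips mentioned above should report it, so if toggling the SSE2 build option changes nothing, the difference likely lies elsewhere.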

Just some ideas.


Thanks for your suggestions. Unfortunately, no luck so far in solving the problem.

To give a bit of background information:

The system communicates with a USB device through drivers written in C and called by the good old CINs. The data then goes through an algorithm and is presented in 4 charts using user events. More user events are triggered for GUI updates, but the GUI I use now doesn't listen to those. The update rate is about 30 Hz. It uses about 80 classes altogether, although many of these are for administrative use (user accounts, printing etc.) and quite a few are wrappers of different kinds. Slightly more than 2000 VIs are loaded.

1: In LabVIEW 2011, in the build specification's Advanced page, turn off SSE2.

2: In LabVIEW 2011, in the build specification's Advanced page, check "Use LabVIEW 8.6 file layout".

3: Try other combinations of 1 & 2.

The other thing I noted is that your second CPU is an N270, which I believe is a netbook Atom. It may handle math and double/single-precision operations differently. I can't see why that would change from 8.6 to LV2011, but it may have something to do with SSE2 optimization; looking at Wikipedia, both the Celeron and the Atom support it.

SSE2 has been tested back and forth without any change. I use the 8.6 file layout, as some code relies on it.

I've put the program into different states and compared the CPU usage between 8.6 and 2011SP1 to see if I can nail down any specific parts of my code that would cause the increase (relative increases computed as in the note after the list):

* With the drivers switched off, the increase in CPU usage is 65% (relative).

* Starting the drivers, still about 65%

* Starting the algorithms and GUI updates gives more than 100% increase (I can't separate those two yet).

* Stopping the GUI updates, i.e. not listening to any of the user events for GUI updating, also gave more than a 100% increase, although the overall CPU usage dropped more than I would have expected in both 8.6 and 2011SP1.
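
(Reading "(relative)" as the increase measured against the corresponding 8.6 figure, i.e. relative increase = (CPU with 2011SP1 - CPU with 8.6) / CPU with 8.6. Applied to the headline numbers for machine 2, that would be (102% - 40%) / 40% ≈ 155%.)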

I've run the application on my development machine, which also has two CPUs; it shows better performance with 2011SP1 than with 8.6, as on machine1 above.

So the conclusion of this would be that everything takes up more CPU on this specific computer with 2011SP1, and that the algorithms take up even more CPU power. Further suggestions or crazy ideas on why I see this behaviour are welcome. I need coffee.

/Martin


Do you have a bunch of loops which don't have a 0 ms wait in them to yield the CPU? Acceptable alternatives are event structures, ms interval nodes, or most any NI-built node which includes a timeout terminal.

For loops or while loops without a node like these will be free-running, executing literally as fast as the scheduler will allow, which is often not necessary and detrimental to the performance of the application as a whole.
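
To illustrate the point outside of LabVIEW, here is a rough C sketch of the difference, where the nanosleep call plays the role of a Wait (ms) node (timings and counts are only illustrative): the zero-wait variant pegs a core while doing nothing useful, the 1 ms variant barely registers.

#include <stdio.h>
#include <time.h>

/* Monotonic time in seconds. */
static double seconds_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Run a polling loop for two seconds; sleep_ms == 0 means free-running. */
static long run_loop(int sleep_ms)
{
    struct timespec wait = { .tv_sec = 0, .tv_nsec = sleep_ms * 1000000L };
    double end = seconds_now() + 2.0;
    long iterations = 0;
    while (seconds_now() < end) {
        /* ...check a flag, poll a queue, etc... */
        iterations++;
        if (sleep_ms > 0)
            nanosleep(&wait, NULL);   /* yields the CPU, like a Wait (ms) */
    }
    return iterations;
}

int main(void)
{
    printf("free-running loop:   %ld iterations in 2 s (one core at ~100%%)\n",
           run_loop(0));
    printf("loop with 1 ms wait: %ld iterations in 2 s (negligible CPU)\n",
           run_loop(1));
    return 0;
}

In LabVIEW terms the fix is simply a Wait (ms) or Wait Until Next ms Multiple inside the loop, or a structure with a timeout, exactly as described above.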


Do you have a bunch of loops which don't have a 0 ms wait in them to yield the CPU? Acceptable alternatives are event structures, ms interval nodes, or most any NI-built node which includes a timeout terminal.

For loops or while loops without a node like these will be free-running, executing literally as fast as the scheduler will allow, which is often not necessary and detrimental to the performance of the application as a whole.

No loops that I can think of. I looked through some parts just to make sure, and found one timeout event that should never fire (nothing connected to the timeout terminal), but adding a timeout there didn't make any difference. The fact that the CPU usage seems to be pretty consistently higher relative to that of the 8.6 application, with different parts of the code running, makes me think that it is the 2011SP1 run-time engine that is less efficient on this machine.

  • 7 months later...

Hello again folks!

 

A bit of a bump of an old thread here: I still haven't been able to solve this issue, and it remains with LV2012.

 

A short recap of the problem:

 

 

What happens is that on the first, single-processor computer I see a dramatic fall in CPU usage, while the other shows a dramatic rise. The computers do not have LV installed, only the run-time engines (RTEs).

Machine1 (1* Intel® Celeron® CPU 900 @ 2.20GHz):
CPU% with LV8.6: 63%
CPU% with 2011SP1: 39%

Machine2 (2* Genuine Intel® CPU N270 @ 1.60GHz):
CPU% with LV8.6: 40%
CPU% with 2011SP1: 102%

On the second machine the maximum CPU is 200% since it has two CPUs. The load seems to be pretty even between the CPUs.

 

 

I've been trying to track this down again lately, and my suspicion now falls on hyperthreading, as this is one of the main differences between the computers.

Machine 2, with the CPU described above as 2* Genuine Intel® CPU N270 @ 1.60GHz, turns out to have a single-core CPU with hyperthreading enabled, whereas machine 1, with a 1* Intel® Celeron® CPU 900 @ 2.20GHz, does not use hyperthreading.
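
For anyone wanting to confirm that kind of topology on Linux, lscpu or /proc/cpuinfo will show it, and the sysfs topology files make it explicit. A small sketch; on a single-core chip with hyperthreading it should print something like "0-1" or "0,1", i.e. both logical CPUs belong to the same core:

/* Print which logical CPUs share a physical core with logical CPU 0. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list";
    char buf[64];
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    if (fgets(buf, sizeof buf, f) != NULL)
        printf("logical CPUs sharing a core with cpu0: %s", buf);
    fclose(f);
    return 0;
}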

 

I've tried most of the performance tricks in the book (turning off debugging, setting compiler optimization, etc.), to no avail beyond minor improvements.

 

Unfortunately we cannot turn off hyperthreading on machine 2; the option seems to be disabled in the BIOS. We've contacted the vendor and might be able to get hold of another BIOS in a few days if we're lucky. Machine 1 doesn't support hyperthreading.
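
Not a BIOS fix, but two OS-level workarounds might let the hypothesis be tested in the meantime: with root and a kernel built with CPU hotplug, the sibling can be taken offline (echo 0 > /sys/devices/system/cpu/cpu1/online), or the executable can simply be confined to one logical CPU with taskset -c 0, which a small launcher can also do. A rough C launcher sketch (the target path on the command line is whatever the built application is called):

/* Pin ourselves to logical CPU 0, then exec the target program; the child
   inherits the affinity mask, so it never runs on the hyperthread sibling.
   Usage: ./pin0 /path/to/your-app [args...]   (path is illustrative) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                         /* allow only logical CPU 0 */
    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    execvp(argv[1], &argv[1]);                 /* replace ourselves with the app */
    perror("execvp");
    return 1;
}

If CPU usage drops back toward the 8.6 numbers when the application is confined to one logical CPU, that would be fairly strong evidence for the hyperthreading theory.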

 

Has anyone ever run into problems like these with hyperthreading on Linux? Any idea what I can do to solve the issue, apart from buying new computers? Am I barking up the wrong tree in thinking this has anything to do with hyperthreading?

  • 2 years later...

I'm upgrading from LabVIEW 8 to LabVIEW 2014 and have the same problems, also using Linux/Ubuntu to run my application.

I found this article:

 

https://forums.ni.com/t5/LabVIEW/CPU-usage-rises-in-LabVIEW-executable/td-p/2194414

 

In my case it doesn't help to replace all the deprecated property nodes, but I think that's one problem related to the high CPU load...


