
Performance on different CPUs



Hi LAVA,

I need your help!

I've recently started updating a system from LV8.6 to 2011SP1, and have ended up confused. I deploy the system as executables on two different machines running Ubuntu Linux: one a laptop with a single processor, and the other a panel PC with two processors.

What happens is that on the first, single-processor computer I see a dramatic fall in CPU usage, while the second, in contrast, shows a dramatic rise. The computers do not have LV installed, only the RTEs.

Machine1 (1* Intel® Celeron® CPU 900 @ 2.20GHz):

CPU% with LV8.6: 63%

CPU% with 2011SP1: 39%

Machine2 (2* Genuine Intel® CPU N270 @ 1.60GHz):

CPU% with LV8.6: 40%

CPU% with 2011SP1: 102%

On the second machine the maximum CPU is 200% since it has two CPUs. The load seems to be pretty even between them.
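To compare the two machines on a common 0-100 scale, the top-style percentages (100% per logical CPU) can be normalized by the CPU count; a minimal sketch of the arithmetic:

```python
def absolute_load(reported_pct, logical_cpus):
    """Convert a top-style CPU% (100% per logical CPU) to a 0-100 scale."""
    return reported_pct / logical_cpus

# Machine2 reports 102% on a 200% scale (two logical CPUs):
print(absolute_load(102, 2))  # → 51.0 (vs 20.0 under LV8.6)
```

So in absolute terms machine2 went from 20% to 51% of its total capacity, while machine1 fell from 63% to 39%.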

Why is this happening, and what should I do to get the CPU usage down on machine2 (the one being shipped to customers)?

/Martin


The simple answer is that clock speed is not the only contributing factor to performance.

In either case, it appears your application needs some attention with regard to efficient programming techniques. My first step would be to run the program in your development environment and use the Profiler (Tools -> Profile -> Performance and Memory) to find which parts of the program are consuming the most resources.

~Dan


More fundamentally, you might want to add some code to profile intensive sections of the software. You don't include any information about its specific function, so I can't recommend anything off-hand. You should have intimate knowledge of what your process is doing at the times when you're grabbing these figures, so analyzing those routines should tell you what some potential sources might be.

I wouldn't really expect the change in CPU usage you see between versions, but it could be that some nodes/VIs you're depending on have changed, so reviewing your code (especially having jumped 3.5 major versions) would be prudent.

One thing to do would be to swap back to source code and use the built-in profiler to see which VIs are racking up the most clock time. (Dan already suggested this.)


My 2¢:

1: In LabVIEW 2011, in the build settings under Advanced, turn off SSE2

2: In LabVIEW 2011, in the build settings under Advanced, check "Use LabVIEW 8.6 file layout"

3: Other combinations of 1 & 2

The other thing I noted: your second CPU is an N270, which I believe is the netbook Atom kind. It may handle math and double/single-precision operations differently. I can't see why that would change from 8.6 to LV2011, but it may have something to do with SSE2 optimization. Looking at Wikipedia, both the Celeron and the Atom support it.

Just some ideas.


Thanks for your suggestions. Unfortunately, no luck so far in solving the problem.

To give a bit of background information:

The system communicates with a USB device through drivers written in C and called by the good old CINs. The data then goes through an algorithm and is presented in 4 charts using user events. More user events are triggered for GUI updates, but the GUI I use now doesn't care about those. The update rate is about 30 Hz. It uses about 80 classes altogether, although many of these are for administrative use (user accounts, printing, etc.) and quite a few are wrappers of different kinds. Slightly more than 2000 VIs are loaded.
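With chart data arriving via user events at ~30 Hz, one common way to bound GUI cost is to coalesce bursts of updates and redraw at most once per period, keeping only the latest data. A Python sketch of the coalescing idea (the function and its inputs are hypothetical, not the poster's code):

```python
def coalesced_redraws(event_times, min_period):
    """Count how many redraws a rate limiter performs for the given
    event timestamps (seconds). A redraw happens only if at least
    `min_period` has elapsed since the previous one, so bursts of
    update events collapse into a single screen update, the same idea
    as letting a lagging event structure keep only the newest data.
    """
    redraws = 0
    last = -float("inf")
    for t in event_times:
        if t - last >= min_period:
            redraws += 1
            last = t
    return redraws

# 100 events over 1 s (one every 10 ms), limited to a 1/30 s period:
times = [i / 100 for i in range(100)]
print(coalesced_redraws(times, 1 / 30))  # → 25
```

The answer is 25 rather than 33 because the 10 ms event grid quantizes the 33 ms minimum period up to 40 ms; the point is simply that the GUI does far fewer redraws than it receives events.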

1: In LabVIEW 2011, in the build settings under Advanced, turn off SSE2

2: In LabVIEW 2011, in the build settings under Advanced, check "Use LabVIEW 8.6 file layout"

3: Other combinations of 1 & 2

The other thing I noted: your second CPU is an N270, which I believe is the netbook Atom kind. It may handle math and double/single-precision operations differently. I can't see why that would change from 8.6 to LV2011, but it may have something to do with SSE2 optimization. Looking at Wikipedia, both the Celeron and the Atom support it.

SSE2 has been tested back and forth without any change. I use the 8.6 file layout, as some code relies on it.

I've set the program in different states and compared the CPU usage between 8.6 and 2011SP1 to see if I can nail down any specific parts of my code that would cause the increase:

* With the drivers switched off, the increase in CPU usage is 65% (relative).

* Starting the drivers: still about 65%.

* Starting the algorithms and GUI updates gives more than a 100% increase (I can't separate those two yet).

* Stopping the GUI updates, i.e. not listening to any of the user events for GUI updating, also gave more than a 100% increase, although the overall CPU usage dropped more than I would have expected in both 8.6 and 2011SP1.
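The "relative increase" figures above are just (new - old) / old; a quick helper for reproducing them (the numbers are taken from the post):

```python
def relative_increase_pct(old, new):
    """Percentage increase of `new` over `old`."""
    if old == 0:
        raise ValueError("old value must be non-zero")
    return (new - old) / old * 100

# e.g. machine2 overall: 40% under LV8.6 vs 102% under 2011SP1
print(round(relative_increase_pct(40, 102)))  # → 155
```

So a "65% relative increase" with drivers off means 2011SP1 used 1.65x the CPU that 8.6 did in the same state.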

I've run the application on my development machine, which also has two CPUs; it shows better performance using 2011SP1 than 8.6, as machine1 above does.

So the conclusion would be that everything takes up more CPU on this specific computer with 2011SP1, and that the algorithms take up even more CPU power. Further suggestions or crazy ideas on why I see this behaviour are welcome. I need coffee.

/Martin


Do you have a bunch of loops which don't have a 0 ms wait in them to yield the CPU? Acceptable alternatives are event structures, millisecond timing nodes, or almost any NI-built node which includes a timeout terminal.

For loops or while loops without a node like these will be free-running, executing literally as fast as the scheduler will allow, which is often unnecessary and detrimental to the performance of the application as a whole.
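The effect is easy to reproduce outside LabVIEW; here is a Python analogue of a free-running polling loop versus one with a short sleep to yield the CPU (the same role a Wait (ms) node plays, even with 0 ms):

```python
import time

def poll_until(stop_time, yield_cpu):
    """Poll the clock until `stop_time`, optionally sleeping each pass.

    Returns the CPU seconds consumed, measured with time.process_time().
    A free-running poll (yield_cpu=False) burns a full core for the whole
    wait; adding even a 10 ms sleep drops CPU use to near zero.
    """
    cpu_start = time.process_time()
    while time.monotonic() < stop_time:
        if yield_cpu:
            time.sleep(0.01)  # analogous to a Wait (ms) node in the loop
    return time.process_time() - cpu_start

busy = poll_until(time.monotonic() + 0.2, yield_cpu=False)
idle = poll_until(time.monotonic() + 0.2, yield_cpu=True)
# `busy` is close to 0.2 s of CPU time; `idle` is close to zero
```

The wall-clock time is identical in both cases; only the CPU cost differs, which is why a missing wait shows up as load rather than as slowness.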


Do you have a bunch of loops which don't have a 0 ms wait in them to yield the CPU? Acceptable alternatives are event structures, millisecond timing nodes, or almost any NI-built node which includes a timeout terminal.

For loops or while loops without a node like these will be free-running, executing literally as fast as the scheduler will allow, which is often unnecessary and detrimental to the performance of the application as a whole.

No loops that I can think of. I looked through some parts just to make sure, and found one timeout event that should never fire (nothing connected to the timeout terminal), but adding a timeout there didn't make any difference. The fact that the CPU usage seems to be consistently higher relative to that of the 8.6 application, whichever parts of the code are running, makes me think that it is the 2011SP1 run-time engine that is less efficient on this machine.

  • 7 months later...

Hello again folks!

 

A bit of a bump of an old thread here: I still haven't been able to solve this issue, and it remains with LV2012.

 

A short recap of the problem:

 

 

What happens is that on the first, single-processor computer I see a dramatic fall in CPU usage, while the second, in contrast, shows a dramatic rise. The computers do not have LV installed, only the RTEs.

Machine1 (1* Intel® Celeron® CPU 900 @ 2.20GHz):
CPU% with LV8.6: 63%
CPU% with 2011SP1: 39%

Machine2 (2* Genuine Intel® CPU N270 @ 1.60GHz):
CPU% with LV8.6: 40%
CPU% with 2011SP1: 102%

On the second machine the maximum CPU is 200% since it has two CPUs. The load seems to be pretty even between them.

 

 

I've been trying to track this down again lately, and my suspicions now point towards hyperthreading, as this is one of the main differences between the computers.

Machine 2, with the CPU described above as 2* Genuine Intel® CPU N270 @ 1.60GHz, turns out to be a single-core CPU with hyperthreading enabled, whereas machine 1, with its 1* Intel® Celeron® CPU 900 @ 2.20GHz, does not use hyperthreading.
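On Linux you can confirm this without the vendor's help by comparing the `siblings` and `cpu cores` fields in /proc/cpuinfo: more siblings than cores means hyperthreading. A sketch of the check; the parser is exercised here on a hypothetical N270 excerpt rather than a live file:

```python
def is_hyperthreaded(cpuinfo_text):
    """True if /proc/cpuinfo reports more logical siblings than physical cores."""
    siblings = cores = None
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "siblings":
            siblings = int(value)
        elif key == "cpu cores":
            cores = int(value)
        if siblings is not None and cores is not None:
            break
    return siblings is not None and cores is not None and siblings > cores

# Hypothetical excerpt for a single-core, two-thread Atom N270:
sample = ("model name : Intel(R) Atom(TM) CPU N270 @ 1.60GHz\n"
          "siblings : 2\n"
          "cpu cores : 1\n")
print(is_hyperthreaded(sample))  # → True
```

On a real system you would pass `open("/proc/cpuinfo").read()`; `lscpu` reports the same thing as "Thread(s) per core".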

 

I've tried most performance tricks in the book (turning off debugging, setting compiler optimization, etc.) to no avail, apart from minor improvements.

 

Unfortunately we cannot turn off hyperthreading on machine 2; the option seems to be disabled in the BIOS. We've contacted the vendors and might be able to get hold of another BIOS in a few days if we're lucky. Machine 1 doesn't support hyperthreading.
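If the BIOS option stays locked, Linux offers two runtime workarounds worth testing: as root, take the sibling thread offline (`echo 0 > /sys/devices/system/cpu/cpu1/online`), or pin the executable to one logical CPU. A hedged sketch of the pinning approach; `os.sched_setaffinity` exists only on Linux, so the function degrades gracefully elsewhere:

```python
import os

def pin_to_cpu0():
    """Restrict this process to logical CPU 0 (Linux only).

    Returns the resulting affinity set, or None where the call is
    unsupported. With both logical CPUs of a hyperthreaded core in use,
    this approximates disabling HT for the pinned process.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {0})  # 0 = current process
    return os.sched_getaffinity(0)

mask = pin_to_cpu0()
```

The equivalent from the command line, without touching the build, is `taskset -c 0 ./your_executable`; if the CPU figures normalize when pinned, that would point strongly at hyperthreading.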

 

Has anyone ever run into problems like these with hyperthreading on Linux? Any idea what I can do to solve the issue, apart from buying new computers? Or am I barking up the wrong tree in thinking this has anything to do with hyperthreading?

  • 2 years later...

I'm upgrading from LabVIEW 8 to LabVIEW 2014 and have the same problems, also using Linux/Ubuntu to run my application.

I found this article:

 

https://forums.ni.com/t5/LabVIEW/CPU-usage-rises-in-LabVIEW-executable/td-p/2194414

 

In my case it doesn't help to replace all the deprecated property nodes, but I think that's one problem relating to the high CPU load...

