Graphics sucking up processor... how to improve?


JPrevost


So my program is displaying a lot of data to a GUI. Attached is a screenshot.

My laptop is a Compaq Presario 3000, model 3045US. Specs are as follows: P4 2.4 GHz, 512 MB RAM, 60 GB HDD, 16" LCD at 1280x1024, SiS 650M video with shared RAM.

My program reads serial data at approximately 37.6 kbps; the data is sent asynchronously, blind. Each packet is 277 bytes: 256 bytes dumped from the car computer's RAM, followed by 8 10-bit A/D channels, a 16-bit checksum of all of the data, and the start/stop bytes "55AA", repeated continuously (duh, async). My program has several loops: one for events, one for reading the serial port or a playback file, and one to display the GUI. After the serial loop has a full packet, it computes a checksum and compares it to the checksum bytes in the packet; if they match, it sends the packet to the queue. The GUI loop dequeues the byte array "packet" (FIFO), indexes it, and sends the values to the gauges and various displays.

Here is the problem: it takes about 35-40% of my CPU, but ONLY while the data is being sent to the gauges. Mind you, there are a couple of tabs in my program, and a conditional case structure updates only what is visible on the tab in focus. So it's clearly not the indexing and calculations on the byte array that are the slowdown: when I display only "flag" (i.e., Boolean) indicators, CPU usage is much lower, around 2-6%; switch over to the gauges and one line chart and it jumps up :( .

Here might be my problem: I'm using classic gauges instead of the default 7.0 gauges because I thought they would use less CPU; is that wrong? Are the newer gauges, with the alpha-blended edges and smooth appearance, more efficient code-wise? I don't have the time to convert all of my gauges, so maybe somebody knows. Other than that, I'm at a complete loss. The gauges are only refreshed at 17 Hz! Slower laptops have had similar issues with my program, and honestly, I'm fed up. My last step is to update the GUI less frequently, every other packet, and from there I might try indexing the byte array for only the bytes I need instead of sending the whole packet into the queue... although I need to display about a third of the bytes in each packet :headbang:

[Attached screenshot: post-3780-1136849352.png]
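The serial-loop logic described in the post (verify a 16-bit checksum, then hand good packets to a FIFO for the GUI loop) could be sketched like this in Python. LabVIEW is graphical, so this is only a textual analogue; the byte offsets and the additive summing scheme are assumptions, not the car firmware's actual layout:

```python
import queue
import struct

# Hypothetical layout based on the post; exact offsets are assumptions.
PACKET_LEN = 277          # total bytes per packet, as stated in the post
CHECKSUM_OFFSET = -4      # 16-bit checksum assumed just before the "55AA" marker

gui_queue = queue.Queue() # FIFO between the serial loop and the GUI loop

def checksum_ok(packet: bytes) -> bool:
    """Compare a simple 16-bit additive checksum of the payload against
    the value stored in the packet. The summing scheme is an assumption;
    match it to whatever the car's firmware actually computes."""
    payload = packet[:CHECKSUM_OFFSET]
    stored = struct.unpack_from(">H", packet, PACKET_LEN + CHECKSUM_OFFSET)[0]
    return (sum(payload) & 0xFFFF) == stored

def serial_loop_step(packet: bytes) -> None:
    # Producer side: only forward packets that pass the checksum.
    if len(packet) == PACKET_LEN and checksum_ok(packet):
        gui_queue.put(packet)
```

The key point of the pattern is that the serial loop does the cheap validation work and the GUI loop only ever sees complete, verified packets.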


Hi JP:

One thing to try: it looks to me like you've got a lot of things overlapping on the display, digital displays overlapping the gauges, etc.

I seem to recall I had really poor performance the last time I made a display with overlapping graphic items. That was some time ago, and perhaps LabVIEW, Windows, and graphics cards have gotten better since then, but I don't know; my early experience with overlapping graphics was so bad that I've gotten out of the habit of letting items that get updated overlap. (And, just as an opinion, I think a less crowded display is easier on the user anyhow.) You might try eliminating the overlap: first the actual visible overlaps, and if that doesn't fix it, try eliminating the overlaps of the rectangles that bound the round dials too.

Also, it's a bit of a hassle to program, but the temperatures probably change so slowly that you can update them once every couple of seconds. I've owned cars where the temp gauge moved almost as fast as the tach, but not for very long :( .

Anyhow, best of luck, and I hope the suggestions help. Louis
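Louis's suggestion of updating slow-moving values such as temperatures only every couple of seconds amounts to a simple time-based throttle. A minimal Python sketch (the 2-second interval is just an example, and the injectable clock exists only to make the wrapper testable):

```python
import time

class Throttled:
    """Wrap an indicator-update callable so it fires at most once per
    interval. Useful for slow-moving values like coolant temperature."""

    def __init__(self, update, interval_s=2.0, now=time.monotonic):
        self.update = update          # the real "write to the gauge" call
        self.interval_s = interval_s
        self.now = now                # clock, injectable for testing
        self.last = float("-inf")     # time of the last forwarded update

    def __call__(self, value):
        t = self.now()
        if t - self.last >= self.interval_s:
            self.update(value)        # forward this sample to the display
            self.last = t             # remember when we last redrew
        # otherwise silently drop the sample; the gauge keeps its old value
```

Fast-moving values (tach, line chart) would bypass the wrapper; only the slow channels go through it.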


Usually, such problems are caused by overlapping elements, which make it harder for LV to draw the FP. This could definitely be a problem in your FP, since it's very crowded and has a lot of round indicators. Having fewer elements on the screen would probably also help the user pick out the more important information easily.

Try searching the NI website for a presentation called "The Good, the Bad and the Ugly", which discusses LV interface design.


Things that affect performance:

1. Overlapping graphic objects.

2. Transparent objects overlapping each other or other graphics.

3. New-style (6i) controls/indicators have a bit more overhead because of the shading, etc.

4. Static graphic objects (boxes, arrows, etc.), if placed, should be placed at the BACK (Group menu > Move to Back).

5. Things like logos should not overlap other objects. If there is no choice, move them to the back.

6. Do you really need a 17 Hz update on the panel meters? Use the "Defer Panel Updates" property to update indicators every 250 ms or so. Anything faster is useless anyway (unless captured on a plot).

7. Get the latest drivers for your graphics card (this may not make much of a difference).

8. Increase system memory to 1 GB; 512 MB is barely enough nowadays.

9. Upgrade to a newer laptop if you have the option. I have had display-related issues with older laptops.

10. Keep user-interface updates in a separate loop from the rest of the acquisition/control code.

Don't worry about graphics on a tab page that is not displayed. That won't affect performance, since those pages don't have to be drawn.

Calculations etc. are usually very fast in LabVIEW. However, avoid building arrays in a loop; that calls the memory manager, which will slow down your system.

You can add logic to display only every second, third, or fourth value, based on the number of elements in the GUI queue. If there's too much backlog, display less data.

Neville.
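Neville's backlog suggestion (display only every 2nd, 3rd, or 4th value when the GUI queue backs up) could be sketched like this in Python, as a stand-in for the LabVIEW consumer loop; the backlog thresholds are arbitrary placeholders:

```python
import queue

gui_queue = queue.Queue()   # FIFO fed by the serial/playback loop

def decimation_factor(backlog: int) -> int:
    """Pick how many packets to skip per displayed packet, based on how
    far behind the GUI is. These thresholds are made-up examples."""
    if backlog > 20:
        return 4    # badly behind: show every 4th packet
    if backlog > 5:
        return 2    # mildly behind: show every other packet
    return 1        # keeping up: show everything

def drain_and_display(display) -> None:
    """One GUI-loop pass: consume everything queued, but only push every
    Nth packet to the indicators when there is a backlog."""
    n = decimation_factor(gui_queue.qsize())
    i = 0
    while True:
        try:
            packet = gui_queue.get_nowait()
        except queue.Empty:
            break
        if i % n == 0:
            display(packet)   # update gauges/chart with this packet
        i += 1
```

Every packet is still dequeued (so the backlog actually drains); only the expensive redraw is skipped.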

Don't worry about graphics on a tab page that is not displayed. That won't affect performance, since those pages don't have to be drawn.

Actually, I have seen a case in the past where there was an apparent bug with a tab control where displaying something demanding on a non-visible page of a tab did cause massive CPU usage, but I've never been able to reliably recreate this.

Actually, I have seen a case in the past where there was an apparent bug with a tab control where displaying something demanding on a non-visible page of a tab did cause massive CPU usage, but I've never been able to reliably recreate this.

This can happen with ActiveX controls. LabVIEW's control over them is very limited, and in combination with tabs there has been a bug in LabVIEW that left the ActiveX control unaware that it shouldn't paint at all. On top of that, support for partially covering ActiveX controls is very limited, which means an ActiveX control in LabVIEW is either fully visible or not visible at all; overlapping will ALWAYS result in the ActiveX control drawing over anything else, independent of the Z order.

Rolf Kalbermatter

This can happen with ActiveX controls.

Not in that case. Everything was standard LV controls (not even dialog controls).

Since I couldn't recreate it, and it went away once the real cause of the high CPU usage was found, I just let it lie. But it did happen: switching to an empty page of the tab and moving the tab control entirely off the screen gave two different results.


Thanks for the ideas, guys (and maybe gals). I'll see what happens. My program is definitely cluttered, but to an experienced tuner it's really nice to have everything at hand without switching between different views; out of the corner of my eye I can watch some things while focusing on others.

The reason I can't drop the update frequency is playback. My program plays back the data at user-defined speeds: slow, super fast, etc. When playing very fast, the queue starts to stack up :( .

It's tough being the only developer of a huge program like this. I wish I had a team... but I've got this forum, which I don't have to pay... yet ;) . Again, thanks for the ideas; I will implement them and report back.


If removing the overlap and increasing the time between refreshes do not fully recover CPU speed, try investing in a graphics card. GCs almost always have a separate processor and memory dedicated to graphics alone. As a result, your main processor and RAM will be reserved for computational work, while the graphics card's processor and memory deal with the graphics. Mid-range cards usually run around $100; however, you may find a decent lower-range card that fits your application for much less.

I suggest buying a card with at least 64 MB of memory and a core clock speed of 175 MHz.

This may not be of help if you have no available PCI/AGP slots, or you are under a strict budget :oops: . I am sure, though, that the information in the other members' posts will prove much more helpful in solving your dilemma :thumbup:, since those are software fixes. This is more of a last resort :ninja: .

--Mayur

If removing the overlap and increasing the time between refreshes do not fully recover CPU speed, try investing in a graphics card.

I think you missed where I said, in the 2nd sentence, "my laptop": no PCI/AGP slots and no way to upgrade other than memory. But yes, the quick and dirty solution would be to raise the minimum requirements to run my program; that would just shrink my target market.

I think you missed where I said, in the 2nd sentence, "my laptop": no PCI/AGP slots and no way to upgrade other than memory. But yes, the quick and dirty solution would be to raise the minimum requirements to run my program; that would just shrink my target market.
I have a hunch...

Based on my experience, new LV developers usually pass data to the UI by using the Value property, most of the time through control references. I don't want to stereotype, but if I'm wrong, let me know. This might be your problem, since that method of passing data is slow. If you can, wire the data straight to the indicators' terminals, or use a local. If rewriting your code is too much work, you can use the "Defer Panel Updates" property.

I think you missed where I said, in the 2nd sentence, "my laptop": no PCI/AGP slots and no way to upgrade other than memory. But yes, the quick and dirty solution would be to raise the minimum requirements to run my program; that would just shrink my target market.

Whoops... :headbang: my bad.

Based on my experience, new LV developers usually pass data to the UI by using the Value property, most of the time through control references.

Yup, wrong; I read about not doing that, so I don't. I wire directly to the terminal; if I can't, I'll use a local, and then those LV2 globals. Do you think it's better to use LV2 globals rather than locals? My program is self-contained, and any data I pass to a separate VI goes through that VI's connector pane. I use shift registers and feedback nodes as often as I can. I'm getting better, but the larger the program gets (the exe is almost a meg now), the more locals I seem to need :( .

Yup, wrong; I read about not doing that, so I don't. I wire directly to the terminal; if I can't, I'll use a local, and then those LV2 globals.
Ok, that's good. You are my friend now... :wub: .

I don't know how the loop timing is configured, since you haven't shown us the code yet, but you might want to look at placing a Wait function within the loop that updates the front panel. If you already have waits or timeouts, how long are they? Could you post a screen capture of that portion of the code? Adding a small delay usually gives the processor time to perform other tasks and reduces the load.

I don't know how the loop timing is configured, since you haven't shown us the code yet, but you might want to look at placing a Wait function within the loop that updates the front panel.

Let's just say the block diagram is 4800x3000 at the moment but will NOT be getting any larger :) . Got a good video card, lol.

It's really not that bad. Later this evening, when I get some free time, I'll make a clean, easy-to-read flow of my code that'll fit in one screen capture. I'm not using a wait in the display loop, only in the user-event and serial/playback input loops. I thought that because of the queue, the while loop containing the display terminals would just wait at the dequeue until there is something to output?

Just a quick question: is it better to put in a wait of 0 ms or 1 ms, or doesn't it matter?

Just a quick question: is it better to put in a wait of 0 ms or 1 ms, or doesn't it matter?

0 ms will work, but add a reasonable delay. If you can live with 50 ms updates, then use 50 ms. It's just good practice not to hog all the CPU time when you don't need it.

Neville.
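On the 0 ms vs. 1 ms question: a dequeue that blocks with a timeout already yields the CPU while waiting, which has the same effect as the delay Neville describes. A Python sketch of one display-loop iteration (the 50 ms timeout mirrors his suggestion; the `display` callable stands in for writing to the indicators):

```python
import queue

gui_queue = queue.Queue()   # FIFO fed by the serial/playback loop

def gui_loop_step(display, timeout_s=0.05) -> bool:
    """One iteration of the display loop. A blocking get with a timeout
    sleeps inside the queue instead of spinning at 0 ms, so no separate
    wait primitive is needed. Returns True if a packet was displayed."""
    try:
        packet = gui_queue.get(timeout=timeout_s)
    except queue.Empty:
        return False      # nothing arrived; the loop can check a stop flag here
    display(packet)       # push the packet's values to the indicators
    return True
```

The timeout bounds how long the loop sleeps when idle, so a stop condition is still checked regularly.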
