
What are the *real* system requirements for LabVIEW?



One of my customers asked me a simple question the other day, and I realized I didn't have a good answer. He's writing some LabVIEW code that is doing some pretty intensive plotting, and it's crushing his (admittedly anemic) CPU. So he wants to know: Will a faster/better computer fix it, and what features should it have?

Now obviously, the standard answers here are:

  1. Debug the code to find bottlenecks and/or acceptable tradeoffs, then optimize.
  2. Buy the fastest/biggest thing you can afford; computers are cheap.

It occurs to me that I'd like a better answer than that, though. NI's LabVIEW requirements page recommends a Pentium 4/M (and elsewhere implies 4/M at >= 866 MHz) with at least 1 GB of RAM. That's fine, but it's only part of the story. Can anyone help me fill in the rest? Here's what I'm looking for:

  • In general, what's the biggest limiting factor in LabVIEW's execution? Is it RAM? CPU speed? Bus speed?
  • Specifically with regard to graphing/UI stuff, what's the biggest limiting factor? Is it the UI controls themselves? Does accelerated graphics ever matter at all to LabVIEW?
  • Given a problem with CPU railing during intensive plotting (not 3D picture control), what's the best way to tackle the problem other than code changes?


... Can anyone help me fill in the rest? Here's what I'm looking for:

  • In general, what's the biggest limiting factor in LabVIEW's execution? Is it RAM? CPU speed? Bus speed?
  • Specifically with regard to graphing/UI stuff, what's the biggest limiting factor? Is it the UI controls themselves? Does accelerated graphics ever matter at all to LabVIEW?
  • Given a problem with CPU railing during intensive plotting (not 3D picture control), what's the best way to tackle the problem other than code changes?

I can optimize code or deploy to more than one CPU, so virtual address space is the limit I hit most often. Even bus speed can be worked around by designing the app to run on more than one machine (divide and conquer).

The UI thread is single-threaded. Accelerated graphics really made a big difference in my 3D graphing speed. Even the old 3D graph (which only used a single core to render the graphics) got much quicker when I moved to my new top-of-the-line laptop.

Your last question is rather limiting, but... hiding the FP of a LV app really can speed up a screen update.

Ben


Your last question is rather limiting, but... hiding the FP of a LV app really can speed up a screen update.

I'm confused by this suggestion... hiding the front panel (which is where the graph is being drawn) would kind of defeat the purpose of trying to show the data in the first place, no? Am I misunderstanding?


I'm confused by this suggestion... hiding the front panel (which is where the graph is being drawn) would kind of defeat the purpose of trying to show the data in the first place, no? Am I misunderstanding?

I can't answer for neBulus, but I read that to mean using the Defer Panel Updates property: defer updates, do your work, then re-enable front panel updates (using the same property). I've seen a lot of improvement using this with trees and with UIs that make a lot of changes (control properties and such).

In my experience, RAM is a big part of it. LabVIEW can eat up a lot of RAM, and Windows will do funky things with virtual memory: it may page parts of LabVIEW out to the page file even when several hundred MB of physical RAM are still available. The more physical RAM available, the less likely Windows is to use the page file. There are ways to disable the page file (or make it smaller), but it's there for a reason, namely for when you run out of physical RAM; I just wish Windows managed it a little better.
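
If you want to see whether paging is actually hitting your app, here is a rough sketch that watches the process from outside using Python and the third-party psutil package. The process name is an assumption (a built app shows up under its own executable name), and num_page_faults is a Windows-only field:

    import time

    import psutil  # third-party: pip install psutil

    # Find the LabVIEW process by name; "labview" is an assumption --
    # a built app shows up under its own executable name instead.
    lv = next((p for p in psutil.process_iter(["name"])
               if p.info["name"] and p.info["name"].lower().startswith("labview")),
              None)
    if lv is None:
        raise SystemExit("no LabVIEW process found")

    for _ in range(12):  # sample for about a minute
        mem = lv.memory_info()
        # On Windows, memory_info() includes num_page_faults; a count that
        # climbs quickly while the working set stays flat suggests Windows
        # is paging the process out.
        print(f"working set: {mem.rss / 2**20:6.0f} MB   "
              f"page faults: {mem.num_page_faults}")
        time.sleep(5)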


I'm confused by this suggestion... hiding the front panel (which is where the graph is being drawn) would kind of defeat the purpose of trying to show the data in the first place, no? Am I misunderstanding?

In parentheses, you ruled out code changes, so I offered a suggestion that does not require code changes. How can it help?

I have code that renders the CAT scan we often see as a GIF of a human head (it looks like Hova's avatar) as a 3D image. It lets me turn that image around so I can see the guy's face. On my old laptop, it would take about an hour to render the image in 3D. On my new laptop it takes only about 15 minutes to update the screen if I watch the updates, but if I minimize the window and watch the CPU (Windows Task Manager), I can see that the update completes in about two minutes.

So...

It did not require code changes.

It did increase the rendering speed.

If you lifted the "no code change" restriction, I'd suggest: defer FP updates, hide the widget, update the widget, show the widget, un-defer updates (see the sketch below).
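
Since LabVIEW is graphical there's no block diagram I can paste here, but the same hide-then-update idea looks like this in Python/Tkinter, purely as an analogy (the listbox and row count are made up; in LabVIEW you would wire the panel's Defer Panel Updates property and the control's Visible property instead):

    import tkinter as tk

    root = tk.Tk()
    listbox = tk.Listbox(root, height=20)
    listbox.pack()

    def bulk_update():
        # "Hide the widget": unmap it so the inserts below can't trigger redraws
        listbox.pack_forget()
        for i in range(50000):                    # the heavy burst of UI updates
            listbox.insert(tk.END, f"point {i}")
        # "Show the widget": remap it and pay for a single redraw at the end
        listbox.pack()

    root.after(100, bulk_update)  # kick off the update shortly after startup
    root.mainloop()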

BTW: Using Defer FP update when trying to update a chart on a tab control results in no update as of LV 2009.

Ben


We put together test equipment. General rules for what to get in a PC for us are:

- 4 GB RAM, possibly more if it's an NVH application

- A faster dual-core processor. The only time we used 4 cores was with a LabVIEW 8.0 application, and 3+ cores slowed the test by ~40% due to multi-threading. We've not tried more than two cores since, so I can't comment on whether that has improved.

- Does the PC have enough ISA/PCI/PCIe slots? (Yes, we still occasionally use one ISA card... baffling.)

Video cards have only come up once, when a customer wanted four monitors from one PC. Otherwise, the screens don't update so much information that disabling updates for a moment can't take care of any performance issues. The performance issues I do run into are with multi-column listboxes, when doing something like setting the colors of individual cells (red = fail, green = pass), and with populating tables. Graphs don't give me issues; I've tested putting 10,000 waveforms of 16k points each on a graph and only saw a slight performance hit when zooming in and out.


If you are dealing with large amounts of data, memory is going to be a real problem, and it gets even worse if you're plotting it. Why? 1) LV doesn't handle large data sets well, and 2) 8.6 (the version you have listed) isn't "large address aware". Check out the following if you're using a built app:

http://digital.ni.com/public.nsf/allkb/1203A9B2930B7961862576F00058F94E

(thanks to mesmith here for that link. I haven't had a chance to check out the suggestions he or ShaunR made yet)

IOW, sure, getting a faster CPU should help with number crunching, but optimizing memory use will go far, too.
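
If you want to check whether a built executable already carries the flag, here is a quick Python sketch that reads the large-address-aware bit straight out of the PE header; the path is a placeholder:

    import struct

    IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # bit in the COFF Characteristics field

    def is_large_address_aware(exe_path):
        with open(exe_path, "rb") as f:
            f.seek(0x3C)            # e_lfanew: file offset of the PE header
            pe_offset = struct.unpack("<I", f.read(4))[0]
            # Characteristics sits after the 4-byte "PE\0\0" signature plus
            # 18 bytes of COFF header fields.
            f.seek(pe_offset + 22)
            characteristics = struct.unpack("<H", f.read(2))[0]
        return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

    print(is_large_address_aware(r"C:\path\to\MyBuiltApp.exe"))  # placeholder path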


...UI thread is single threaded. ...

Ben

I'd like to add an outlandish statement re: the UI thread.

The UI thread is over-used. It is now time for NI to seriously consider getting all GUI updates out of the UI thread so that they can happen in parallel. It was a clever move when first implemented, but the time has come to fix the UI bottleneck.

Ben


In the past, making graphs/charts transparent and overlaying them drastically slowed down drawing updates on panels, because LV had to draw and then redraw the layers above. Try to avoid transparency, or else make sure transparent elements are not overlaid on each other.

But I agree: panel drawing routines, especially for charts/graphs, have been a performance bottleneck for ages. In this day of streaming video and of video cards used as co-processors for FFTs, having an application fall to its knees when plotting data is a bit ridiculous.

NI really needs to work on improving performance in this area.

N.


Thanks, everybody, for all the feedback. Everything is pretty much along the lines of what I already suspected, but it's good to see the discussion.

We've been discussing this internally at JKI, too, and I've sort of come to the following take-home points:

  • Good code always beats bad code (this is obvious, and yes, I ruled this out in the original question, on purpose)
  • A faster computer will definitely help in the case I specified. Also, more RAM never hurt anyone.
  • Being careful about transparency and tab controls (especially nested ones) will probably help, too.
  • A powerful graphics card will probably not change things much.


... A powerful graphics card will probably not change things much.

On that last point.

I suspect that very little of LV graphics can take advantage of the accelerator on high-end graphics cards.

The new 3D picture control may be the exception. I am getting good response from it and have not (yet) totally ruled out including the 3D graphics as a real-time update.

So if the app is using the 3D picture, a good graphics card may pay off.

Ben


I'll throw one other thing in: SSDs have improved performance most everywhere I've used them. For instance, if you do get memory paging, paging to an SSD will be a lot faster than paging to a spinning disk. It's kind of an expensive option, but it almost always helps.

Mark

