
LabVIEW benchmarks


Adam Kemp


When we release a new version of LabVIEW, we like to be able to compare its performance to previous releases. As we make improvements to the compiler, the runtime, the execution system, and specific algorithms, we often see that the same applications run faster in the newer version. However, we usually only do these comparisons for specific changes that we know we've made, in order to highlight those improvements. We haven't really settled on any standard benchmarks that we compare with every release. We think that would be a good idea, but we want to ask the various LabVIEW communities which benchmarks they think would be valuable.

Here are some questions that you may have answers to: When a new version of LabVIEW comes out and you are deciding whether or not to upgrade, what kinds of performance issues do you consider important in making that decision?

What kind of general benchmarks would you like to see us run on every release of LabVIEW?

Example benchmarks might be how long it takes to run a certain FFT or how fast we can stream data to or from disk.
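To make that concrete, here is a minimal sketch of what those two measurements could look like, written in Python purely as a textual stand-in for the equivalent G code (the sizes, iteration counts, and file name are all made up for illustration):

```python
import time
import numpy as np

def bench_fft(n=1 << 20, reps=10):
    """Average wall-clock time of one FFT of n random points."""
    x = np.random.rand(n)
    t0 = time.perf_counter()
    for _ in range(reps):
        np.fft.fft(x)
    return (time.perf_counter() - t0) / reps

def bench_disk_stream(path="stream.bin", chunk_mb=8, chunks=32):
    """Sustained write rate, in MB/s, when streaming fixed-size chunks to disk."""
    data = np.random.bytes(chunk_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(data)
    return (chunk_mb * chunks) / (time.perf_counter() - t0)

print(f"FFT (2^20 points): {bench_fft() * 1e3:.1f} ms")
print(f"Disk streaming:    {bench_disk_stream():.0f} MB/s")
```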


These are all very difficult questions. But the basics like FFTs, filters and so on should be as fast in LV as with any other tool. There should be no room for improvement in those things, since optimized algorithms have existed for ages, and there is no good reason why LV should not use the best algorithms.

I think improvements can be better measured in LV-specific things: efficiency of queues, subVI calls, compiler optimization of large diagrams vs. using subVIs, efficiency of LVOOP operations, graphs and charts, and so on.
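As a rough illustration of the first two items, micro-benchmarks like these could look something like the following (Python standing in for the equivalent G code; a queue.Queue stands in for a LabVIEW queue, a plain function for a subVI, and every name and count here is invented):

```python
import time
from queue import Queue

def bench_queue(n=100_000):
    """Enqueue/dequeue pairs per second through a FIFO queue."""
    q = Queue()
    t0 = time.perf_counter()
    for i in range(n):
        q.put(i)
        q.get()
    return n / (time.perf_counter() - t0)

def trivial_subvi(x):
    # Stands in for a do-nothing subVI, so the loop measures call overhead.
    return x + 1

def bench_calls(n=1_000_000):
    """Calls per second into a trivial subroutine."""
    acc = 0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = trivial_subvi(acc)
    return n / (time.perf_counter() - t0)

print(f"Queue: {bench_queue():,.0f} put/get pairs per second")
print(f"Calls: {bench_calls():,.0f} subroutine calls per second")
```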


QUOTE (bsvingen @ Feb 25 2009, 06:06 PM)


I'm a bit of an outlier here, as I always upgrade to the latest release ASAP. Being on SSP (Premium Support), there really isn't any good reason (at least for me) NOT to do that.

But in terms of comparisons, I actually think FFT, JTFA, wavelet and matrix operations are a good start. Picture Control operations, along with the list specified above, are all good starting points IMO.


QUOTE (Adam Kemp @ Feb 25 2009, 03:48 PM)


Example benchmarks in my mind would be those large-scale functions, like imaging functions. They require more resources from the computer, and their running times are longer. If NI could improve their performance or shorten the running time while doing a better job, I would not hesitate to upgrade.

Another big attraction is new stuff, like voice recognition or computer teaching, just for example. I mean, these are still challenging technologies now. New stuff that is useful for automation is definitely a push to upgrade, as are improvements to current algorithms.


I think that general benchmarks should also be considered.

Benchmark to consider: Memory footprint for a given app.

Last year, one of my customers did not switch to LV 8.6 because a very large LVOOP app (> 5k VIs) was using a significantly larger amount of memory than the same app in LV 8.5 (I don't remember for sure, but I think it was on the order of 20% more).

Other general benchmarks (for a given app): start time and shutdown time (again, with a large app these could be very long...).
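A minimal sketch of the memory-footprint measurement, assuming the third-party psutil package and a hypothetical executable path (in a real comparison you would run the same app built under each LabVIEW version and diff the numbers):

```python
import time
import psutil

def measure_footprint(exe_path, settle_s=30):
    """Launch an app, let it settle, and report its resident set size in MB."""
    proc = psutil.Popen([exe_path])
    time.sleep(settle_s)              # wait for loading to finish
    rss = proc.memory_info().rss      # resident set size, in bytes
    proc.terminate()
    proc.wait()
    return rss / (1024 * 1024)

print(f"Footprint: {measure_footprint('C:/apps/MyLargeApp.exe'):.0f} MB")
```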

Something else that I would like to see (but I am sure that this is probably exotic for most people) is a benchmark on the rendering speed of images (objects [square, rectangle, circle, text ...]) drawn in the classic picture control.
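As a sketch of that kind of rendering benchmark, here is roughly the shape of it, with Pillow's ImageDraw standing in for the classic picture control (the canvas size, primitive mix, and counts are arbitrary):

```python
import time
from PIL import Image, ImageDraw

def bench_draw(n=10_000):
    """Groups of primitives (rectangle + ellipse + text) drawn per second."""
    img = Image.new("RGB", (800, 600))
    draw = ImageDraw.Draw(img)
    t0 = time.perf_counter()
    for i in range(n):
        x, y = i % 700, i % 500
        draw.rectangle([x, y, x + 50, y + 50], outline="white")
        draw.ellipse([x, y, x + 50, y + 50], outline="red")
        draw.text((x, y), "benchmark")
    return n / (time.perf_counter() - t0)

print(f"Rendering: {bench_draw():,.0f} primitive groups per second")
```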

PJM


I would like to see a benchmark for:

1) Vision functions: threshold/extract image, copy image, image-to-image operations, and management of large numbers of image buffers.

2) Manipulation of complicated, deeply nested structures, e.g. an array of clusters of arrays of edge (x,y) coordinates, as output from some of the image functions like the IMAQ Concentric Rake. See the Search Arcs output and the ROI Descriptor input in this picture:

[Image: IMAQ Concentric Rake VI showing the Search Arcs output and the ROI Descriptor input]

Thanks,

Neville.


QUOTE (Neville D @ Feb 26 2009, 12:31 PM)

I'm specifically asking for benchmark ideas for LabVIEW, not drivers or extra toolkits. Working with deeply-nested structures is general enough to benchmark, but IMAQ algorithm performance is dependent on code that is independent of LabVIEW. Similarly I'm excluding things like DAQ performance or RT hardware. Those are things worthy of benchmarks, but those benchmarks should compare different versions of their respective products, not different versions of LabVIEW.


QUOTE (Mark Yedinak @ Feb 26 2009, 02:21 PM)

You may also want to consider things like processing time for operations on, and manipulation of, large arrays.

What kind of operations/manipulations? A lot of the focus on improving performance with large data structures has been on finding ways to avoid copying them, so if we do that right then operations on individual elements within them should be just as fast no matter how big the array is. Are there specific whole-array operations that you think are performance issues and change between LabVIEW releases?
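To illustrate the claim, here is a quick sketch of the kind of check I mean, with NumPy's in-place assignment standing in for an in-place Replace Array Subset (sizes and counts are arbitrary): if copies really are avoided, the per-element time should stay flat as the array grows.

```python
import time
import numpy as np

for size in (1_000, 1_000_000, 10_000_000):
    a = np.zeros(size)
    t0 = time.perf_counter()
    for _ in range(10_000):
        a[size // 2] = 1.0        # in-place: the whole array is never copied
    per_op = (time.perf_counter() - t0) / 10_000
    print(f"{size:>10,} elements: {per_op * 1e9:.0f} ns per element replace")
```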


QUOTE (Adam Kemp @ Feb 26 2009, 01:32 PM)


Replacing elements, adding a row, searching, or splitting them are a few operations that come to mind. I have encountered some performance issues when using tables (2-D string arrays) with a fair amount of data in them. We use tables to update test results or to display test data. For test results, we color the cell representing the result green for a pass, red for a fail, or orange for an error. Even restricting the tables to a few hundred lines, updating the color of the cells is a time-consuming task. The color attribute for table cells does not actually follow the data: if you delete a row from the table, the corresponding cell color is not deleted. Therefore we end up having to track the cell colors manually ourselves, which results in quite a bit of array processing. These are mainly simple operations, but in some tests we end up doing this quite a bit. If I get a chance I will try to put together an example, if you think this would be helpful.
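If it helps, here is a rough sketch of timings for exactly those operations, with a 2-D list of strings standing in for a LabVIEW table (the table size and cell values are invented):

```python
import time

ROWS, COLS = 500, 8
table = [[f"r{r}c{c}" for c in range(COLS)] for r in range(ROWS)]

def timeit(label, fn, reps=1_000):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    print(f"{label:>18}: {(time.perf_counter() - t0) / reps * 1e6:.1f} us")

def replace_cell():
    table[250][3] = "PASS"              # update one result cell

def insert_and_remove_row():
    table.insert(250, ["?"] * COLS)     # add a row...
    table.pop(250)                      # ...then undo it to keep the size fixed

def search():
    next(r for r in table if r[3] == "PASS")

def split():
    top, bottom = table[:250], table[250:]

timeit("replace cell", replace_cell)
timeit("insert+remove row", insert_and_remove_row)
timeit("search", search)
timeit("split", split)
```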


QUOTE (Adam Kemp @ Feb 26 2009, 10:45 AM)

I'm specifically asking for benchmark ideas for LabVIEW, not drivers or extra toolkits.

Does that apply to the Signal Processing Toolkit as well? For me performance there is critical. And FWIW, compile and release it for Mac, please. As I understand it, the issue really is just a compile... :rolleyes:


QUOTE (Val Brown @ Feb 26 2009, 04:32 PM)


That applies to anything that is not core LabVIEW. If the Signal Processing Toolkit's performance improves, then it can have its own benchmarks. If its performance improves because LabVIEW itself got better, then you should see that in more general benchmarks.


QUOTE (Adam Kemp @ Feb 26 2009, 03:52 PM)


Understood, and both are what I've seen over the years (i.e., since LV 5). I'm asking -- a bit clumsily -- whether benchmarks will be done on the toolkit as well.


QUOTE (Val Brown @ Feb 26 2009, 05:58 PM)


I don't know. I will mention the request to see benchmarks for specific toolkits as well. Thanks for the feedback.


Adam,

How about the entries from the NI coding challenges, for one? They're already more or less highly optimised, so any changes in performance will reflect changes in how LV executes already-optimised code. The danger in using less-than-optimally optimised code for benchmarks is that a copy avoided here or a type cast avoided there gets measured instead of "real" improvements (or regressions) in performance...

We have a dictionary example, a prime number example (two very different approaches, one using the famous Sieve of Eratosthenes, which is a much-publicised benchmark) and other mathematical examples.
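For reference, the sieve is compact enough to restate as a timed benchmark; here is a minimal sketch (Python standing in for the G entry from the challenge, with an arbitrary limit):

```python
import time

def sieve(limit):
    """Return a byte flag per integer up to limit: 1 if prime, 0 otherwise."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Knock out every multiple of p starting at p*p.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return is_prime

t0 = time.perf_counter()
flags = sieve(10_000_000)
print(f"{sum(flags):,} primes below 10,000,000 "
      f"in {time.perf_counter() - t0:.2f} s")
```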

Shane.

