Working with large data sets?



Hello all,

I have a question about working with large data sets. I am currently acquiring about 8 MS (megasamples) of data, splitting it into two arrays, and processing them. I need this VI to be very fast, since I am hoping to build a real-time imaging system.

I am currently using three nested For Loops, but my processing time is still rather slow. The innermost loop does most of the number crunching; the outer two loops reshape the data to the proper size. I have read that if you decimate your data into smaller sets, LabVIEW can perform tasks faster. I have attached a basic version of my algorithm and was wondering if anyone can point me in the direction of using smaller data sets.

I have also read that arrays should be initialized at the output of For Loops for memory management. Has anyone heard of this, and if so, can you provide an example? I have tried, but have not been able to produce encouraging results.

Thanks,

Azazal

Download File: post-1045-1114542273.vi


Hi Azazal:

I'm not sure that I can provide a direct answer to your question, but consider the arithmetic:

I count somewhat more than fifteen math functions (addition, multiplication, random number generation, and arctangent), each executed 4,096,000 times. Assuming, for simplicity, that each function corresponds to at least one floating-point operation, that adds up to more than 60 million floating-point operations (15 × 4,096,000 ≈ 61.4 million), which executed in about 0.9 s on my 1.39 GHz machine. So I suspect the VI is already running about as efficiently as one could expect, and there isn't a memory-management technique that would make it much better.

I'm not sure whether you can reformulate the problem to get rid of the arctangents, but in the old days each one took many, many clock cycles, and I'm not sure how much modern processors have reduced that cost. If you can figure out how to rid yourself of them, things might get better.
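For a rough sense of whether the arctangent still dominates, here is a hypothetical C microbenchmark (my own sketch, not from the thread or the attached VI) comparing the library atan() against a cheap polynomial approximation over the same 4,096,000-element count. The approximation formula and the [0, 1] input range are assumptions on my part:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096000

/* Rough polynomial approximation of atan, usable for |x| <= 1:
 * atan(x) ~ (pi/4)x + 0.273*x*(1 - |x|), max error ~0.005 rad. */
static double atan_approx(double x)
{
    return x * (0.785398163397448 + 0.273 * (1.0 - fabs(x)));
}

int main(void)
{
    double *in  = malloc(N * sizeof *in);
    double *out = malloc(N * sizeof *out);
    if (!in || !out) return 1;

    for (size_t i = 0; i < N; i++)
        in[i] = (double)rand() / RAND_MAX;   /* inputs in [0, 1] */

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        out[i] = atan(in[i]);                /* library call */
    clock_t t1 = clock();
    for (size_t i = 0; i < N; i++)
        out[i] = atan_approx(in[i]);         /* cheap approximation */
    clock_t t2 = clock();

    printf("library atan : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("approximation: %.3f s  (checksum %f)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, out[N - 1]);

    free(in);
    free(out);
    return 0;
}
```

If the approximation wins by a large margin on your hardware, a polynomial or lookup table inside the inner loop may be worth the accuracy trade-off.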

But this is just a guess on my part, and I'll be curious to hear what others think.

Good luck and best regards, Louis

  • 6 months later...

Azazal,

I cannot imagine you could solve this problem even with state-of-the-art hardware. I tested it on my fastest machine, a dual-Opteron XPP. It loaded both CPUs to 100% and still took no less than 740 ms, far too slow for the real-time imaging system you are after. Even extrapolating to two or four of the faster dual-core CPUs, the result would not drop below ~100 ms, which is still not a real-time imaging system.

BTW, your statement 'I am currently acquiring about 8Ms of data...' understates things. What you provided was two 2D arrays of 4,096,000 DBL elements each, which amounts to roughly 8M elements × 8 bytes = 64 MB. On a 32-bit machine, that takes at least 16M CPU read accesses, which by itself costs some milliseconds for data sets this large.

Reviewing your code, the only suggestion I can make is to get rid of the autoindexing tunnels on the right edge of your loops. Replace them with 'Replace Array Subset' writing into a preallocated buffer; this is called working 'in place'.

But this might bring only a modest improvement, maybe a factor of 1.1 to 2.
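To make the 'in place' idea concrete, here is a loose C analogy (my own sketch, not LabVIEW code): the first function grows the output as it goes, while the second writes into a preallocated buffer the way 'Replace Array Subset' on an initialized array does. The ×2 math is a stand-in for the real processing:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096000

/* Grow-as-you-go: each realloc may move and copy everything written so
 * far, analogous to building the result through a loop-edge tunnel
 * when the allocator cannot grow the block in place. */
static double *build_by_appending(const double *src)
{
    double *dst = NULL;
    for (size_t i = 0; i < N; i++) {
        double *tmp = realloc(dst, (i + 1) * sizeof *dst);
        if (!tmp) { free(dst); return NULL; }
        dst = tmp;
        dst[i] = src[i] * 2.0;          /* stand-in for the real math */
    }
    return dst;
}

/* Preallocated: one allocation up front ("Initialize Array"), then each
 * element overwritten in place ("Replace Array Subset"). */
static double *build_in_place(const double *src)
{
    double *dst = malloc(N * sizeof *dst);
    if (!dst) return NULL;
    for (size_t i = 0; i < N; i++)
        dst[i] = src[i] * 2.0;
    return dst;
}

int main(void)
{
    double *src = malloc(N * sizeof *src);
    if (!src) return 1;
    for (size_t i = 0; i < N; i++)
        src[i] = (double)i;

    clock_t t0 = clock();
    free(build_by_appending(src));
    clock_t t1 = clock();
    free(build_in_place(src));
    clock_t t2 = clock();

    printf("appending: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("in place : %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(src);
    return 0;
}
```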

Much more important is choosing the right data type. Consider using SGL instead of DBL, and make the 'ensemble length x overlay' control an I32 to avoid the data conversion inside the inner loop. This alone reduced the processing time to about 580 ms on the machine mentioned above.
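As a back-of-the-envelope check of the data-type point (my own sketch; it assumes your math tolerates single precision), each 4,096,000-element buffer shrinks from about 31 MB as DBL to about 16 MB as SGL, halving the memory traffic:

```c
#include <stdint.h>
#include <stdio.h>

#define N 4096000

int main(void)
{
    /* An I32 loop bound, like the suggested I32 control, avoids a
     * conversion on every pass through the inner loop. */
    int32_t len = N;

    printf("DBL buffer: %.1f MB\n", (double)len * sizeof(double) / 1048576.0);
    printf("SGL buffer: %.1f MB\n", (double)len * sizeof(float)  / 1048576.0);
    return 0;
}
```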

Just my $0.02.


I ran the VI on my computer, then made the changes Louis suggested, and saw similar improvements. I follow the NI Developer Zone RSS feeds and had read "Optimizing LabVIEW Embedded Applications"; I shaved off some additional time by following its advice to "use shift registers instead of loop tunnels for large arrays."

When you pass a large array through a loop tunnel, the original value must be copied into the array location at the beginning of each iteration, which can be expensive. A shift register does not perform this copy, but make sure to wire the left shift register through to the right one on iterations where you don't modify the array, or you will lose the data.
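A loose C analogy for the copy semantics (my own illustration, not NI's): the tunnel behaves like handing the callee its own copy of the array, while the shift register behaves like handing the same buffer back in each iteration:

```c
#include <stdlib.h>
#include <string.h>

#define N 4096000

/* "Loop tunnel": the callee works on a fresh copy of the data, so every
 * call pays for an N-element memcpy first. */
static void process_copy(const double *src, double *scratch)
{
    memcpy(scratch, src, N * sizeof *src);   /* the hidden copy */
    for (size_t i = 0; i < N; i++)
        scratch[i] += 1.0;
}

/* "Shift register": the same buffer is handed back in each iteration
 * and updated in place; no copy is made. */
static void process_in_place(double *buf)
{
    for (size_t i = 0; i < N; i++)
        buf[i] += 1.0;
}

int main(void)
{
    double *a = calloc(N, sizeof *a);
    double *b = calloc(N, sizeof *b);
    if (!a || !b) return 1;

    for (int iter = 0; iter < 10; iter++) {
        process_copy(a, b);      /* moves ~32 MB every iteration */
        process_in_place(a);     /* touches the data once */
    }
    free(a);
    free(b);
    return 0;
}
```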

  • 2 weeks later...

You can pop up on a loop input tunnel and ask to change it to a shift register, which is great, but asking the compiler to do this for you automatically would create more complaints than compliments. The two behaviors are very different, and if LabVIEW did this "for me" at the wrong time, I'd scream.

On another note, what you are trying to do might be better handled in hardware, such as a DSP card. Take a look at http://www.sheldoninst.com/ - they have DSP cards that also come with LabVIEW VIs to do the DSP programming for you.


Hi Azazal,

Looking at the diagram, you have a lot of coercion dots on arrays. These are a killer for memory allocation.

The output of the arctan is probably a DBL, but the output array is SGL. Use Numeric»Conversion»To Single Precision Float to cast the data to the right type explicitly.
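In C terms the fix looks like the sketch below (an analogy of mine, since LabVIEW diagrams don't paste as text): cast the DBL result to SGL at the source, so the narrowing happens once per element rather than LabVIEW coercing a whole extra array copy at the dot:

```c
#include <math.h>
#include <stdio.h>

/* Narrow the double result of atan() to float explicitly, in one pass. */
static void fill_sgl(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = (float)atan((double)in[i]);
}

int main(void)
{
    float in[4] = { 0.0f, 0.5f, 1.0f, 2.0f };
    float out[4];

    fill_sgl(in, out, 4);
    for (int i = 0; i < 4; i++)
        printf("atan(%g) = %g\n", in[i], out[i]);
    return 0;
}
```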

You should see a huge performance boost.

Neville.


A minor tweak to make your measured process time more accurate: move the output indicator arrays outside the sequence structure, to the right, and fill them after the calculations. On my system that lowered the measured time by an additional 20%. It also more accurately reflects how this algorithm will behave as a subVI.
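The same pattern in text form, as a hedged C sketch of my own (heavy_math() is a hypothetical stand-in for the three-loop algorithm): take both timestamps around the math alone, and update the display only afterwards:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the three-loop algorithm. */
static double heavy_math(void)
{
    double s = 0.0;
    for (long i = 0; i < 10000000; i++)
        s += (double)i * 1e-9;
    return s;
}

int main(void)
{
    clock_t t0 = clock();
    double result = heavy_math();      /* timed region: math only */
    clock_t t1 = clock();

    /* Display happens after the second timestamp, so its cost (the
     * LabVIEW indicator update) is not counted in the measurement. */
    printf("result  = %f\n", result);
    printf("elapsed = %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
```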

