PJM_labview Posted February 1, 2015

Hi everyone. I am trying to figure out the most efficient way to manipulate a somewhat large array of data (up to about 120 MB) that I am currently getting from an FPGA. This data represents an image and needs to be manipulated before it can be displayed to the user. The data needs to be unpacked from U64 to I16, and some of it needs to be chopped (essentially, chop off 10% on each side of the image, so an 800 x 480 image becomes 640 x 480). I have tried several approaches, and the image below shows the one that is quickest, but there might be further optimization that could be done. I look forward to seeing what others can come up with.

Note 01: I am including a link to the benchmark VI; it has a quite large image in it, so the VI is about 40 MB.
Note 02: This is cross-posted on the NI Forum.

Thanks
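The transformation being discussed can be sketched outside LabVIEW. The following is a minimal NumPy illustration, not the poster's actual VI: it assumes each U64 packs four I16 pixels in native byte order (the packing and byte order are assumptions for this sketch; LabVIEW's own Type Cast is big-endian), and uses the 800 x 480 dimensions from the post.

```python
import numpy as np

# Hypothetical dimensions from the post: 800 x 480 source, 10% chopped per side -> 640 x 480
SRC_W, SRC_H = 800, 480
CROP = SRC_W // 10  # 80 pixels removed from each side

# Simulated FPGA frame: each U64 is assumed to pack four I16 pixels
packed = np.zeros(SRC_H * SRC_W // 4, dtype=np.uint64)

# Unpack: reinterpret the U64 buffer as I16 without copying, then shape into a 2-D image
unpacked = packed.view(np.int16).reshape(SRC_H, SRC_W)

# Chop 10% off each side of every row
cropped = unpacked[:, CROP:SRC_W - CROP]

print(cropped.shape)  # (480, 640)
```

The `view` call is the NumPy analogue of a reinterpreting type cast: no data is copied, only the dtype changes, which is why the crop (a copy of the kept region) dominates the cost.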
drjdpowell Posted February 1, 2015

I don't have 2014 and can't open your VI, but my first thoughts are to type cast the U64 array to an I16 array, then do a FOR loop over the number of lines to do the chopping. Why is this data not in I16 in the first place, BTW?
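The suggestion above (type cast once, then loop over lines to chop) can be sketched as follows. This is an illustration in NumPy, not LabVIEW; the sizes and the four-I16-per-U64 packing are assumptions carried over from the original post, and note that LabVIEW's Type Cast primitive is big-endian while this sketch uses native byte order.

```python
import numpy as np

SRC_W, SRC_H, CROP = 800, 480, 80  # hypothetical sizes from the thread
DST_W = SRC_W - 2 * CROP

packed = np.zeros(SRC_H * SRC_W // 4, dtype=np.uint64)

# "Type cast": reinterpret the U64 buffer as a flat I16 array, no copy
flat = packed.view(np.int16)

# FOR loop over lines: copy only the kept middle portion of each row
out = np.empty((SRC_H, DST_W), dtype=np.int16)
for row in range(SRC_H):
    start = row * SRC_W + CROP
    out[row, :] = flat[start:start + DST_W]
```

Pre-allocating `out` once and writing rows in place mirrors the LabVIEW pattern of initializing the destination array and using Replace Array Subset inside the loop.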
ShaunR Posted February 1, 2015

If you can afford to be a frame or two behind, then you might want to split the resizing and unpacking into separate pipelines.
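The pipelining idea above trades latency (a frame or two behind) for throughput: while one frame is being cropped, the next can already be unpacked. A rough thread-and-queue sketch of the two-stage structure, again in NumPy rather than LabVIEW (queue sizes, stage split, and packing are all assumptions for illustration):

```python
import queue
import threading
import numpy as np

SRC_W, SRC_H, CROP = 800, 480, 80  # hypothetical sizes from the thread

# Bounded queues provide backpressure between stages, like LabVIEW queues between loops
unpack_q, crop_q, done_q = queue.Queue(2), queue.Queue(2), queue.Queue()

def unpack_stage():
    # Stage 1: reinterpret U64 frames as 2-D I16 images
    while True:
        frame = unpack_q.get()
        if frame is None:            # shutdown sentinel, forwarded downstream
            crop_q.put(None)
            return
        crop_q.put(frame.view(np.int16).reshape(SRC_H, SRC_W))

def crop_stage():
    # Stage 2: chop the sides and hand the finished frame off
    while True:
        img = crop_q.get()
        if img is None:
            return
        done_q.put(img[:, CROP:SRC_W - CROP].copy())

threads = [threading.Thread(target=s) for s in (unpack_stage, crop_stage)]
for t in threads:
    t.start()
for _ in range(3):
    unpack_q.put(np.zeros(SRC_H * SRC_W // 4, dtype=np.uint64))
unpack_q.put(None)
for t in threads:
    t.join()
print(done_q.qsize())  # 3
```

In LabVIEW the same shape would be two parallel loops connected by a queue; the cost is pipeline latency, the benefit is that the slower stage no longer blocks acquisition.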
PJM_labview (Author) Posted February 1, 2015

drjdpowell wrote:
    I don't have 2014 and can't open your VI, but my first thoughts are to type cast the U64 array to an I16 array then do a FOR loop over the number of lines to do the chopping. Why is this data not in I16 in the first place, BTW?

I down-converted it to 2013 and 2011. Since that post yesterday, I have a slightly faster version that operates on each line (as you suggested) [see image below]. Also, typecasting is not faster than split and interleave.

ShaunR wrote:
    If you can afford to be a frame or two behind, then you might want to split the resizing and unpacking into separate pipelines.

This is an interesting suggestion. I will have to give it more thought.

Thanks
Tomi Maila Posted February 2, 2015

I always love a small challenge. Here's what I did:

1. Calculate the offset of the first pixel of each row in the final image, where the offset is the index of an element in the initial U64 array (this needs to be calculated only once).
2. Initialize a destination image of the correct size.
3. For each row of the destination image, replace the row content with unpacked content of the correct length from the initial U64 array.

I was able to drop the processing time to about half that of the fastest algorithm in the initial VI. Download here

p.s. LAVA refused any uploads...
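The three steps above can be sketched in NumPy (an illustration only, not Tomi's VI; sizes and U64-to-I16 packing are the assumptions used throughout this thread):

```python
import numpy as np

SRC_W, SRC_H, CROP = 800, 480, 80  # hypothetical sizes from the thread
DST_W = SRC_W - 2 * CROP

packed = np.zeros(SRC_H * SRC_W // 4, dtype=np.uint64)
flat = packed.view(np.int16)  # unpacked view of the U64 buffer, no copy

# Step 1: offset of the first kept pixel of each row, computed only once
offsets = np.arange(SRC_H) * SRC_W + CROP

# Step 2: pre-allocate the destination image at its final size
dest = np.empty((SRC_H, DST_W), dtype=np.int16)

# Step 3: replace each destination row with a slice of the correct length
for row, off in enumerate(offsets):
    dest[row, :] = flat[off:off + DST_W]
```

Hoisting the offset calculation out of the loop and allocating the destination once is what keeps the per-row work down to a single contiguous copy.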
hooovahh Posted February 2, 2015

Tomi Maila wrote:
    p.s. LAVA refused any uploads...

This should either be reported or posted in the Site Feedback and Support section. In any case, I just replied to a post and could attach a VI file. If you (or anyone else) are still having problems, feel free to report it or post in that sub-forum.
PJM_labview (Author) Posted February 3, 2015

Just closing the loop on this topic. Below is a screenshot of the fastest solution to date, which does include the buffer allocation (array creation) as part of the code being benchmarked.

Thanks for everyone's help.

PJM
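Including the buffer allocation inside the timed region, as the final benchmark does, matters because the allocation is a real per-frame cost. A rough harness showing the idea (an illustrative Python sketch, not the benchmark VI; sizes and packing are the thread's assumed values):

```python
import time
import numpy as np

SRC_W, SRC_H, CROP = 800, 480, 80  # hypothetical sizes from the thread
DST_W = SRC_W - 2 * CROP

packed = np.zeros(SRC_H * SRC_W // 4, dtype=np.uint64)

def process(frame):
    # Allocation happens inside the timed region, as in the final benchmark
    out = np.empty((SRC_H, DST_W), dtype=np.int16)
    flat = frame.view(np.int16)
    for row in range(SRC_H):
        start = row * SRC_W + CROP
        out[row, :] = flat[start:start + DST_W]
    return out

t0 = time.perf_counter()
for _ in range(10):
    img = process(packed)
elapsed = (time.perf_counter() - t0) / 10
print(f"avg per frame: {elapsed * 1000:.2f} ms")
```

Benchmarking with the allocation excluded would understate the per-frame cost whenever the destination buffer cannot be reused between frames.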