
self-decimating array for storage


torekp


I've developed a "functional global VI" to store a representative sample of raw data throughout the history of a run. It works, but it seems kind of klutzy. Also it requires the calling VI to keep a, whaddaya call, double shift register (where you use the time-before-last-time's value) to decide whether to log a given data set.

[Attached image: post-4616-1170851369.png]

So ... please suggest improvements. Thanks in advance.

Download File: post-4616-1170851012.zip


I can't share my exact code, but I can explain what I did in my project.

I have a high speed data source that is received as strings (TCP Read). I created 2 queues to handle the data; one for full rate logging to disk, and the other for the decimated data. My TCP Read is in a while loop. I use the index of the loop and the quotient/remainder function to decimate the data. When the remainder = 0, then I insert the string into my decimated data queue.

You will need to service the queues or you will run out of memory, since the queue size is unbounded (which is the default!). I have separate loops that flush the queue and write the data to disk. In my case, I use the decimated data to feed my UI and calculate averages for a smaller set of data.

Simplified Example:
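The original attachment was a LabVIEW diagram; as a rough textual stand-in, here is a minimal Python sketch of the same two-queue idea (the decimation factor, names, and string samples are all illustrative, not from the original):

```python
import queue

DECIMATION = 10  # hypothetical decimation factor

def receive_loop(packets, full_q, decimated_q):
    """Stand-in for the TCP Read while loop: every packet goes to the
    full-rate logging queue; every Nth packet (remainder == 0) also
    goes into the decimated-data queue."""
    for i, pkt in enumerate(packets):
        full_q.put(pkt)               # full-rate path (disk logging)
        if i % DECIMATION == 0:       # quotient/remainder decimation
            decimated_q.put(pkt)      # UI / averaging path

full_q = queue.Queue()       # unbounded by default -- must be serviced!
decimated_q = queue.Queue()
receive_loop([f"sample {i}" for i in range(25)], full_q, decimated_q)
# decimated_q now holds "sample 0", "sample 10", "sample 20"
```

Separate consumer loops would then drain `full_q` to disk and `decimated_q` to the UI, as described above.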


LVPunk, your lower loop is not safe. If an error occurs in the queue-flushing node, the loop exits without returning an error, so the user has no way to react.

If you are using LV 8.20 I suggest you create a decimating queue class that would work identically to a queue but that would automatically decimate the data written into the queue.
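To make the suggestion concrete, here is a minimal Python sketch of such a decimating-queue class (the class and method names are hypothetical; an LV 8.20 version would wrap the native queue primitives the same way):

```python
import queue

class DecimatingQueue:
    """Behaves like an ordinary queue on the reader side, but silently
    drops all but every Nth enqueued element -- the decimation happens
    inside the class, invisible to the producer."""

    def __init__(self, factor):
        self._q = queue.Queue()
        self._factor = factor
        self._count = 0  # total elements offered so far

    def enqueue(self, item):
        if self._count % self._factor == 0:
            self._q.put(item)        # keep every Nth element
        self._count += 1             # the rest are discarded

    def dequeue(self):
        return self._q.get_nowait()

q = DecimatingQueue(factor=4)
for i in range(10):
    q.enqueue(i)
# only items 0, 4, 8 were actually stored
```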

Tomi


Your point is well taken, the example was made in ~5 minutes. My actual implementation is multithreaded using VI server with much more complete error handling and notifiers for signalling. I guess I should have spent a little more time.... BTW, I did mention that I couldn't share my actual code, and included (in bold) the label "Simplified Example".

The intent was to suggest an improvement/technique (as requested) by using queues instead of a functional global and processor/memory expensive array manipulation to streamline data acq and provide access to decimated data.

Some of what I've done is outlined in this thread. I really wish LabVIEW queues were lossy :(


QUOTE(LV Punk)

The intent was to suggest an improvement/technique (as requested) by using queues instead of a functional global and processor/memory expensive array manipulation to streamline data acq and provide access to decimated data.

Some of what I've done is outlined in this thread. I really wish LabVIEW queues were lossy :(

I agree. I've just seen this trick in multiple examples here on LAVA and wanted to make the point that nobody should use it in any serious application. This reminds me of an instructive story about the importance of proper error handling. A guy working for Nokia had backup software that backed up his workstation to a network backup server. The IT administrators decided to change the IP address of the backup server; unluckily, nobody told the guy who used it for backing up his workstation. The backup software never gave any error message that the server couldn't be reached. One day the hard drive of the workstation broke down, and when the guy started to recover his system from the backup server, he found there were no backups at all. Luckily none of us makes such stupid applications, right :)


Thanks LV Punk.

I forgot to mention, my VI automatically increases its decimation factor every time it gets "filled". Say you start out by capturing every item ("frequency to write VIG" = 1). After the allotted space is filled, it becomes every 2nd item; then when filled again, every 4th; etc. When the calling VI terminates, the functional global is filled with items number 0, 8, 16, etc. (for example), although there will likely be a few leftovers from the previous decimation (item #28, say, which is divisible by 4 but not by 8).

This lets me keep a tight lid on the allocated memory, and get a representative sample of the history, without having to know in advance how long the top-level VI might run.
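The scheme above can be sketched in a few lines of Python (an illustrative analogue of the functional global, not the actual VI; names are made up). When the buffer fills, the stored samples are thinned to every other one and the write stride doubles:

```python
class SelfDecimatingBuffer:
    """Fixed-capacity history buffer. When full, existing samples are
    thinned to every other one and the write stride doubles
    (1, 2, 4, 8, ...), so memory stays bounded no matter how long
    the run lasts."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stride = 1        # the "frequency to write VIG"
        self.count = 0         # total items offered so far
        self.data = []

    def add(self, value):
        if self.count % self.stride == 0:
            if len(self.data) == self.capacity:
                self.data = self.data[::2]  # drop every other sample
                self.stride *= 2            # and halve the write rate
            self.data.append(value)
        self.count += 1

buf = SelfDecimatingBuffer(capacity=4)
for i in range(16):
    buf.add(i)
# buf.data is now [0, 4, 8, 12] and buf.stride is 4
```

With a capacity that doesn't divide evenly, a few "leftovers" from the previous decimation pass survive, exactly as described above.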


Don't have time to look or think hard, but one little tip to consider:

I've got an app going with high-speed disk streaming that also pumps filtered and decimated data to the user display in pseudo real-time. One tweak I added to both the filtering and decimation steps was to preserve Min/Max info, so as not to lose the kind of outliers that may require operator intervention. I would find the Min/Max before filtering/decimation, then substitute those values over top of the calculated ones at the appropriate locations afterward. This isn't needed in all apps of course, just food for thought.
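One simplified way to realize this tip (a hedged sketch, not Kevin's actual code: here each block keeps a single sample, and the block's most extreme value is substituted for it so outliers survive):

```python
def decimate_preserving_extremes(samples, factor):
    """Decimate by keeping one value per block of `factor` samples,
    but substitute the block's min or max -- whichever deviates most
    from the plainly-decimated value -- so outliers aren't lost."""
    out = []
    for start in range(0, len(samples), factor):
        block = samples[start:start + factor]
        candidate = block[0]            # plain decimation would keep this
        lo, hi = min(block), max(block)
        # keep the extreme farthest from the candidate value
        if abs(hi - candidate) >= abs(candidate - lo):
            out.append(hi)
        else:
            out.append(lo)
    return out

# the outlier 9 at index 2 survives decimation by 4:
# decimate_preserving_extremes([0, 0, 9, 0, 1, 1, 1, 1], 4) -> [9, 1]
```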

BTW, I like your idea of re-decimating your history buffer at higher compression each time it fills up! Simple enough concept, but I hadn't ever thought to do it myself before.

-Kevin P.


QUOTE(torekp @ Feb 7 2007, 02:54 PM)

Thanks LV Punk.

I forgot to mention, my VI automatically increases its decimation factor every time it gets "filled". Say you start out by capturing every item ("frequency to write VIG" = 1). After the allotted space is filled, it becomes every 2nd item; then when filled again, every 4th; etc. When the calling VI terminates, the functional global is filled with items number 0, 8, 16, etc. (for example), although there will likely be a few leftovers from the previous decimation (item #28, say, which is divisible by 4 but not by 8).

This lets me keep a tight lid on the allocated memory, and get a representative sample of the history, without having to know in advance how long the top-level VI might run.

If all your decimation factors happen to be powers of 2, then it will be much cheaper for you to use bit shifting instead of the Quotient & Remainder function for dividing. Keep in mind that dividing an unsigned integer by 2^x is the same as shifting the integer x bits to the right (without the bits wrapping around). If you want the remainder, multiply the result of the bit shift by the power of 2 and subtract that from the original.

Something along the lines of the following, though you could perhaps optimize it further.
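The attached diagram isn't reproducible here, but the arithmetic is easy to sketch (this version takes the remainder with a bit mask, which gives the same result as the shift-multiply-subtract described above):

```python
def quot_rem_pow2(x, k):
    """Quotient and remainder of an unsigned integer x divided by 2**k,
    using only shifts and a mask instead of a divide."""
    q = x >> k               # equivalent to x // 2**k
    r = x & ((1 << k) - 1)   # equivalent to x %  2**k
    return q, r

# quot_rem_pow2(28, 3) -> (3, 4), i.e. 28 // 8 == 3 and 28 % 8 == 4
```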

4 months later...

QUOTE(ragglefrock @ Feb 13 2007, 04:59 AM)

If all your decimation factors happen to be powers of 2, then it will be much cheaper for you to use bit shifting instead of the Quotient & Remainder function for dividing. Keep in mind that dividing an unsigned integer by 2^x is the same as shifting the integer x bits to the right (without the bits wrapping around). If you want the remainder, multiply the result of the bit shift by the power of 2 and subtract that from the original.

Something along the lines of the following, though you could perhaps optimize it further.

After making a better version of my idea for the Code Repository, I tried out your suggestion, and found that the native Quotient & Remainder function is faster. Attached, my slightly modified version of your VI, plus the test VI.


QUOTE(torekp @ Jun 28 2007, 06:30 AM)

Good catch. Actually most of the time difference you were seeing was the overhead of a subVI call. Changing the subVI's execution to subroutine reduces the time taken from 1300ms or so down to around 80, and with some further optimizations you can get it down to around 48ms. Copying and pasting the code itself directly into the loop reduces it further to around 16ms. Still, the quotient and remainder function only takes around 5ms.

The remaining difference appears to simply be buffer allocations. My "optimized" algorithm had like 5 or 6, while the Quotient & Remainder function purports to only need 2.

QUOTE(ragglefrock @ Jun 28 2007, 09:48 PM)

Good catch. Actually most of the time difference you were seeing was the overhead of a subVI call. Changing the subVI's execution to subroutine reduces the time taken from 1300ms or so down to around 80, and with some further optimizations you can get it down to around 48ms. Copying and pasting the code itself directly into the loop reduces it further to around 16ms. Still, the quotient and remainder function only takes around 5ms.

The remaining difference appears to simply be buffer allocations. My "optimized" algorithm had like 5 or 6, while the Quotient & Remainder function purports to only need 2.

OK, further update. I can get my homemade code attached below to run exactly as fast as Quotient & Remainder. Sometimes a little faster (10-15 ms faster over 100,000,000 iterations), but that's probably just timing jitter. Here's what I did:

1. Turn off debugging. This was the last piece that allowed my code to catch up in speed.

2. My calculation for the Quotient was okay in my code, but the code for the modulus was very slow. I was using multiplication, which is fairly expensive (not on the order of division, but more than addition and bitshifting).

Here's my code below for you to examine. The performance is really about the same. So sorry for claiming I could do better, but this isn't that bad.

