Faster Lookup Table


Posted

I recently had one of those "why didn't I think of that" moments. It didn't really surprise me however, since the code I was perusing at the time was written by a preeminent member of the LabVIEW R&D team.

Yesterday, I gave a quick talk at a local LabVIEW User Group meeting on the new lvOOP stuff in 8.20. Since it is such a compelling example, I decided to flash up a few block diagrams from Christina Rogers' Getting Started window.

As I was explaining (rather quickly) how dynamic dispatching is used to fire the proper handler VI to react to a click on a link, I noticed something very cool. When the Mouse Down event fires on one of the links, there is an event case to catch it (of course). That case handles the Mouse Down for all the clickable text links. The object data for the clicked link is obtained by using the clicked control's Label.Text property to Get a Variant Attribute. The value of the Attribute is the link's object data (custom wire)!

post-123-1157663013.png?width=400

After thinking for a bit, it occurs to me that we have a wonderful implementation of a linked list or lookup table. There's even a comment in Christina's code that says "Store the name/value pairs in a variant's attributes for faster look up than an array."

post-123-1157663022.png?width=400

Perhaps many of you LAVA folks out there already use this pattern in your projects. I don't believe it is anything new, although its application to lvOOP certainly is.
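In text form, the pattern is essentially a dictionary keyed by the clicked control's label. Here is a minimal sketch in Python terms, where a `dict` stands in for the variant and its attributes; the class and names are hypothetical, not from Christina's actual code:

```python
# Hypothetical sketch: a dict plays the role of the variant, its
# entries the role of variant attributes keyed by Label.Text.

class Link:
    """Stands in for the lvOOP link object (hypothetical)."""
    def __init__(self, target):
        self.target = target

    def handle_click(self):
        # In LabVIEW this would be the dynamically dispatched handler VI.
        return f"navigating to {self.target}"

# Build the lookup once: label text -> link object
links = {
    "Open Project": Link("project_dialog"),
    "New VI": Link("blank_vi"),
}

def on_mouse_down(label_text):
    # Equivalent of Get Variant Attribute keyed on the control's Label.Text
    return links[label_text].handle_click()
```

The key idea is the same as in the Getting Started window: one event case serves every link, and the label string routes the click to the right object.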

A big Thank You goes out to Christina Rogers and the rest of the LabVIEW R&D team. LabVIEW 8.20 is a most welcome addition to the family tree!

Posted
Perhaps many of you LAVA folks out there already use this pattern in your projects. I don't believe it is anything new, although its application to lvOOP certainly is.

It isn't new... but you probably don't want to use it before LV8.0.

Variant attributes have been stored in a red/black tree, giving O(log n) lookup speeds, for as far back in the code as I have checked (LV 6.1); I believe that is how they were originally implemented. And that's good. But until LV 8.0, a huge block of memory was reserved for each attribute, which made this name-lookup database impractical for anything more than a couple hundred attributes. With the revisions in LV 8.0, variants have improved dramatically in speed (I've never benchmarked it, but it's a LOT faster), and the memory overhead has dropped to almost exactly the amount of data you're storing. Variants were tweaked with this use case specifically in mind, making them an incredibly efficient lookup tree.
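To illustrate what O(log n) lookup over sorted names looks like, here is a small sketch using binary search over a sorted key list. This is a stand-in for the red/black tree described above, not LabVIEW's actual implementation (Python's standard library has no balanced tree, so `bisect` provides the same logarithmic lookup bound):

```python
# Sketch of O(log n) name lookup over sorted keys, standing in for
# the red/black tree used for variant attributes.
import bisect

keys = []    # sorted attribute names
values = []  # values parallel to keys

def set_attribute(name, value):
    i = bisect.bisect_left(keys, name)
    if i < len(keys) and keys[i] == name:
        values[i] = value          # overwrite an existing attribute
    else:
        keys.insert(i, name)       # note: list insert is O(n) here,
        values.insert(i, value)    # unlike a real balanced tree

def get_attribute(name):
    i = bisect.bisect_left(keys, name)   # O(log n) binary search
    if i < len(keys) and keys[i] == name:
        return values[i]
    return None                    # attribute not found

set_attribute("gain", 2.5)
set_attribute("offset", -1.0)
```

A real red/black tree also inserts in O(log n); the list-based sketch only matches the lookup bound.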

Posted

To me this looks very much like an associative array with a string as the key. Why not build this mechanism into the ordinary array functions? That is where it belongs. Or perhaps better, add dedicated associative-array functions.

Also, if you use Flatten To String to generate the key/name, it becomes in all respects an associative array that can take anything as the key.
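The "flatten the key" trick can be sketched like this: serialize any key, however structured, to a canonical string, and use that string in the lookup. Here JSON with sorted keys stands in for LabVIEW's Flatten To String; the function names are illustrative:

```python
# Sketch: serialize arbitrary keys to a canonical string so the
# associative array can accept any key type, not just strings.
import json

table = {}

def flatten(key):
    # json.dumps with sort_keys=True yields the same string for
    # equivalent keys, standing in for Flatten To String.
    return json.dumps(key, sort_keys=True)

def store(key, value):
    table[flatten(key)] = value

def lookup(key):
    return table[flatten(key)]

# A compound key: a cluster-like dict of channel settings
store({"channel": 3, "range": "10V"}, 0.125)
```

Because the serialization is canonical, the same logical key always flattens to the same string regardless of field order.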

Maybe a good OpenG Project :question:

:2cents:

Posted
To me this looks pretty much as associative Arrays with string as the key.

If you use Flatten To String, then you can use anything as a key. I agree that this should be implemented as an associative array. When variants are stored to disk, the attributes are lost.

Posted
With the revisions in LV 8.0, variants have improved dramatically in speed (I've never benchmarked it, but it's a LOT faster)

We benchmarked some simple To Variant and From Variant operations for float, boolean, and string types, and the improvement is roughly 50x (LV 7.1.1 compared with LV 8.2).

Posted
When variants are stored to disk, the attributes are lost.

That is not true. There is a field in the flattened variant string that tells how many variant attributes there are, followed by the flattened attributes.
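A byte-level sketch of that layout might look like the following. The field sizes and ordering here are illustrative only, not LabVIEW's actual flatten format; the point is that a count field followed by flattened name/value pairs lets the attributes round-trip through a string:

```python
# Hypothetical flattened-variant-style layout: the data, then an
# attribute count, then each attribute as length-prefixed name/value.
import struct

def flatten_with_attrs(data: bytes, attrs: dict) -> bytes:
    out = struct.pack(">I", len(data)) + data
    out += struct.pack(">I", len(attrs))              # attribute count
    for name, value in attrs.items():
        out += struct.pack(">I", len(name)) + name.encode()
        out += struct.pack(">I", len(value)) + value
    return out

def unflatten_attrs(buf: bytes) -> dict:
    pos = 4 + struct.unpack_from(">I", buf)[0]        # skip over the data
    count, = struct.unpack_from(">I", buf, pos); pos += 4
    attrs = {}
    for _ in range(count):
        n, = struct.unpack_from(">I", buf, pos); pos += 4
        name = buf[pos:pos + n].decode(); pos += n
        n, = struct.unpack_from(">I", buf, pos); pos += 4
        attrs[name] = buf[pos:pos + n]; pos += n
    return attrs
```

Reading the count field back out is exactly how an unflatten routine knows how many attribute pairs follow the data.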

Posted
We benchmarked some simple to variant and from variant operations for float,bool,string and the improvement is about 50 times as fast (lv7.1.1 compared with lv8.2)

Just a question:

Does anybody know of the speed improvement (or performance hit) when using variants with LV-RT 8.x?

Neville.

Posted
[...] But until LV 8.0, a huge block of memory was reserved for each attribute, which made this name-lookup database impractical for anything more than a couple hundred attributes. With the revisions in LV 8.0, variants have improved dramatically in speed (I've never benchmarked it, but it's a LOT faster [...]

Would it be faster than the following simple array trick? I use this a lot:

post-4616-1158695454.png?width=400

post-4616-1158695440.png?width=400

...but my real-life LUTs are massive and performance is everything. Note that I keep the independent variable X in a separate 1D array and forget about interpolating.

Posted
Would it be faster than the following simple array trick?

...but my real-life LUTs are massive and performance is everything. Note that I keep the independent variable X in a separate 1D array and forget about interpolating.

You're probably faster for lookup times than the variant, but you'll pay a bigger penalty at insert time. If you use the Search 1D Array prim to find the insert position, that's an O(n) penalty (though you might have written your own binary search on sorted data to get this down to O(log n)). When you add a value to the array, possibly in the middle, you have to reallocate the array to make room for the new element. That's a lot of data movement. Compare that with the O(log n) time needed to add a variant attribute to the tree with zero bulk data movement. The same comparison applies to removing an element from the lookup table.

So if your lookup table is pretty stable from the time it is built until you're done with it, your method probably has advantages. If you're adding entries and deleting entries a lot, then I wager the variant attrib is going to beat you hands down, even including the expense of converting your doubles to strings to do the key lookup.
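The insert-cost trade-off can be demonstrated with a rough benchmark. Here a sorted Python list stands in for the sorted-array LUT and a `dict` stands in for the variant's attribute tree (the dict is a hash table, not a tree, but like the tree it inserts without shifting existing entries); the numbers are illustrative, not LabVIEW measurements:

```python
# Rough sketch of the trade-off: inserting into a sorted array shifts
# O(n) elements per insert, while a tree/hash table does not.
import bisect
import time

N = 20_000

# Sorted-array table: each insert keeps the array sorted by shifting
# elements.  Keys arrive in descending order: the worst case, since
# every insert lands at the front of the array.
arr = []
t0 = time.perf_counter()
for k in range(N):
    bisect.insort(arr, N - k)
array_time = time.perf_counter() - t0

# Dict table: amortized O(1) insert, no element shifting.
d = {}
t0 = time.perf_counter()
for k in range(N):
    d[N - k] = k
dict_time = time.perf_counter() - t0
```

With descending keys the array build is quadratic overall, while the dict build stays linear, which is the "hands down" gap described above for insert-heavy workloads.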

  • 2 years later...
Posted

QUOTE (jzoller @ Dec 14 2008, 03:48 AM)

In very general terms, it's a description of how long an algorithm takes to run to completion.
More specifically, it describes how the time needed to complete the algorithm grows as N gets larger. So if I have N items in my array, saying an algorithm takes O(N) means the time grows linearly: double the number of items, double the amount of time. If it grows as O(N * N), aka N squared, then doubling the number of items roughly quadruples the time.

In general: any polynomial, even a large one like N to the 10th, is considered a good "solvable" algorithm. Any exponential (2 to the Nth, N to the Nth, or worse) is considered a bad "unsolvable" algorithm, since it would generally take longer to compute than we have time to wait for any non-trivial value of N.
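Those growth rates can be made concrete by counting abstract "steps" instead of wall-clock time; this is a toy illustration, not a profiler:

```python
# Toy step counters illustrating the growth rates discussed above.

def linear_steps(n):
    return n                 # O(n): double n, double the steps

def quadratic_steps(n):
    return n * n             # O(n^2): double n, quadruple the steps

def log_steps(n):
    steps = 0                # O(log n): double n, add only one step
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

Doubling N from 1000 to 2000 doubles the linear count, quadruples the quadratic count, and adds exactly one to the logarithmic count, which is why the variant's O(log n) tree scales so gracefully.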
