Dan Press Posted September 7, 2006

I recently had one of those "why didn't I think of that" moments. It didn't really surprise me, however, since the code I was perusing at the time was written by a preeminent member of the LabVIEW R&D team. Yesterday, I gave a quick talk at a local LabVIEW User Group meeting on the new lvOOP features in 8.20. Since it is such a compelling example, I decided to flash up a few block diagrams from Christina Rogers' Getting Started window. As I was explaining (rather quickly) how dynamic dispatching is used to fire the proper handler VI in reaction to a click on a link, I noticed something very cool. When the Mouse Down event fires on one of the links, there is an event case to catch it (of course). That case handles the Mouse Down for all the clickable text links. The object data for the clicked link is obtained by using the clicked control's Label.Text property as the key to Get Variant Attribute. The value of the attribute is the link's object data (custom wire)! After thinking for a bit, it occurred to me that we have a wonderful implementation of an associative lookup table. There's even a comment in Christina's code that says "Store the name/value pairs in a variant's attributes for faster look up than an array." Perhaps many of you LAVA folks out there already use that pattern in your projects. I do not believe that this is anything new, although its application to lvOOP certainly is. A big Thank You goes out to Christina Rogers and the rest of the LabVIEW R&D team. LabVIEW 8.20 is a most welcome addition to the family tree!
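To make the dispatch pattern concrete for readers outside LabVIEW: it behaves like an associative map from a link's label text to an object whose dynamically dispatched method handles the click. A minimal Python sketch of the idea — all class and label names here are illustrative, not taken from Christina's actual code:

```python
class LinkHandler:
    """Base class; subclasses override handle(), mimicking dynamic dispatch."""
    def handle(self):
        raise NotImplementedError

class OpenExampleHandler(LinkHandler):
    def handle(self):
        return "opening example finder"

class NewVIHandler(LinkHandler):
    def handle(self):
        return "creating new VI"

# The "variant with attributes": label text -> handler object.
attributes = {
    "Find Examples": OpenExampleHandler(),
    "New VI": NewVIHandler(),
}

def on_mouse_down(label_text):
    # Equivalent of Get Variant Attribute keyed by the control's Label.Text
    handler = attributes[label_text]
    # The correct subclass method fires without any case structure on type.
    return handler.handle()
```

Here the dict plays the role of the variant's attribute store, and `handler.handle()` stands in for the dynamically dispatched handler VI.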
Aristos Queue Posted September 8, 2006

Perhaps many of you LAVA folks out there already use that pattern in your projects. I do not believe that this is anything new, although its application to lvOOP certainly is.

It isn't new... but you probably don't want to use it before LV 8.0. The variant attributes have been stored using a red/black tree for O(log n) lookup speeds for as far back in the code as I have checked (LV 6.1); I think that's how they were originally implemented. And that's good. But until LV 8.0, a huge block of memory was reserved for each attribute, which made this name-lookup database infeasible for anything more than a couple hundred attributes. With the revisions of LV 8.0, the variants have really improved in speed (I've never benchmarked it, but it's A LOT faster), and the memory overhead has decreased to almost exactly the amount of data you're storing. Variants were tweaked with this use case specifically in mind, to make them an incredibly efficient lookup tree.
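For readers curious what O(log n) buys here: a balanced tree like the red/black tree described above needs only about log2(n) comparisons per lookup. A quick Python sketch that counts the probes a binary search makes over sorted keys — purely to illustrate the growth rate, not LabVIEW's actual implementation:

```python
def probes_to_find(sorted_keys, target):
    """Count comparisons a binary search needs to locate target."""
    lo, hi, probes = 0, len(sorted_keys), 0
    while lo < hi:
        mid = (lo + hi) // 2
        probes += 1
        if sorted_keys[mid] < target:
            lo = mid + 1
        elif sorted_keys[mid] > target:
            hi = mid
        else:
            return probes  # found it
    return probes  # not present; same number of probes either way
```

With 1024 keys a lookup never needs more than about 11 probes; with a million keys, about 20. That flat growth is why a couple hundred — or a couple hundred thousand — attributes stay cheap to search.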
bsvingen Posted September 8, 2006

To me this looks pretty much like associative arrays with a string as the key. Why not build this mechanism into the ordinary array functions? That is where it belongs. Or perhaps better, make some dedicated associative array functions. Also, if you use a flattened string as the key/name, it becomes, in all respects, an associative array that can take anything as the key. Maybe a good OpenG project?
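The flattened-key idea can be sketched outside LabVIEW too. A hypothetical Python version, where pickle stands in for Flatten To String (this assumes keys whose serialized form is stable, e.g. simple immutable values; the class name is illustrative):

```python
import pickle

class FlatKeyMap:
    """Associative array keyed by a flattened ("serialized") value."""

    def __init__(self):
        self._store = {}

    def _flatten(self, key):
        # Any picklable value becomes a byte-string key,
        # just as Flatten To String turns any LabVIEW type into a string.
        return pickle.dumps(key)

    def set(self, key, value):
        self._store[self._flatten(key)] = value

    def get(self, key):
        return self._store[self._flatten(key)]
```

Usage: `m.set((1, "a"), 42)` followed by `m.get((1, "a"))` returns 42 — the composite tuple acts as a single key because only its flattened form is ever compared.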
LAVA 1.0 Content Posted September 8, 2006

To me this looks pretty much like associative arrays with a string as the key.

If you use Flatten To String, then you can use anything as a key. I agree that this should be implemented as an associative array. When variants are stored to disk, the attributes are lost.
Albert Geven Posted September 8, 2006

With the revisions of LV 8.0, the variants have really improved in speed (I've never benchmarked it, but it's A LOT faster)

We benchmarked some simple To Variant and From Variant operations for float, bool, and string, and the improvement is about a factor of 50 (LV 7.1.1 compared with LV 8.2).
jpdrolet Posted September 8, 2006

When variants are stored to disk, the attributes are lost.

That is not true. There is a field in the flattened variant string that tells how many variant attributes there are, followed by the flattened attributes themselves.
Aristos Queue Posted September 8, 2006

When variants are stored to disk, the attributes are lost.

Attributes are not lost. See the attached VI (written in LV 8.2, but I am very certain this same behavior applies all the way back).

Download File: post-5877-1157741008.vi
Neville D Posted September 12, 2006

We benchmarked some simple To Variant and From Variant operations for float, bool, and string, and the improvement is about a factor of 50 (LV 7.1.1 compared with LV 8.2).

Just a question: does anybody know of the speed improvement or performance lag when using variants with LV-RT 8.x?

Neville.
torekp Posted September 19, 2006

[...] But until LV 8.0, a huge block of memory was reserved for each attribute, which made this name-lookup database infeasible for anything more than a couple hundred attributes. With the revisions of LV 8.0, the variants have really improved in speed (I've never benchmarked it, but it's A LOT faster) [...]

Would it be faster than the following simple array trick? I use this a lot: ...but my real-life LUTs are massive and performance is everything. Note that I already separate the independent variable X into a separate 1D array and forget about interpolating.
Aristos Queue Posted September 20, 2006

Would it be faster than the following simple array trick? ...but my real-life LUTs are massive and performance is everything. Note that I already separate the independent variable X into a separate 1D array and forget about interpolating.

You're probably faster than the variant for lookups, but you'll pay a bigger penalty at insert time. If you use the Search 1D Array prim to find the insert position, that's an O(n) penalty (though you might have written your own binary search on sorted data to get this down to O(log n)). When you add a value to the array, possibly in the middle, you're going to reallocate the array to make room for the new element. That's a lot of data movement. Compare that with the O(log n) time needed to add a variant attribute to the tree, with zero data movement. The same comparison applies for removing an element from the lookup table. So if your lookup table is pretty stable from the time it is built until you're done with it, your method probably has advantages. If you're adding and deleting entries a lot, then I wager the variant attribute is going to beat you hands down, even including the expense of converting your doubles to strings to do the key lookup.
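The trade-off described above can be sketched in Python, with `bisect` standing in for a binary search on the sorted array and a dict standing in for the variant attribute store (Python's dict is a hash table rather than a red/black tree, but like the tree it avoids shifting existing elements on insert). Function names are illustrative:

```python
import bisect

# Strategy 1: parallel sorted arrays (the "simple array trick").
keys, values = [], []

def array_insert(k, v):
    i = bisect.bisect_left(keys, k)  # O(log n) binary search for position...
    keys.insert(i, k)                # ...but O(n) element shifting on insert
    values.insert(i, v)

def array_lookup(k):
    i = bisect.bisect_left(keys, k)  # O(log n), contiguous and cache-friendly
    return values[i]

# Strategy 2: an associative store (modeling variant attributes).
# Inserts and deletes never shift existing entries.
attrs = {}

def attr_insert(k, v):
    attrs[k] = v

def attr_lookup(k):
    return attrs[k]
```

With a stable table, Strategy 1's tight, contiguous lookups win; with heavy churn, the per-insert shifting in `array_insert` is exactly the O(n) data movement the post describes, and Strategy 2 pulls ahead.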
Variant Posted December 15, 2008

Can anyone explain this "O(log n)"? How does it work? Really curious.
jzoller Posted December 15, 2008

QUOTE (Variant @ Dec 13 2008, 10:13 PM) Can anyone explain this "O(log n)"? How does it work? Really curious.

http://en.wikipedia.org/wiki/Big_O_notation

In very general terms, it's a description of how long it takes for an algorithm to run to completion.
Aristos Queue Posted December 15, 2008

QUOTE (jzoller @ Dec 14 2008, 03:48 AM) In very general terms, it's a description of how long it takes for an algorithm to run to completion.

More specifically, it describes how the time needed to complete the algorithm grows as N gets larger. So if I have N items in my array, saying an algorithm takes O(N) means that the time grows linearly: double the number of items, double the amount of time. If it grows as O(N * N), aka N squared, then doubling the number of items in my array roughly quadruples the time. In general: any polynomial, even a large one like N to the 10th, is considered a good, "solvable" algorithm. Any exponential (2 to the Nth, N to the Nth, or worse) is considered a bad, "unsolvable" algorithm, since those would generally take longer to compute than we have time to wait for any non-trivial value of N.
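A tiny sketch that makes those growth rates concrete by counting steps instead of measuring wall-clock time: doubling N doubles the work of a linear algorithm but quadruples the work of a quadratic one.

```python
def linear_steps(n):
    """An O(N) loop: one unit of work per item."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    """An O(N * N) nested loop: every item paired with every item."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps
```

Going from n = 100 to n = 200, `linear_steps` goes from 100 to 200 (2x), while `quadratic_steps` goes from 10,000 to 40,000 (4x) — which is why the shape of the curve matters far more than the speed of any single step once N gets large.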