drjdpowell Posted December 13, 2011
Hello, I've been using Variant Attributes to store and look up values in an efficient way. In particular, I've been storing complex objects, such as the (simplified) example below, where I post messages to "Observers" of those messages. My question is: is this the most efficient way to do this? In particular, I get one attribute, modify it, and then set it back into the variant: does this involve copying the entire cluster of objects, or does the LabVIEW compiler identify this as an operation that can be done "in place"? -- James
Jarrod S Posted December 13, 2011
This will cause a copy to be made of the data. The LabVIEW compiler cannot recognize Get Variant Attribute and Set Variant Attribute pairs as being in-place. If you are worried about performance, consider storing a Data Value Reference (DVR) to the contained object in your variant look-up, so that the only thing copied is a small reference value.
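Since LabVIEW is graphical, the distinction Jarrod describes can't be shown as LabVIEW text, but the copy-versus-reference trade-off is the same one you'd see in any language. A rough Python analogy (the `Ref` class is a hypothetical stand-in for a DVR, not a real LabVIEW construct):

```python
import copy

# Value semantics: the get/modify/set round-trip on a variant attribute
# copies the whole stored cluster, analogous to this deep copy.
attrs = {"observer_list": {"observers": ["A", "B"], "history": list(range(1000))}}

big = copy.deepcopy(attrs["observer_list"])   # "Get Variant Attribute": full copy
big["observers"].append("C")                  # modify the copy
attrs["observer_list"] = big                  # "Set Variant Attribute": store it back

# Reference semantics: store a small handle instead; only the handle
# is copied on look-up, and the data is modified in place.
class Ref:
    """Hypothetical stand-in for a LabVIEW Data Value Reference (DVR)."""
    def __init__(self, value):
        self.value = value

attrs_by_ref = {"observer_list": Ref({"observers": ["A", "B"]})}
attrs_by_ref["observer_list"].value["observers"].append("C")  # in-place update
```

In the first case the cost of every update scales with the size of the stored data (including the large `history` payload); in the second it does not.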
asbo Posted December 13, 2011
This will cause a copy to be made of the data. The LabVIEW compiler cannot recognize Get Variant Attribute and Set Variant Attribute pairs as being in-place.
Can the In Place Element structure force the compiler to see this, or does in-placeness only apply to operations that can be tied to the structure?
Tim_S Posted December 13, 2011
It may not be the best way, depending on what you're trying to do, but I've stored an index into an array in a variant instead of the data itself. This wound up being efficient for me, as I needed to store all instances of a piece of data but only needed to access the latest until the end of the test.
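Tim's scheme keeps the bulky data out of the look-up entirely: the variant attribute holds only a small index, and the data itself lives in an append-only array. A minimal Python sketch of the idea (names like `record` and `latest` are illustrative, not anything from the thread):

```python
data_log = []       # every instance of the data is appended here, never removed
latest_index = {}   # key -> index of the most recent instance (the "variant attribute")

def record(key, value):
    """Append a new instance and point the look-up at it."""
    data_log.append(value)
    latest_index[key] = len(data_log) - 1

def latest(key):
    """Fetch the most recent instance; only a small index is ever copied."""
    return data_log[latest_index[key]]

record("temperature", 21.5)
record("temperature", 22.1)   # older instance stays in data_log for end-of-test use
```

All instances remain available for post-processing, while routine look-ups touch only the index.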
drjdpowell Posted December 13, 2011 Author
Thanks; I was worried that copies were made. I was considering storing an index into an array, as Tim suggests, but I might wait until I upgrade (I'm still on 8.6) and go with the DVR, as the code would be simpler.
mje Posted December 14, 2011
As Jarrod confirmed, the attribute operations always generate a copy. I'll refer to an old Idea Exchange post of mine which I would still love to see implemented. For reasons I've never gotten around to posting, I doubt this would be doable with objects, but I expect it should be possible for any statically sized piece of data.
It may not be the best way, depending on what you're trying to do, but I've stored an index to an array in a variant instead of the data. This wound up being efficient for me as I needed to store all instances of a piece of data, but only needed to access the latest until end of test.
If you do this, just be sure your array is relatively static. Otherwise, be aware that any time savings you hope to gain via associative look-ups can easily be lost by having to operate on the array: reallocating the entire data space as the array size changes, shifting the array when removing elements, and so on. Basically, you need to weigh the cost of manipulating the entire array whenever the size of the data set changes against the cost of copying single elements. Unless your array hardly ever changes, I think you'd be better off with the plain old variant and living with a single copy on each operation. DVRs might help, but keep in mind that the synchronization overhead of a DVR isn't free, so I wouldn't bother with one unless you can prove to yourself that your data copies are costing you.
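The hidden cost mje warns about can be made concrete: removing an element from the middle of the backing array shifts every element after it, and any stored indices past the removal point become stale and must be repaired. A small Python illustration of that bookkeeping (hypothetical data, continuing the index-in-a-look-up analogy):

```python
data_log = [10, 20, 30, 40]
latest_index = {"a": 1, "b": 3}   # look-up entries pointing into data_log

# Deleting from the middle shifts all following elements down by one,
# so every stored index greater than the removed position must be fixed
# up, and entries pointing at the removed element must be dropped.
removed = 1
del data_log[removed]
latest_index = {k: (i - 1 if i > removed else i)
                for k, i in latest_index.items()
                if i != removed}
```

This per-deletion repair pass is exactly the kind of whole-array work that can swamp the savings from the cheap look-up, which is why the scheme pays off only when the array is append-mostly.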
drjdpowell Posted December 14, 2011 Author
As Jarrod confirmed, the attribute operations always generate a copy. I'll refer to an old Idea Exchange post of mine which I would still love to see implemented.
Kudoed. I was going to make the same suggestion if you hadn't already. I don't see why it wouldn't work with objects, though.
If you do this, just be sure your array is relatively static. Otherwise, be aware that any time savings you hope to gain via associative look-ups can easily be lost by having to operate on the array: reallocating the entire data space as the array size changes, shifting the array when removing elements, and so on. Basically, you need to weigh the cost of manipulating the entire array whenever the size of the data set changes against the cost of copying single elements. Unless your array hardly ever changes, I think you'd be better off with the plain old variant and living with a single copy on each operation.
My array is relatively static (rare additions, no deletions), but perhaps I'll live with the copies for now, until I get to the point where I can do comparative testing.