Aristos Queue

LV2016: New In-Place Struct border nodes for Variant Attribute access


Fantastic! Some of us have been waiting for this for a while.

VASTLY is not an overstatement.


Half of me: yay! finally! this is great! :wub:

Other half: yet another IPE thing I have to do to get good performance :(


14 hours ago, Aristos Queue said:

I want to let LAVA know about a feature of LV 2016 that may not get much press. The In-Place Element structure has a new pair of border nodes to let you access the attributes of a variant without copying them out of the variant. This will VASTLY improve the performance of tools that use variant attributes as lookup tables.

I strongly encourage everyone who works on this to look at the shipping example:


labview\examples\Performance\Variant Attribute Lookup Table\

 

The Current Value Table is one tool that used to store the actual values of its key-value pairs as attributes, but ended up storing only type-specific array indexes in the attributes to (vastly) improve its performance. I wonder how much of that difference (if any) could be removed with the new IPE feature... Has anyone checked that already? I've been evaluating different dictionary solutions lately and this might change the picture slightly. I'll download 2016 and do some testing later this week.
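To make the two storage strategies concrete, here is a rough sketch (a Python stand-in, since LabVIEW code is graphical; all names are illustrative, not the actual CVT API):

```python
# Hypothetical sketch of the two storage strategies described above.

# Strategy A: store each value directly as a "variant attribute"
# (modeled here as a plain dict entry).
direct = {}
direct["Temperature"] = 21.5

# Strategy B (the CVT approach described above): the attribute holds only
# an index into a type-specific array where the values actually live, so
# updates replace an array element instead of rewriting the attribute.
index_of = {}     # tag name -> index (stands in for the attribute table)
dbl_values = []   # type-specific value storage (DBL array)

def register(tag, value):
    index_of[tag] = len(dbl_values)
    dbl_values.append(value)

def read(tag):
    return dbl_values[index_of[tag]]

register("Temperature", 21.5)
print(read("Temperature"))  # -> 21.5
```

With the pre-2016 API, strategy B avoided copying large values in and out of the variant on every access; the question above is how much of that advantage the new in-place border nodes erase.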

Edited by Mads


The CVT is extremely slow (is that an overstatement?) and the variant attribute lookup table will probably be on the order of 100x faster (no test at the moment).  I've been using my Variant Repository (which uses variant attributes) for a few years now for messaging APIs because I figured it was the fastest solution.  This improvement just makes it even faster, and I'll likely update my Variant Repository to use the IPE instead.


Let's not get carried away: I doubt the lack of in-placeness was the limiting factor for most use cases, which is probably why the issue didn't see much daylight.

Don't get me wrong, I've run across a few situations where copies were performance killers (which is how the idea exchange post originated), but they've been few and far between. These involve working with nested structures, or cases where the attributes are just plain big. Most of the time the copies are small, and there are other operations on the retrieved values that take far more time.

That said, the syntax alone is useful, so I'll likely end up abusing the IPE a lot for this even when it's not needed. The extra nanoseconds saved in otherwise slow code will be incidental.

Edited by mje

On 8/3/2016 at 8:14 AM, Mads said:

The Current Value Table is one tool that used to store the actual values of its key-value pairs as attributes, but ended up storing only type-specific array indexes in the attributes to (vastly) improve its performance. I wonder how much of that difference (if any) could be removed with the new IPE feature... Has anyone checked that already? I've been evaluating different dictionary solutions lately and this might change the picture slightly. I'll download 2016 and do some testing later this week.

I've run some preliminary testing of 2016 and the variant attribute IPE for key-value pair lookups now, and have compared it with 2016 and 2015 without the IPE:

Without the IPE, 2016 is equal in speed or negligibly slower than 2015 (so there is no instant free lunch in just upgrading to 2016).

With the IPE, dictionary read and write operations are considerably faster; on my machine they were 1.7x faster than without it / than in LV 2015. :) This was with only an array index stored in the attribute (the value of each key-value pair stored in a separate array).

I also did a quick test where I stored a DBL directly in the attribute, which turned out to be even faster (1.4x) for writes, and equal for reads. That's probably not the case for more complex data types, but the gap will definitely be smaller than before. The CVT, for example, will then in most cases be better off using attributes to store the actual values instead of keeping separate arrays for them. That would also allow it to be made more flexible when it comes to adding or removing tags.

 

Edited by Mads

2 hours ago, Mads said:

I also did a quick test where I stored a DBL directly in the attribute, which turned out to be even faster (1.4x) for writes, and equal for reads. That's probably not the case for more complex data types, but the gap will definitely be smaller than before. The CVT, for example, will then in most cases be better off using attributes to store the actual values instead of keeping separate arrays for them. That would also allow it to be made more flexible when it comes to adding or removing tags.

This breaks down if you want to access N items which is the more common use. Accessing N variant attributes is what, N*usec every time? Accessing N array elements is N*usec once then N*nanosec ever after.

We already went through this on the other side so just to summarize for anyone who might care: CVT used to be broadly applied to a lot of dictionary-type needs. Since it was created like a decade ago, there are a lot more cool libraries out there which are better (frequently much better) for the dictionary use case. People are quickly whittling down the use cases for the CVT, which is great. The fewer FGVs in the world the better :)
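The amortization argument above can be sketched roughly (a hypothetical Python stand-in for graphical LabVIEW code; names are illustrative):

```python
# Hypothetical sketch of the point above: resolve name -> index once, and
# every access after that is a plain array read instead of a per-name
# dictionary (variant attribute) lookup.
lookup = {"ch%d" % i: i for i in range(100)}   # name -> index table
values = [float(i) for i in range(100)]        # the actual data array

# Per-cycle dictionary lookups: pay ~N hash lookups every cycle.
def read_by_name(names):
    return [values[lookup[n]] for n in names]

# One-time resolution: pay the N lookups once, then index directly.
def resolve(names):
    return [lookup[n] for n in names]

names = ["ch3", "ch7", "ch42"]
idx = resolve(names)                 # the "N*usec once" part
fast = [values[i] for i in idx]      # the "N*nanosec ever after" part
assert fast == read_by_name(names)
```

The in-place attribute access speeds up the per-cycle lookup path, but it cannot beat skipping the lookup entirely when the same N names are read every cycle.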

On 8/5/2016 at 1:02 AM, hooovahh said:

The CVT is extremely slow (is that an overstatement?) and the variant attribute lookup table will probably be on the order of 100x faster (no test at the moment). 

CVT uses variant attributes. The old version (before about 4-5 years ago) stored named lookups in arrays, if that's what you're thinking of.

Edited by smithd

24 minutes ago, smithd said:

CVT uses variant attributes. The old version (before about 4-5 years ago) stored named lookups in arrays, if that's what you're thinking of.

So glad to hear that.  I swear I looked into this less than 4 years ago and found the array-searching code.  I made the Variant Repository code because I wanted something lightweight and fast.  I wouldn't have made my own if the CVT at the time had met my needs.

2 hours ago, smithd said:

This breaks down if you want to access N items which is the more common use. Accessing N variant attributes is what, N*usec every time? Accessing N array elements is N*usec once then N*nanosec ever after.

We already went through this on the other side so just to summarize for anyone who might care: CVT used to be broadly applied to a lot of dictionary-type needs. Since it was created like a decade ago, there are a lot more cool libraries out there which are better (frequently much better) for the dictionary use case. People are quickly whittling down the use cases for the CVT, which is great. The fewer FGVs in the world the better :)

CVT uses variant attributes. The old version (before about 4-5 years ago) stored named lookups in arrays, if that's what you're thinking of.

Most experienced engineers are using messaging systems today, and CVT doesn't really fit with those architectures. Can you elaborate on what use cases CVT has been whittled down to, and what it has been found inappropriate for?

4 hours ago, smithd said:

This breaks down if you want to access N items which is the more common use. Accessing N variant attributes is what, N*usec every time? Accessing N array elements is N*usec once then N*nanosec ever after.

I assume you are thinking about static access here (where you will always be requesting the same value, so you can skip the lookup after the first call)? With no lookup you get into a different league, of course. 

For random access there is no breakdown; it will perform faster than, or comparably to, the alternatives I guess you are thinking of. Personally I see more value in dictionaries in the random-access scenario, so that's probably why I do not have the same focus on that bit as you.

7 hours ago, ShaunR said:

Most experienced engineers are using messaging systems today, and CVT doesn't really fit with those architectures. Can you elaborate on what use cases CVT has been whittled down to, and what it has been found inappropriate for?

Well, if you buy into the trio of tags/streams/messages that comes up in these discussions, nothing stops you from using tags alongside a messaging architecture.

Anyway, the way it was always designed to be used (as far as I know) is as an abstraction layer for control systems. You create generic processes which all write to or read from different segments of the CVT, and then one or more control loops which operate on that data. The CVT in this situation could also hold configuration data. But the fundamental concept was to make programming LabVIEW more like a PLC, where system tasks fill in an I/O table, handle networking, handle logging, and all you do is scan through your logic, which operates on that data. This can be done with messages too, sure; that works nicely as well.
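The PLC-style scan pattern described above can be sketched roughly (a hypothetical Python stand-in for graphical LabVIEW code; tag names and functions are illustrative only):

```python
# Hypothetical sketch of the PLC-style pattern described above:
# system tasks fill a shared tag table, and the control logic only
# reads/writes tags by name, never touching I/O directly.
tags = {}  # stands in for the CVT's name -> value table

def io_scan():
    # System task: fill the I/O table (simulated reading here).
    tags["pump.pressure"] = 3.2

def control_logic():
    # User logic: operate only on the tag table.
    tags["pump.enabled"] = tags["pump.pressure"] < 5.0

io_scan()
control_logic()
print(tags["pump.enabled"])  # -> True
```

The abstraction is the point: acquisition, networking, and logging can all be swapped out without touching the control logic, since everything meets at the tag table.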

Since it was made, cRIOs have gotten a lot faster. More people have produced dictionaries, especially DVR- or session-based ones. The DCAF video in the NI Week recordings is a project essentially intended as a replacement for this design, with a similar end goal but better protection against races and configuration screw-ups... It's not that the CVT is decaying; there are just newer options which seem simpler and easier to use. And with fewer FGVs.

One area where I think it is still a good choice is something like this (https://decibel.ni.com/content/docs/DOC-41894), although again, because cRIOs have gotten so ridiculously fast recently, it's questionable how much value the CVT provides over just taking the hit of always using random access in a dictionary library, or spamming your web service with messages for every update that comes in.

Edited by smithd

6 hours ago, smithd said:

One area where I think it is still a good choice is something like this (https://decibel.ni.com/content/docs/DOC-41894), although again, because cRIOs have gotten so ridiculously fast recently, it's questionable how much value the CVT provides over just taking the hit of always using random access in a dictionary library, or spamming your web service with messages for every update that comes in.

Yes. When mapping LabVIEW variables to other languages you tend to need "tags", and I can see why the CVT is probably the best solution here. You can define a tag name and use the name in JavaScript to update the UI. You can play around with spamming GET requests and consolidating messages into larger update messages, but the CVT would be simpler, easier, and require less framework. I do the same for websockets, but am able to generate events from the LabVIEW UI changing, so the tags are implicit in the control/indicator names. Of course, that wouldn't work on a cRIO with no UI. Do the latest ones with a UI support UI events?

DCAF. Yes.:blink:

Edited by ShaunR

9 hours ago, ShaunR said:

Do the latest ones with a UI support UI events?

I haven't used one personally, but the answer I got from NI was yes.  Things like right-click events, mouse down, mouse up, etc. in an event structure work, but I think they only work from the embedded UI, and not from the remote front panel connection if you are connected from a host.

1 hour ago, hooovahh said:

I haven't used one personally, but the answer I got from NI was yes.  Things like right-click events, mouse down, mouse up, etc. in an event structure work, but I think they only work from the embedded UI, and not from the remote front panel connection if you are connected from a host.

When I was beta testing the Embedded UI the behavior I saw was this (hopefully this isn't too off topic):

  • When the Embedded UI was disabled in MAX the behavior of the system was consistent with typical RT targets (e.g. don't use event structures with front panel events)
  • When the Embedded UI was enabled in MAX it worked (with some caveats) just like VIs running under windows
    • When the embedded system is remotely controlled, e.g. you are deploying and interacting with the target from your development system, event structures work properly
    • When you disconnect from the target you can then interact with the VI directly from the cRIO via keyboard and mouse, and VIs with event structures properly processed everything

The application I was testing was the standard QMH with an event loop and a message handling loop.

So, if the embedded UI is enabled, UI Events are supported!


6 hours ago, ShaunR said:

That wouldn't work on a cRIO with no UI. Do the latest ones with a UI support UI events?

DCAF. Yes.:blink:

I believe events do work, yes.

And... I take no responsibility for the name; they changed it after I left and stopped being involved at all ;). Although I think it's better than the one we had before. The short URL is ni.com/dcaf.

2 hours ago, Craig_ said:

When I was beta testing the Embedded UI the behavior I saw was this (hopefully this isn't too off topic):

  • When the Embedded UI was disabled in MAX the behavior of the system was consistent with typical RT targets (e.g. don't use event structures with front panel events)
  • When the Embedded UI was enabled in MAX it worked (with some caveats) just like VIs running under windows
    • When the embedded system is remotely controlled, e.g. you are deploying and interacting with the target from your development system, event structures work properly
    • When you disconnect from the target you can then interact with the VI directly from the cRIO via keyboard and mouse, and VIs with event structures properly processed everything

The application I was testing was the standard QMH with an event loop and a message handling loop.

So, if the embedded UI is enabled, UI Events are supported!

Sweet. How about the Val(Sgnl) property node?


Last year I tinkered with one of the Linux-based cDAQ systems, and yes, it supported a GUI. I did not try any kind of events; I just wanted to show the FP of the RT code for info purposes. Some strange things happened though: some of the indicators just would not update, even though they were being fed data in the same loop as others which were working.

I had no real use-case for the GUI, it was more out of curiosity, so I left it there.


Thread hijack!

DCAF (de-caf, right?). Looks interesting, but I am a bit put off by the code smells in this screenshot...

Capture.PNG

25 minutes ago, Neil Pate said:

Thread hijack!

DCAF (de-caf, right?). Looks interesting, but I am a bit put off by the code smells in this screenshot...

I think you should start a new thread to discuss it :) A hijack of a hijack is a bit much :lol:

Edited by ShaunR


Sorry, I will check out the community page and keep my comments there.

(sometimes I get my tone wrong after being handed two babies at 6 AM)

Edited by Neil Pate


Because you're doing all the operations in parallel, this is a poor test; it should only be doing one thing at a time.  Also, you should disable debugging and automatic error handling.  I also think your measured time is too small to measure well enough.  Maybe try a larger sample size.

Here is a speed test I did on the writing side of the new IPE, and it shows improvements.

https://forums.ni.com/t5/LabVIEW/Correct-way-of-using-the-Variant-Get-Replace-In-Place-Element/m-p/3334502#M978560
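The benchmarking advice above translates to any language. A minimal sketch (a hypothetical Python stand-in, since the actual tests are LabVIEW VIs): time one operation per measurement, and repeat it enough times that the total rises well above timer resolution.

```python
# Hypothetical sketch following the advice above: isolate a single
# operation and use a large repeat count for a reliable measurement.
import timeit

table = {"tag%d" % i: float(i) for i in range(10_000)}

def lookup_once():
    # The one operation under test: a single dictionary lookup.
    return table["tag5000"]

# Many repetitions of the isolated operation; Python has no "disable
# debugging" step, but the isolation and sample-size points carry over.
elapsed = timeit.timeit(lookup_once, number=100_000)
print("per call: %.1f ns" % (elapsed / 100_000 * 1e9))
```

Running variants in parallel, as criticized above, lets the tests contend for CPU and skews every number; one timed operation per run avoids that.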

4 hours ago, hooovahh said:

Because you're doing all the operations in parallel, this is a poor test; it should only be doing one thing at a time.  Also, you should disable debugging and automatic error handling.  I also think your measured time is too small to measure well enough.  Maybe try a larger sample size.

Here is a speed test I did on the writing side of the new IPE, and it shows improvements.

https://forums.ni.com/t5/LabVIEW/Correct-way-of-using-the-Variant-Get-Replace-In-Place-Element/m-p/3334502#M978560

Thanks for the feedback. By incorporating your suggestions, I discovered that I got speed improvements similar to your speed test if I do not show the returned values as array indicators. If I showed the indicators, the IPE and Get Variant Attribute were pretty similar, if not the same.  I also discovered that if I don't show the array indicators or the variant as indicators, then my test times were basically zero. Attached is my VI showing these behaviors.

 

Lookup Table.vi
