
VIRegister



Hi.

Version 1.1 of VIRegister is now ready for download:

Version changes from v1.0:

- Removed the write and read functions without error terminals.

- Removed type-dependency from VIRegisters, so it's no longer possible to have two different VIRegister types with the same name.

- Added support for using the same VIRegister function with varying names and/or scope VI refnums (in a loop for instance).

- Improved read performance when no write has yet been performed.

- Added array of path to the supported data types.

- Updated the polymorphic 'VIRegister - Write.vi' to adapt to input.

- Added 'VIRegister - Release.vi'.

Cheers,

Steen


Thanks for the update :thumbup1:. One first comment:

Calling the same node with varying register names works now, but it is very slow.

If you use a variant to store and look up the index of the requested queue (instead of using the standard search function), the cache lookup will be much quicker.
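In a text language the idea would look roughly like this (a minimal sketch with made-up names; the dict plays the role of the variant attribute lookup, and a length-one deque stands in for a register's queue):

    from collections import deque

    _register_index = {}   # register name -> index into _queues (the "variant attribute" role)
    _queues = []           # the existing array of register queues

    def get_register_queue(name):
        """Return the queue for 'name', caching its index on first use."""
        idx = _register_index.get(name)
        if idx is None:                      # first time this name is requested
            _queues.append(deque(maxlen=1))  # stand-in for obtaining the named queue
            idx = len(_queues) - 1
            _register_index[name] = idx      # remember where it lives for next time
        return _queues[idx]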



VIRegister was never intended as a lookup table; it will always be much slower to look up the correct queue than to use the same one every time. Even supporting multiple register access through the same node has lowered best-case performance from almost 2,000,000 reads/s (on my laptop) to 700,000 reads/s. Accessing 10 registers in a loop lowers performance to around 400,000 reads/s.

That aside, how many different registers are you reading with the same node? 1,000 lowers performance to a crawling 9,000 reads/s on my machine, but I'd consider that seriously beyond the intent of the toolset. But anyway, you are doing something you can't do with a local variable.

I assume you mean variant attributes, taking advantage of their binary search instead of the linear search of Search 1D Array? That will only be beneficial if we're talking about a serious number of different registers. The major bottleneck is typecasting the Scope VI refnum anyway, so I wouldn't dare to guess at what number of registers the break-even point lies, and I wouldn't want to sacrifice static register-name performance to get better dynamic-name performance.
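To get a feel for where break-even might lie, the text-language equivalent of the two lookup strategies is easy to time (just a sketch of the comparison I mean, with made-up names; a Python dict is a hash table rather than the sorted lookup variant attributes use, but the shape of the curve is the same):

    import random
    import string
    import timeit

    def make_names(count):
        return ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(count)]

    def compare(count, lookups=1000):
        names = make_names(count)
        index_by_name = {name: i for i, name in enumerate(names)}   # keyed lookup
        targets = random.choices(names, k=lookups)
        linear = timeit.timeit(lambda: [names.index(t) for t in targets], number=10)
        keyed = timeit.timeit(lambda: [index_by_name[t] for t in targets], number=10)
        print(f"{count:>6} registers: linear {linear:.4f} s, keyed {keyed:.4f} s")

    for count in (1, 10, 100, 1000, 10000):
        compare(count)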

What I have definitely learned, though, is that LabVIEW sucks majorly when it comes to implementing polymorphism. I don't think I'll ever want to change anything in VIRegister again; I'm fed up with making the same VIs over and over. I have to copy and change icons too, but have opted to use the same VI Description for all instances, or else I wouldn't have finished v1.1 yet. In other languages it's so easy to make function overrides, but in LabVIEW each and every instance has to be implemented 100%. It takes 5-10 minutes to get an improved version of one VIRegister instance, and then literally days of programming, instead of a couple more minutes, before all the instances are done. It's way beyond pathetic.
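In a text language the whole polymorphic layer would collapse into one generic pair of functions, something like this (purely illustrative, made-up names, ignoring scope and error handling):

    from collections import deque
    from typing import Any

    _registers: dict[str, deque] = {}

    def register_write(name: str, value: Any) -> None:
        # One function covers every data type - in LabVIEW this is a separate
        # polymorphic instance VI per type.
        _registers.setdefault(name, deque(maxlen=1)).append(value)

    def register_read(name: str, default: Any = None) -> Any:
        q = _registers.get(name)
        return q[-1] if q else default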

Cheers,

Steen


Yes, given the way polymorphic VIs work in LabVIEW, I can definitely understand why you do not want to update the VIRegister library again. In most cases the current implementation will be just fine. Thanks again for the work.

I did run a quick test to see what kind of performance I could get if needed though. To simplify the change I skipped the scope part and just used the register name as the reference.

Updating 10,000 Booleans with the same node used to take 1.5 seconds; now it runs in 39 ms.


As before in 1.1, when the linear search was used to find the register refnum.

The linear search is just a bit slower in absolute terms, but with 10,000 registers this slowness has extra impact due to the combination of having a large list to search and having to do it so many times...

In the test I changed the register name on every call, so it is a worst-case scenario. The very first run takes a bit longer than the quoted times because of the array build, though (that part could perhaps be sped up as well by expanding the list in chunks instead of on every call, but that's a minor issue).
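The chunk idea is just the usual amortised-growth trick, roughly like this (illustrative sketch only; Python lists already over-allocate internally, this just mirrors the build-array situation):

    CHUNK = 64   # grow the refnum array in blocks instead of one element per call

    def append_refnum(refnums, used, new_refnum):
        """refnums is pre-allocated storage; used is the number of slots in use."""
        if used == len(refnums):             # out of room: extend by a whole chunk
            refnums.extend([None] * CHUNK)   # one allocation per CHUNK appends
        refnums[used] = new_refnum
        return refnums, used + 1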

Edited by Mads


Ok, so you have 1.5 seconds in v1.1 and 39 ms for the same operation in your v1.1 modified for VAs? Looks promising, but the hit on constant-name lookup had better be small! ;).

How do you solve the lookup problem of not being able to look up the VA by register name alone, but needing to use the register ID (currently a cluster of both register name and Scope VI refnum)? I have a use case where only the Scope VI refnum differs, while the register name is constant.

/Steen

Edited by Steen Schmidt

I did not include the scope in the test. To use the variant attribute to get the sort and search we need a string key, not a cluster. This should be easy enough to fix though: either we add the scope number to the name, or we flatten the cluster. I have not tested that yet, though.
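Building such a key could be as simple as this (rough sketch with made-up names; treating the scope refnum as a plain number is an assumption about how it would be flattened):

    def register_key(name: str, scope_refnum: int) -> str:
        # Flatten register name + scope into one string so the keyed lookup can
        # still be used; the separator must not occur in the numeric part.
        return f"{name}|{scope_refnum:08X}"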

Now all you need is a script that updates all the polymorphic VIs :-)

There is no hit on the constant-name lookup, as a check for that case precedes the search in the list of refnums.

Edited by Mads


Ok, I quickly ported a VIRegister (read/write Boolean) to use variant attributes (let's call that v1.2), but was sorely disappointed when I benchmarked it against v1.1: v1.2 came out 1,000 times slower than v1.1 on 10,000 variable registers. Six hours later I had isolated a bug in LabVIEW 2009 SP1 regarding initialized feedback nodes of variant data :wacko:. The code snippet below runs in almost 3 minutes on LabVIEW 2009 SP1, but in just 40 milliseconds on LabVIEW 2010 SP1:

[attached snippet: the variant-attribute benchmark with an initialized feedback node]

Just changing the feedback node into a shift register equalizes the performance on LV 2009 SP1 and 2010 SP1:

[attached snippet: the same benchmark with the feedback node replaced by a shift register]

But it has something to do with whether the node is initialized or not, and with how the wire of the feedback node/shift register branches, so it can also fail with a shift register under some circumstances. I think I can get around that bug, but it'll cost some performance since none of the optimal code solutions work on 2009 SP1 (and I need this code to work on 2009 SP1 as well). This means I'll probably release a v1.2 implementing VAs soon.
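I don't know what LabVIEW 2009 SP1 actually does differently under the hood, but the magnitude fits the kind of blow-up you get in a text language when the cached structure is copied on every iteration instead of carried through in place (an analogy only, not a claim about the compiler internals):

    # Analogy only: a full copy per iteration turns an O(n) loop into O(n^2).
    cache = {}
    for i in range(10_000):
        cache = dict(cache)          # "fresh" state every iteration: full copy
        cache[f"reg{i}"] = True

    # The in-place version, which is how the shift-register wiring behaves:
    cache = {}
    for i in range(10_000):
        cache[f"reg{i}"] = True      # state carried through untouched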

/Steen



First benchmarks indicate a performance improvement, especially when using variable register names (500-1,000 times better performance), but also when using constant register names (15-20%). With variable register names you now only lose about half the performance compared to a constant name; I wouldn't have believed that possible a few days ago. With v1.1 the two use cases are a world apart, but that's the difference between O(n) and O(log n) search performance right there.
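Back-of-the-envelope, ignoring constant factors, that is roughly the gap you would expect between the two searches (average case):

    import math

    n = 10_000                    # registers addressed through one node
    linear_avg = n / 2            # Search 1D Array: ~5,000 comparisons per lookup
    keyed_avg = math.log2(n)      # sorted attribute lookup: ~13-14 comparisons
    print(linear_avg / keyed_avg) # ~376, before constant-factor differences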

Performance on LV 2010 SP1 even seems slightly better than on 2009 SP1 (5-10%).

Thanks Mads, for pushing me to investigate this in depth :thumbup1:.

/Steen

Edited by Steen Schmidt

Not quite fair to call it a bug in LV2009. We added new features to the compiler to do deeper optimizations in LV2010, which gave us better performance than we've had before. A bug would be something that got slower in a later version. This is just new research giving us new powers.

Not to split hairs over words, but several orders of magnitude slower (~4,000 times), for one specific data type only (variant), in one specific configuration (an initialized feedback node), equals highly unexpected behaviour. To be sure, the code runs without error, so the computation itself is fine. It's a question of how we define a bug. When I see performance vary so greatly between neighboring configurations, I can't help but wonder whether any memory got hurt in the longer computation? :P

In LabVIEW 8.6.1 the same code runs in about 50 ms, so very comparable to LV 2010 SP1. But in LV 2009 SP1 that figure is 3 minutes.

I'm not sniping at you in R&D, who make huge improvements to the LabVIEW we love without ever getting due credit for what you're doing, but something did happen here. In this case it almost made me abandon variant attributes as a means of key caching, since VAs have varied greatly in performance over the last several versions of LabVIEW. Had I (falsely) concluded that VAs were unreliable performance-wise, that would've had widespread consequences for many people, including myself, since my recommendations regarding LabVIEW and NI SW/HW in general are taken very seriously by a lot of people in Denmark.

The only thing that made me keep digging into this until I found out what was wrong was the fact that I had started out by making two small code snippets that clearly demonstrated that the binary and linear algorithms were in fact still used as expected, which set my performance expectations. My VIRegister implementation using VAs showed a completely different picture though, and it took me a long time to give up looking for where I'd made a mistake in my own code. Then I was forced to strip my code down until I was left with the snippet I posted earlier, which has no more than a handful of nodes in it. To my great surprise, performance went up by a factor of 4,000 just by replacing the feedback node with a shift register.

But never mind; I couldn't find anywhere to report this "bug" online (I usually go through NI Denmark, but it's the weekend), so it hasn't been reported to anyone. If it isn't a bug I won't bother pursuing it. Now I at least know of another caveat in LV 2009 SP1 :D.

Cheers,

Steen


Your post here prompted me to take another look... I'm now inclined to call this a bug.

I'm never sure how to read feedback nodes that are initialized... To my eyes, when the loop executes each iteration, that variant constant is going to generate something and send it to the initializer node. The fact that the initializer chooses to do nothing with it is its problem. It's one of the reasons that I don't like feedback nodes at all, especially inside loops. In LV2010, I know we got new optimizations to improve dead code elimination and loop unrolling, so I figured this was covered by those optimizations. But I tried a couple of other feedback nodes in a loop and didn't see the same slowdown, so maybe it is a bug that is variant specific.


Yes, the non-propagating error cluster is on purpose. I'm usually quite anal about wiring the error terminals, so it's quite rare that I choose not to.

In this case the error in will always be "no error" due to the error case before it, with no nodes in between to generate an error. I thought that wiring only from the error out of the Lossy Enqueue underlined the fact that any error out of this case comes from that node.

And as I write this I realize that any warnings coming in through the error wire will be dropped. I don't know whether any warnings can be generated by the preceding nodes, as there is no way to tell which errors and warnings each LabVIEW function can generate. That also means I don't know whether I really want to propagate warnings or not... Oh well...

And I like nit-picking; it's a great way to improve :yes:.

Cheers,

Steen

Edited by Steen Schmidt
Sorry for being a nitpicker... but is the non-propagating error cluster implemented on purpose?

It is functionally irrelevant whether you propagate it or not, since it is guaranteed to be "no error" in that case. But connecting it through will sometimes give LV the opportunity to use a single allocation of the error cluster all the way through the app, which cuts down on the size of the VI in memory (by a few bytes), so I usually do wire it.
