
Performance boost for Type Cast


Recommended Posts

2 hours ago, bjustice said:

This is cool.  I hadn't considered moveblock.  Thanks for sharing

The comparison is, however, not entirely fair. MoveBlock simply does a memory move; Typecast does a Byte and Word Swap for every value in the array, so it is doing considerably more work. That is also why Shaun had to add the extra block in the initialization, using another MoveBlock to generate the byte array that is used in the MoveBlock call. If it used the same initialized buffer, the resulting values would look very weird (basically all ±Inf).

But you can't simulate the Typecast by adding Swap Bytes and Swap Words to the double array. Those Swap primitives only work on integer values; for single and double precision values they are simply a NOP. I would consider it almost a bug that Typecast does swapping for single and double precision values while Swap Bytes and Swap Words do not. It doesn't seem entirely logical.
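Roughly in C terms (just to illustrate the extra work, not how LabVIEW implements it internally), the difference per 32-bit element looks something like this; for doubles it is the equivalent 8-byte reversal:

#include <stdint.h>
#include <string.h>

/* "Swap Bytes": swap the two bytes within each 16-bit word of a 32-bit value */
static uint32_t swap_bytes_u32(uint32_t v)
{
    return ((v & 0x00FF00FFu) << 8) | ((v & 0xFF00FF00u) >> 8);
}

/* "Swap Words": swap the two 16-bit halves of a 32-bit value */
static uint32_t swap_words_u32(uint32_t v)
{
    return (v << 16) | (v >> 16);
}

/* Typecast effectively does swap_words_u32(swap_bytes_u32(v)) for every
   element, i.e. a full byte reversal, whereas MoveBlock is just a plain copy: */
void move_block(void *dst, const void *src, size_t n)
{
    memcpy(dst, src, n);
}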

Edited by Rolf Kalbermatter
Link to comment
2 minutes ago, Rolf Kalbermatter said:

The comparison is, however, not entirely fair. MoveBlock simply does a memory move; Typecast does a Byte and Word Swap for every value in the array, so it is doing considerably more work. That is also why Shaun had to add the extra block in the initialization, using another MoveBlock to generate the byte array that is used in the MoveBlock call. If it used the same initialized buffer, the resulting values would look very weird (basically all ±Inf).

But you can't simulate the Typecast by adding Swap Bytes and Swap Words to the double array. Those Swap primitives only work on integer values; for single and double precision values they are simply a NOP. I would consider it almost a bug that Typecast does swapping for single and double precision values while Swap Bytes and Swap Words do not. It doesn't seem entirely logical.

Well, just to be argumentative...

It is <almost> entirely equivalent. A type cast is used in AQ's original to convert between formats initially, and it assumes a particular memory format so that when the reverse operation is performed it produces the expected result (consistent memory topology). Memcopy, of course, won't work with the LabVIEW type cast since the expected memory formats are different (and would be different on different-endian machines and not portable). There are also a lot more checks and an allocation with the type cast, naturally.

I suspect the performance boost that AQ sees by converting to a string first is to do with bypassing byte swapping; perhaps he can tell us in intricate detail why it is faster converting to a string first.

The memcopy is doing a lot less work because the array initialisation is outside of the timing and is a fixed size. You can move the array initialisation into the timing area to create the buffer on the fly, at the cost of performance, in order to generalise, and it is still slightly faster. If you then check the length and allocate that amount on the fly, you end up with similar performance to the to-string trick, sans protection. Most of the differences will be down to compiler optimisations.

The take-away is that the type cast (rather than memcopy) won't crash LabVIEW if you get it wrong, is portable, and is poka-yoke. Use it.

Link to comment
11 hours ago, ShaunR said:

I suspect the performance boost that AQ sees by converting to a string first is to do with bypassing byte swapping; perhaps he can tell us in intricate detail why it is faster converting to a string first.

No, byte swapping happens in both cases. The code with and without ByteArrayToString is functionally equivalent. This is an oversight in the optimization of the Typecast node: it takes a shortcut in the case of the string input but doesn't apply that shortcut to the byte array too, which in essence is still the same as a LabVIEW string (though it really shouldn't have been for many years already).

The ByteArrayToString is, in terms of runtime performance, pretty much a NOP, since the two are technically exactly the same in memory. But it enables a special shortcut in the Typecast function that handles string inputs differently than other datatypes.

Link to comment
10 hours ago, dadreamer said:

I meant these ini tokens.

Ok. Sweet. I get the same now. I tried it on some other functions that I thought might benefit, but it turned out that the majority of the overheads were elsewhere.

Is that feature sticky, in that if it is set at design time it stays with the VI when loaded on a LabVIEW install without the ini settings?

Link to comment
1 hour ago, ShaunR said:

Ok. Sweet. I get the same now. I tried it on some other functions that I thought might benefit, but it turned out that the majority of the overheads were elsewhere.

Is that feature sticky, in that if it is set at design time it stays with the VI when loaded on a LabVIEW install without the ini settings?

The ini key enables the UI options to actually set these things. The configuration itself is generally some flag or other setting that is stored with the VI, so yes, it will stick. The exception, of course, is a save for previous: if the earlier version you save to did not know that setting, it is most likely simply lost during the save for previous.

Edited by Rolf Kalbermatter
Link to comment
1 hour ago, ShaunR said:

Is that feature sticky, in that if it is set at design time it stays with the VI when loaded on a LabVIEW install without the ini settings?

Yes.

It also stays (and works) in the RTE when the VI is compiled, and it works when saved for previous down to LV 8.0. In fact, LV 8.0 didn't have that token in its exe code, but the call remained inlined. LV 8.6 had that token, so I confirmed it there as well.

Link to comment
2 hours ago, dadreamer said:

Yes.

It also stays (and works) in the RTE when the VI is compiled, and it works when saved for previous down to LV 8.0. In fact, LV 8.0 didn't have that token in its exe code, but the call remained inlined. LV 8.6 had that token, so I confirmed it there as well.

:lol: :frusty: I'm nothing if not consistent. I have a first-in, first-out memory with a limited buffer. I guess the buffer has reduced to less than 2 years in my old age. Not long now before I'm yelling at clouds, I guess. :D

 

Link to comment
  • 9 months later...
On 9/29/2022 at 1:18 AM, Rolf Kalbermatter said:

If LabVIEW ever gets a Big Endian platform again

Revisiting this thread for a new project.
(Rolf, your posts here have been very educational.)

A bit of an academic question here (I'm mostly trying to make sure that I understand how this all works):
1) Are there any primitives in LabVIEW that would return the endianness of the platform?  (I suppose this would be absurdly boring if LabVIEW only ships on little-endian platforms at the moment.)

2) If this primitive existed, could I theoretically use this in conjunction with the MoveBlock command to replicate the behavior of the TypeCast primitive?
My understanding:

IF platform endianness = big endian, then perform memory copy without byte swaps
IF platform endianness = little endian, then perform memory copy with byte swaps
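In C terms, a minimal sketch of that idea (platform_is_big_endian is a hypothetical helper here, not a LabVIEW primitive):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical runtime check: look at the first stored byte of a known value. */
static int platform_is_big_endian(void)
{
    const uint16_t probe = 0x0102;
    return *(const uint8_t *)&probe == 0x01;  /* MSB stored first => big endian */
}

int main(void)
{
    if (platform_is_big_endian())
        puts("plain MoveBlock copy, no byte swaps needed");
    else
        puts("MoveBlock copy, then byte-swap every element");
    return 0;
}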


 

Link to comment

Ok, I made a VI that emulates the TypeCast primitive with a few notable differences:

  • Uses the MoveBlock command to perform the memory copy
  • Input "x" restricted to u8 byte array type.  (Common use-case)
  • Input "type" restricted to scalar or 1D numeric types.  (Common use-case)
  • Assumes platform endianness = Little Endian.  (Valid given the discussion above)

What's cool about this is that it gives me control over whether or not I want to perform the endianness conversion.
(You can see that I use the reverse 1D array primitives to handle this.)
If your byte array is already little-endian ordered, then you can remove the reverse 1D array functions and reclaim that performance.

image.png.9f86135ac0b87a9b635bd4d99280a2e2.png

TypeCast using MoveBlock.vim
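For anyone who prefers text, a rough C analogue of the idea (an illustrative sketch only, not the VIM itself; the per-element byte reversal here stands in for the Reverse 1D Array step):

#include <stdint.h>
#include <string.h>

/* Copy a u8 byte array into doubles; optionally reverse the 8 bytes of each
   element to perform the endianness conversion. */
void bytes_to_doubles(const uint8_t *src, double *dst, size_t count, int reverse)
{
    if (!reverse) {
        memcpy(dst, src, count * sizeof(double));   /* the plain MoveBlock path */
        return;
    }
    for (size_t i = 0; i < count; i++) {
        uint8_t tmp[sizeof(double)];
        for (size_t b = 0; b < sizeof(double); b++)
            tmp[b] = src[i * sizeof(double) + (sizeof(double) - 1 - b)];
        memcpy(&dst[i], tmp, sizeof(double));
    }
}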

Edited by bjustice
Link to comment

In THIS THREAD, Rolf suggested that the TypeCast primitive should have an endianness selector.
So, I took the code that I created above and slapped a byte order input on it!
Of course, this isn't as flexible as the real primitive, but it covers a wide use-case from a project that I'm working on.
image.png.19a9d28a28799f524eb6c165fd07fa72.png

 

Why is this interesting to me?
On more than one occasion now, I've run into situations where I have to receive a stream of bytes at a high rate over the network and TypeCast these into LabVIEW numerics.
Usually when this happens, the bytes are sent to me in little-endian order, because that's what seems to dominate the industry these days.
I use the TypeCast primitive to convert the byte stream into numerics, but this means that I have to reverse array order before handing the data over to TypeCast.
And then, depressingly, TypeCast performs another set of byte swapping against the data.
So, I was hoping to remove all the byte swap operations with this VIM.

I plugged my VIM into the benchmark tester, and the results seem to make really good sense to me:

image.png.c9070de9a3492fc66381b4975d92ff96.png

1 = My VIM with "Big Endian" input    --> My VIM must perform array reversing; this makes it slower than all other methods thus far.
2 = My VIM with "Little Endian" input --> My VIM does not have to perform array reversing; this makes it almost as fast as MoveBlock with preallocation.

Would love to know what you guys think

TypeCast.zip

Link to comment
12 hours ago, bjustice said:

Would love to know what you guys think.

I always use Unflatten From String/Flatten To String. It even has a third endianness selection to never byte swap. It may not be as fast as MoveBlock but it has been fast enough so far for me.

The only thing I don’t like about it is that it forces String on the flattened side, but a Byte Array To String or String To Byte Array solves this.

It would be trivial for the LabVIEW devs to allow a Byte Array too; it is just a flag in the node properties that would need to change. It should have been done 20 years ago and set as the default, as a first step towards divorcing the String == Byte Array assumption that still hampers proper adoption of non-ASCII strings in LabVIEW.

I even hacked a LabVIEW copy in the past to allow automatic adaptation of the Unflatten From String node to Byte Arrays, but that was of course a completely undistributable hack. A corresponding LabVIEW Idea Exchange suggestion of mine was acknowledged by AQ himself but dismissed as highly unlikely to be implemented by LabVIEW R&D.

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Allow-Unflatten-from-String-to-also-accept-an-Unsigned-Byte/idi-p/3968413

Edited by Rolf Kalbermatter
Link to comment
36 minutes ago, ShaunR said:

image.png.ae8051eb4d73a144975797957bb53b8b.png

The closest in terms of no runtime cost would be Conditional Compile for CPU architecture: anything but x86, x64, arm32 and arm64 is big endian. An alternative would be to check the VI Server property for CPU type. My version of a runtime check used to be to use MoveBlock to copy a two-character string with different ASCII characters, “BE”, into an int16 and check that its value equals 0x4245.
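For reference, that last check looks something like this in C (a sketch of the same idea, with memcpy playing the role of MoveBlock):

#include <stdint.h>
#include <string.h>

/* Copy the two ASCII characters "BE" into an int16 and inspect the result. */
static int is_big_endian(void)
{
    uint16_t v;
    memcpy(&v, "BE", 2);    /* MoveBlock("BE", &v, 2) in LabVIEW terms */
    return v == 0x4245;     /* 'B' (0x42) ends up in the high byte only on big endian */
}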

Link to comment
7 hours ago, Rolf Kalbermatter said:

It may not be as fast as MoveBlock but it has been fast enough so far for me

Via the aforementioned benchmark test, the unflatten from string method is on par with the "To String/Typecast" method.
My VIM is still beating those noticeably for little endian operations.
So, I'm going to try using my VIM for a project and I'll see how that goes.

image.png.bc94947a8664e741bb3a8607db4d9cc5.png

 

Link to comment
On 7/8/2023 at 8:38 PM, bjustice said:

Via the aforementioned benchmark test, the unflatten from string method is on par with the "To String/Typecast" method.
My VIM is still beating those noticeably for little endian operations.
So, I'm going to try using my VIM for a project and I'll see how that goes.

image.png.bc94947a8664e741bb3a8607db4d9cc5.png

 

That's good to know. Do share your experience with it!

Edited by SmartArthur
Link to comment
