In-Place Structure and Datatype Conversion


Recommended Posts

My guess is that in memory, the computer not only needs to know the value of the 32 bits, but it needs to know the type of data it is as well.

Actually, it just needs to know the value.... but the value of the number "4"

  • as an integer is binary 00000000000000000000000000000100
  • as a single precision floating point is binary 01000000100000000000000000000000

The binary is actually different, so code that needs the single-precision float cannot share the memory with code that needs an integer.
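To make that concrete, here is a small C snippet (not LabVIEW, just an illustration) that takes the same value 4 as an int32 and as a single and prints the raw bit patterns side by side:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    int32_t  i = 4;
    float    f = 4.0f;
    uint32_t i_bits, f_bits;

    /* memcpy is the portable way to look at a value's raw bit pattern */
    memcpy(&i_bits, &i, sizeof i_bits);
    memcpy(&f_bits, &f, sizeof f_bits);

    printf("int32 4   -> 0x%08X\n", (unsigned)i_bits);  /* 0x00000004 */
    printf("single 4  -> 0x%08X\n", (unsigned)f_bits);  /* 0x40800000 */
    return 0;
}
```

The two hex values are just the binary patterns from the list above written more compactly.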

Link to comment

There's no allocation shown even with all outputs wired.

Something to consider is that even though the original array has been decimated into N new arrays of M length, M is the same for every output array.

I have it in the back of my head that LabVIEW describes arrays in memory with a size (M), a pointer to the 0th element, and a step value (or something like that). Could it be that all the N decimated arrays are actually sharing literally the same M length value in the same register?

Link to comment

I have it in the back of my head that LabVIEW describes arrays in memory with a size (M), a pointer to the 0th element, and a step value (or something like that). Could it be that all the N decimated arrays are actually sharing literally the same M length value in the same register?

No. Each array has a four-byte dimSize at the beginning of its data, so the decimated arrays can't all be sharing a single size value: it's possible to vary their sizes independently of one another, without another buffer allocation, by using the Array Subset primitive.

Edit: From the "How LabVIEW Stores Data in Memory" doc in the help.

LabVIEW stores arrays as handles, or pointers to pointers, that contain the size of each dimension of the array in 32-bit integers followed by the data. If the handle is 0, the array is empty.
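As a rough C sketch of what that layout looks like for a 1-D array of doubles (the real typedefs live in LabVIEW's extcode.h; the names below are illustrative only, and LabVIEW's actual packing and alignment rules vary by platform):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int32_t dimSize;   /* one 32-bit size per dimension, stored before the data */
    double  data[1];   /* the elements follow the size(s) immediately           */
} LVDoubleArray;

/* A handle is a pointer to a pointer to that block; a handle value of 0
 * (NULL) represents an empty array. */
typedef LVDoubleArray **LVDoubleArrayHandle;

int main(void)
{
    /* In C the compiler may pad between dimSize and data for alignment;
     * this just shows where the data region starts in this sketch. */
    printf("data starts %zu bytes into the block\n", offsetof(LVDoubleArray, data));
    return 0;
}
```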

[attachment: post-793-124836835115_thumb.png]

Edited by jzoller
Link to comment

Actually, it just needs to know the value.... but the value of the number "4"

  • as an integer is binary 00000000000000000000000000000100
  • as a single precision floating point is binary 01000000100000000000000000000000

The binary is actually different, so code that needs the single-precision float cannot share the memory with code that needs an integer.

This is absolutely correct! Floating point is dirty :(

I couldn't remember which side of the word the sign bit was on, so I used the rather helpful table here:

http://steve.hollasch.net/cgindex/coding/ieeefloat.html

I've never used the in-place structure (*gasp* ?), but converting your floating point to an integer is not a single operation (that I know of... maybe there's some crazy opcode for it). I was able to bang out something that did this with a few ANDs, an OR, a multiply (which is just a sign change, so could be replaced with an XOR and an increment), a few increments and a few shifts. Don't forget the extra floating point add of 0.5 to the beginning if you care about rounding :) (Also, I saved myself the headache of dealing with special numbers like NaN, denorm, etc that you might or might not have to consider)
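For anyone who wants to see the bit-twiddling spelled out in text form, here is a C sketch of the same idea. It is not the poster's VI: it truncates toward zero instead of adding 0.5 to round, and it makes no attempt to handle NaN, infinities, or denormals (values that overflow an I32 are simply clamped).

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Convert a single-precision float to an I32 using only integer
 * AND/OR/shift/negate operations on its IEEE 754 bit pattern. */
static int32_t float_to_i32_bits(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);               /* reinterpret, don't convert */

    uint32_t sign     = bits >> 31;
    int32_t  exponent = (int32_t)((bits >> 23) & 0xFFu) - 127;  /* remove bias      */
    uint32_t mantissa = (bits & 0x007FFFFFu) | 0x00800000u;     /* restore hidden 1 */

    if (exponent < 0)                             /* |x| < 1 truncates to 0     */
        return 0;
    if (exponent > 30)                            /* too big for an I32: clamp  */
        return sign ? INT32_MIN : INT32_MAX;

    /* mantissa is 1.f scaled by 2^23, so shift by (exponent - 23) */
    int32_t magnitude = (exponent >= 23)
        ? (int32_t)(mantissa << (exponent - 23))
        : (int32_t)(mantissa >> (23 - exponent));

    return sign ? -magnitude : magnitude;
}

int main(void)
{
    printf("%d\n", float_to_i32_bits(4.0f));    /* 4                           */
    printf("%d\n", float_to_i32_bits(-7.9f));   /* -7 (truncated toward zero)  */
    printf("%d\n", float_to_i32_bits(0.5f));    /* 0                           */
    return 0;
}
```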

While technically this takes a 32-bit number (a single) and produces a 32-bit number (an I32), there are so many intermediate values that I have no idea whether it actually works "in place". It probably doesn't help that I have no feel for how well that structure works. Does it count if I only have one input and one output to my structure? (And does it matter whether I place the constants inside or outside?)

If this is of any interest (and the whole thing isn't broken by the additional N-1 pointer words) I'll happily clean this into something human readable ;)

Hugs,

memoryleak

  • Like 1
Link to comment
  • 2 weeks later...

No. Each array has a four-byte dimSize at the beginning of its data, so the decimated arrays can't all be sharing a single size value: it's possible to vary their sizes independently of one another, without another buffer allocation, by using the Array Subset primitive.

Edit: From the "How LabVIEW Stores Data in Memory" doc in the help.

LabVIEW stores arrays as handles, or pointers to pointers, that contain the size of each dimension of the array in 32-bit integers followed by the data. If the handle is 0, the array is empty.

I don't think that's entirely true. If you look at the wires on those arrays where you expect allocations, you'll see that those are not exactly Arrays, they are "sub-Arrays". See this AQ post for more details: http://lavag.org/top...mensional-array -> post #13.

Link to comment
  • 3 weeks later...

I don't think that's entirely true. If you look at the wires on those arrays where you expect allocations, you'll see that those are not exactly Arrays, they are "sub-Arrays". See this AQ post for more details: http://lavag.org/top...mensional-array -> post #13.

Sorry I missed this thread for a while, Jason, and thanks for the correction. Looks like I'm totally wrong... sorry, Justin!

What on earth would possess them to do this? Were there just too many memory allocations to use a standard array as an output?

It seems like the compiler would need a lot of custom logic to handle a different type of array representation in memory, or at least a bunch of cases where it needs to know to convert to a "regular" array somewhere downstream.

And I thought I overcomplicated things ;)

Joe Z.

Link to comment

It looks like the decimate never shows an allocation.

Given that the decimated arrays each have an extra 4 bytes (for dimSize) padded onto the front, they couldn't all be re-using the same memory... the sum of the parts would be greater than the whole.


Actually, they can. LabVIEW internally knows a data type called a subarray. It is just a structure containing a reference to the original array handle plus some bookkeeping information such as offset, stride, and so on. Most array functions know how to deal with subarrays, and when one doesn't for a particular subarray configuration, it will invoke a subarray-to-normal-array conversion, which of course incurs a new buffer allocation.
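A purely illustrative C sketch of such a descriptor, with made-up field names (the real internal layout isn't documented), might look like this:

```c
#include <stdint.h>

/* Hypothetical subarray descriptor: a view into an existing array handle. */
typedef struct {
    void    **source;  /* handle of the original, full-sized array            */
    int32_t   offset;  /* index of the first element seen by this view        */
    int32_t   stride;  /* step between elements, e.g. N for Decimate 1D Array */
    int32_t   length;  /* number of elements in the view                      */
} SubArrayDescriptor;

/* Element i of the view is source element (offset + i * stride), so no
 * element data has to be copied until a function that can't work on the
 * view forces a conversion back to a normal array. */
```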

What on earth would possess them to do this? Were there just too many memory allocations to use a standard array as an output?

It seems like the compiler would need a lot of custom logic to handle a different type of array representation in memory, or at least a bunch of cases where it needs to know to convert to a "regular" array somewhere downstream.

Well, I would be pretty sure LabVIEW handles these things in an object-oriented manner, so it is not such a complicated thing but rather a well-structured object method table that handles the various variations of arrays and subarrays.
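As a minimal, made-up illustration of that method-table idea (plain C function pointers, nothing from the LabVIEW code base), the same "get element" call can work on a full array and on a strided view:

```c
#include <stdio.h>
#include <stdint.h>

typedef struct Ops { double (*get)(const void *self, int32_t i); } Ops;

typedef struct { const Ops *ops; int32_t n; const double *data; } FullArray;
typedef struct { const Ops *ops; int32_t n; const double *data;
                 int32_t offset, stride; } SubArrayView;

static double full_get(const void *self, int32_t i)
{ const FullArray *a = self; return a->data[i]; }

static double view_get(const void *self, int32_t i)
{ const SubArrayView *v = self; return v->data[v->offset + i * v->stride]; }

static const Ops FULL_OPS = { full_get };
static const Ops VIEW_OPS = { view_get };

int main(void)
{
    double buf[6] = { 0, 1, 2, 3, 4, 5 };
    FullArray    a = { &FULL_OPS, 6, buf };
    SubArrayView v = { &VIEW_OPS, 3, buf, 1, 2 };     /* the odd elements */

    /* Both go through the same table-based call, with no copy of buf. */
    printf("%g %g\n", a.ops->get(&a, 4), v.ops->get(&v, 2));   /* 4 5 */
    return 0;
}
```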

The reason they do it is performance optimization. Memory allocations and copies are very expensive, so spending some time trying to avoid them can pay off big time.

Rolf Kalbermatter

Link to comment
