AlexA Posted April 20, 2012 I was doing some light reading and I noticed that LabVIEW apparently stores Boolean data as bytes, where a byte of all zeros indicates false and anything else is true. Does anyone have any insight into the logic behind this decision? This is not an important question, just a curiosity. It seems like an awful waste of space for embedded applications like FPGAs.
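For reference, the byte representation described in the question can be sketched in C; the helper name below is illustrative only, not anything LabVIEW actually exposes:

```c
#include <stdint.h>

/* Sketch of a byte-sized boolean: 0x00 is false, and any other
   bit pattern counts as true (hypothetical helper, for illustration). */
static int byte_bool_is_true(uint8_t b) {
    return b != 0;
}
```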
Yair Posted April 20, 2012 I don't know the actual answer, but I can say that in the past booleans were actually kept in a single bit, and this was changed (presumably with LV 5, which would have been some time before LV FPGA came out). If you right-click certain data conversion nodes (such as type cast and unflatten), you can tell them that your data is in 4.x format. Pure guess - maybe this was done because of alignment and performance issues (i.e. everything else uses at least a byte)?
mike5 Posted April 20, 2012 In most languages Booleans are stored as bytes. Memory is cheap. Also, storing a single Boolean on any modern computer will always use a byte, since a byte is the smallest addressable piece of data. Even for arrays it would not pay to pack the Booleans into bits. As I said, memory is cheap, and having to do arithmetic to read a single Boolean value wastes more resources than the memory space it saves... In the end, you usually want it simple and fast. Br, Mike
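The arithmetic Mike mentions can be illustrated with a minimal C sketch (the function names are made up for illustration) contrasting a packed bit read with a plain byte read:

```c
#include <stdint.h>

/* Packed storage: 8 booleans per byte. Reading element i needs
   an index divide, a shift, and a mask on every access. */
static int packed_get(const uint8_t *bits, int i) {
    return (bits[i / 8] >> (i % 8)) & 1;
}

/* Unpacked storage: one byte per boolean. Reading is a plain load. */
static int unpacked_get(const uint8_t *bytes, int i) {
    return bytes[i] != 0;
}
```

The packed form uses an eighth of the memory, but every read and write pays for the shift-and-mask bookkeeping, which is the trade-off the post describes.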
swenp Posted April 20, 2012 Hi Alex, It seems like an awful waste of space for embedded applications like FPGAs. When programming LabVIEW FPGA, the output is VHDL code, which is compiled and optimized by the Xilinx compiler, so LabVIEW's memory management does not carry over to the FPGA. Because of that, I don't think there is any waste of memory with respect to FPGA programming.
AlexA Posted April 20, 2012 Author Thanks Mike, I thought it'd be something simple I hadn't understood. Thanks for the insight.
Rolf Kalbermatter Posted April 20, 2012 I don't know the actual answer, but I can say that in the past booleans were actually kept in a single bit, and this was changed (presumably with LV 5, which would have been some time before LV FPGA came out). If you right-click certain data conversion nodes (such as type cast and unflatten), you can tell them that your data is in 4.x format. Pure guess - maybe this was done because of alignment and performance issues (i.e. everything else uses at least a byte)? Actually it is a little different. In LabVIEW < 5.0 a scalar boolean was a 16-bit integer where the most significant bit defined the boolean status and everything else was don't-care. However, boolean arrays were packed into words, so an array of up to 16 booleans would consume 16 bits. The history of this is presumably MacOS, which had a somewhat similar notion, but the packing and unpacking of boolean arrays actually caused quite bad performance for some operations. So LabVIEW 5.0 changed it to the more common one-byte-per-boolean notion, which is also what most C compilers use as their default boolean implementation (although the C standard does not specify anywhere what size a boolean has to be).
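Rolf's description of the pre-5.0 layout can be sketched in C. The MSB rule for scalars is as he states; the bit order inside a packed array word (MSB-first below) is an assumption for illustration and may not match the historical implementation exactly:

```c
#include <stdint.h>

/* LabVIEW 4.x-style scalar boolean: a 16-bit word whose most
   significant bit carries the value; all other bits are don't-care. */
static int lv4_scalar_bool(uint16_t b) {
    return (b & 0x8000) != 0;
}

/* LabVIEW 4.x-style boolean array: 16 flags packed per word.
   MSB-first bit order here is an assumption, for illustration. */
static int lv4_array_get(const uint16_t *words, int i) {
    return (words[i / 16] >> (15 - (i % 16))) & 1;
}
```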