LabVIEW's Typecast is more complex than that. It is in essence a typecast like the one you see in C, but with the extra twist of byte-swapping any multi-byte integers into Big Endian format on the byte-stream side.
I think the problem here is that Unflatten does other things as well, like checking that the input string length is valid. The implementation of Unflatten is certainly a lot more complex, since it has to work with any data type, including highly complicated variable sized types such as clusters containing variable sized data, which in turn contain yet more variable sized data, and so on.
Typecast on the other hand only works on flat data, which excludes any form of cluster containing variable sized data. Possibly Flatten/Unflatten could be improved, since a little endian conversion on a little endian machine should certainly not take longer than the Typecast plus an additional byte swap. But the priority for such a performance boost is probably rather low, since it would certainly make the implementation of Flatten/Unflatten even more complex and hence more prone to bugs.
But thanks for showing me that the good old Typecast/Swap combination still seems to be a better way than using Flatten/Unflatten with the desired endian setting.
The reason for this is that LabVIEW originates from the Mac with its 68000 CPU, which was always a big endian CPU. While the later PPCs in the PowerPC Macs could use either big or little endian as their preferred format, Apple chose to stick with the same big endian format that came from the 68k.
When NI ported LabVIEW to Windows (and later to other architectures such as Sparc and PA-RISC), they had to tackle a problem. To send binary data to a GPIB device or over the network, one had always used the Typecast or Flatten operator to convert it into a binary string, and it would have been very nice if data sent over the network or written into a binary file by a LabVIEW program on the Mac could easily be read by a LabVIEW program on Windows. This required the same byte order for flattened data, so the flattened format was specified to be always big endian, independent of the platform LabVIEW is running on.
A C typecast is difficult to do in LabVIEW. Trying to do it with a small piece of external code could be an option, but it is quite tricky. It is not enough to simply swap the handles; you also need to adjust the array length in the handle accordingly, so a different function would be required for each integer size.
Rolf Kalbermatter