Conversion of binary data in a file



Dear all,

I'm trying to load files with blocks of consecutive binary data; each block has the following structure:

[image: block.png — structure of one block]

At first I tried a for loop that reads the header of each block and then reads the following "N samples" floats on each iteration. But every time, after some number of iterations, something goes wrong and the payload/header numbers become huge. Each file "breaks" at a different point. Below are two images: the first shows the initial blocks (where everything is still fine), the second the fully converted file, where you can see the huge numbers.

[images: LabVIEW_upToBlock46.png and LabVIEW_full.png]

I have also tried a much simpler method; it's the section at the top of the snippet.

[image: viSnippet.png]

The real problem is that the following MATLAB code, running on the same file, has no such problem and the converted array is as expected:

fileID = fopen('file.bin');
% Read 600 blocks of 1010 big-endian ('b') single-precision values,
% one block per column.
A = fread(fileID,[1010 600],'float','b');
fclose(fileID);

% Drop the 10 header values at the top of each block, then flatten
% the remaining 1000 samples per block into a single row vector.
a = reshape(A(11:end,:),1,[]);
figure, plot(a)

[image: matlab.png]

Can anyone help me find the problem with the LabVIEW code? Here is the binary file if anybody wants to try: file.bin. It fails after the block with index 46.

Thank you,

Marco.

 

1 hour ago, Bruniii said:

The real problem is that the following MATLAB code, running on the same file, has no such problem and the converted array is as expected:

Why do you use Read from Text File VI instead of Read from Binary File VI?

[image: 2022-06-16_14-06-33.jpg]

Read from Text File VI performs End-Of-Line character conversion by default. It can be disabled in the RMB context menu of the node.
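To see why that breaks binary data: the conversion collapses end-of-line byte sequences (CR LF, 0x0D 0x0A) into a single LF, so from the first block whose float bytes happen to contain such a pair, every following byte lands one position early and all subsequent headers and samples are misread. A minimal C sketch of the effect (illustrative only, not LabVIEW's actual implementation):

#include <stdio.h>

/* Simulate a text-mode read that converts every CR LF (0x0D 0x0A)
   byte pair into a single LF (0x0A). */
size_t eol_convert(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i] == 0x0D && i + 1 < n && in[i + 1] == 0x0A)
            continue;               /* drop the CR, keep only the LF */
        out[o++] = in[i];
    }
    return o;
}

int main(void)
{
    /* A float whose big-endian byte pattern happens to contain CR LF. */
    unsigned char raw[4]  = {0x41, 0x20, 0x0D, 0x0A};
    unsigned char conv[4] = {0};
    size_t n = eol_convert(raw, sizeof raw, conv);

    printf("read %zu of %zu bytes\n", n, sizeof raw);  /* 3 of 4 */
    /* Every byte after the dropped CR now lands one position early,
       so all subsequent headers and samples are misinterpreted. */
    return 0;
}

That also matches the symptom above: each file breaks at a different point, wherever its first CR LF pair happens to fall (after block 46 in the posted file).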

 

5 minutes ago, dadreamer said:

Why do you use Read from Text File VI instead of Read from Binary File VI?

[image: 2022-06-16_14-06-33.jpg]

Read from Text File VI performs End-Of-Line character conversion by default. It can be disabled in the RMB context menu of the node.

 

😶 That's a really nice point... because I hadn't thought about it enough, I guess. After disabling the End-Of-Line conversion, everything is fine; and I'm sure it will also work using Read from Binary File.

Thank you.

Marco.

7 minutes ago, ShaunR said:

I'm guessing that your "array of floats" is an Array of Doubles but the floats are singles.

Give this a try.

[attachment: Untitled 1.vi, 15.31 kB]

[image: image.png]

The array of floats is indeed an array of singles; after the "End-Of-Line" suggestion by dadreamer, my implementation is working fine. But yours is cleaner and very much appreciated. Thank you!
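As an aside, the mismatch ShaunR guessed at would corrupt the data too: interpreting 4-byte singles with an 8-byte double type merges pairs of samples into meaningless values. A tiny C illustration (the sample values are arbitrary):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Two 4-byte singles occupy the same 8 bytes as one double. */
    float singles[2] = {1.5f, 2.5f};
    double misread;
    memcpy(&misread, singles, sizeof misread);

    /* Reading a stream of singles with the wrong element size halves
       the sample count and yields meaningless numbers. */
    printf("as double: %g (the samples 1.5 and 2.5 are gone)\n", misread);
    return 0;
}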

56 minutes ago, Bruniii said:

The array of floats is indeed an array of singles; after the "End-Of-Line" suggestion by dadreamer, my implementation is working fine. But yours is cleaner and very much appreciated. Thank you!

What system created these data? It is extremely rare nowadays for data to be stored in Big Endian format, which incidentally is what LabVIEW prefers. You need to get rid of the Typecast in there, however. Currently the read of the header uses default byte ordering, which tells the Binary File Read to use Big Endian, so LabVIEW will swap the bytes on read to make them Little Endian, since you work on x86 hardware. So far so good.

You can directly replace the u32 constant in there with a single precision float constant and forget about the Typecast altogether.
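For anyone curious what that byte swap amounts to, here is a hedged C sketch of decoding one big-endian single on little-endian (x86) hardware; this is effectively what LabVIEW does for you when the Binary File Read is left at the default byte order:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Assemble a big-endian 32-bit word, then reinterpret its bit
   pattern as a float (no value conversion takes place). */
static float be_single(const unsigned char *p)
{
    uint32_t u = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                 ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    unsigned char bytes[4] = {0x3F, 0xC0, 0x00, 0x00};  /* 1.5f, big-endian */
    printf("%g\n", be_single(bytes));                   /* prints 1.5 */
    return 0;
}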

4 minutes ago, Rolf Kalbermatter said:

What system created these data? It is extremely rare nowadays for data to be stored in Big Endian format, which incidentally is what LabVIEW prefers. You need to get rid of the Typecast in there, however. Currently the read of the header uses default byte ordering, which tells the Binary File Read to use Big Endian, so LabVIEW will swap the bytes on read to make them Little Endian, since you work on x86 hardware. So far so good.

You can directly replace the u32 constant in there with a single precision float constant and forget about the Typecast altogether.

The data are created by LabVIEW 😅 and then streamed to a remote server for storage. Now I'm writing the VI that has to read the data back to perform some offline analysis.

Yes, I have already changed the VI from ShaunR and now it looks like this (forget about the "u32" label):

[image: snip.png]

 

Thank you.

Marco.


The interesting thing is that they document the header bytes to be 64-bit long numbers. Traditionally, long in C has always been a 32-bit integer and the same as an int (except on 16-bit platforms, including Windows 3.1, where an int was a 16-bit integer).

Only in GCC (or on Unix) is a long defined to be a 64-bit integer. Windows (and Microsoft compilers) continues to treat a long as a 32-bit integer even in 64-bit mode. If you want to be sure to get 64 bits, you should use long long or a compiler/SDK-defined private type such as __int64 for MSC or QUAD for Windows APIs. Newer C compilers also tend to have the standard types such as int64_t that you can use when you want a specific integer size.
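A quick way to see the platform differences described above; on 64-bit Windows (LLP64) the first line prints 4, while on 64-bit Linux or macOS (LP64) it prints 8:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* sizeof(long) is platform-dependent: 4 bytes under LLP64 (Win64),
       8 bytes under LP64 (most 64-bit Unix systems). */
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long)); /* at least 8 */
    /* int64_t from <stdint.h> is exactly 8 bytes everywhere, which is
       what you want when documenting a file header layout. */
    printf("sizeof(int64_t)   = %zu\n", sizeof(int64_t));
    return 0;
}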

1 hour ago, Rolf Kalbermatter said:

You can directly replace the u32 constant in there with a single precision float constant and forget about the Typecast altogether.

A bugger to debug though ;)

32 minutes ago, Rolf Kalbermatter said:

Only in GCC (or on Unix) is a long defined to be a 64-bit integer.

And Mac with the Intel compiler, I believe.

1 hour ago, ShaunR said:

A bugger to debug though ;)

And Mac with the Intel compiler, I believe.

Very possible, since the Mac is technically Unix too, BSD Unix at that, but still Unix. Intel tries to make their compiler behave as the platform expects; Microsoft tends to make it behave as they feel is right. Although I would expect their Visual Studio Code platform to at least have a configurable switch somewhere, in one of its many configuration dialogs, to determine whether it should behave like GCC on non-Windows platforms in this respect. It's not like it would be much of a problem to add "yet another configuration switch" to the zillion already existing ones.

