Bruniii Posted June 16, 2022 (edited)

Dear all,

I'm trying to load files made of consecutive blocks of binary data; each block has the following structure: At first I tried a for loop that, on each iteration, reads the header of a block and then reads the following "N samples" floats. But every time, after some iterations, something goes wrong and the payload/header numbers become huge. Each file "breaks" at a different point. Below are two images: one shows the first blocks (where everything is still fine), the other shows the full file converted, where you can see the huge numbers. I have also tried a much simpler method; it's the section at the top of the snippet.

The real problem is that the following MATLAB code, running on the same file, has no problem and the converted array is as expected:

fileID = fopen('file.bin');
A = fread(fileID,[1010 600],'float','b');
fclose(fileID);
a = reshape(A(11:end,:),1,[]);
figure, plot(a)

Can anyone help me find the problem with the LabVIEW code? This is the binary file, if anybody wants to try: file.bin — it fails after the block with index 46.

Thank you,
Marco.

Edited June 16, 2022 by Bruniii
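For readers without LabVIEW at hand, the layout implied by the MATLAB call (1010 big-endian singles per column, 600 columns, with the first 10 values of each column discarded as header) can be sketched in C roughly as follows. The 40-byte header, 1000-sample payload and 600-block count are assumptions read off the MATLAB snippet, not a format specification:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Convert a 4-byte big-endian buffer to a host float. */
static float be_to_float(const unsigned char b[4])
{
    uint32_t u = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                 ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    float f;
    memcpy(&f, &u, sizeof f);   /* reinterpret the bits, no numeric conversion */
    return f;
}

int main(void)
{
    /* Sizes inferred from the MATLAB fread([1010 600],'float','b') call. */
    enum { HEADER_BYTES = 40, N_SAMPLES = 1000, N_BLOCKS = 600 };
    static float samples[N_BLOCKS][N_SAMPLES];

    FILE *fp = fopen("file.bin", "rb");   /* "rb": no end-of-line translation */
    if (!fp) return 1;

    for (int blk = 0; blk < N_BLOCKS; blk++) {
        unsigned char header[HEADER_BYTES];
        unsigned char raw[4];
        if (fread(header, 1, HEADER_BYTES, fp) != HEADER_BYTES) break;
        for (int i = 0; i < N_SAMPLES; i++) {
            if (fread(raw, 1, 4, fp) != 4) break;
            samples[blk][i] = be_to_float(raw);
        }
    }
    fclose(fp);
    return 0;
}

Any reader that interprets the same bytes with a different element size, byte order, or header length drifts out of alignment in exactly the way the broken conversion does.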
dadreamer Posted June 16, 2022

1 hour ago, Bruniii said: The real problem is that the following MATLAB code, running on the same file, has no problem and the converted array is as expected:

Why do you use Read from Text File VI instead of Read from Binary File VI? Read from Text File VI does End-Of-Line character conversion by default. It can be disabled in the right-click (RMB) context menu of the node.
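The failure mode dadreamer describes is much like what happens in C when a binary file is opened in text mode on Windows: any byte pair that happens to look like CR LF (0x0D 0x0A) is collapsed to a single LF, so from that point on every value is read from the wrong offset. A minimal sketch of that analogy, assuming a Windows C runtime where "r" implies text-mode translation:

#include <stdio.h>

int main(void)
{
    /* Write four bytes that contain a CR LF pair in the middle. */
    const unsigned char out[4] = { 0x41, 0x0D, 0x0A, 0x42 };
    FILE *fp = fopen("demo.bin", "wb");
    fwrite(out, 1, sizeof out, fp);
    fclose(fp);

    unsigned char in[4] = { 0 };

    /* Text mode ("r"): on Windows the CR LF pair is collapsed to LF,
       so only 3 bytes come back and everything after it is shifted. */
    fp = fopen("demo.bin", "r");
    size_t n_text = fread(in, 1, sizeof in, fp);
    fclose(fp);

    /* Binary mode ("rb"): all 4 bytes are returned untouched. */
    fp = fopen("demo.bin", "rb");
    size_t n_bin = fread(in, 1, sizeof in, fp);
    fclose(fp);

    printf("text mode read %zu bytes, binary mode read %zu bytes\n",
           n_text, n_bin);
    return 0;
}

The corruption only strikes where the CR LF pattern happens to occur in the data, which matches the observation that each file "breaks" at a different block.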
Bruniii (Author) Posted June 16, 2022

5 minutes ago, dadreamer said: Why do you use Read from Text File VI instead of Read from Binary File VI? Read from Text File VI does End-Of-Line character conversion by default. It can be disabled in the right-click (RMB) context menu of the node.

😶 That's a really good point... because I haven't thought about it enough, I guess. After disabling the End-Of-Line conversion, everything is fine; and I'm sure it will also work using Read from Binary File. Thank you. Marco.
ShaunR Posted June 16, 2022 (edited)

I'm guessing that your "array of floats" is an array of doubles but the floats are singles. Give this a try: Untitled 1.vi

EDIT: Ah. You already fixed it while I was posting.

Edited June 16, 2022 by ShaunR
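ShaunR's guess is easy to illustrate outside LabVIEW: if 4-byte singles are deserialized as 8-byte doubles, every read consumes two samples' worth of bytes and interprets them with the wrong exponent layout, which typically shows up as absurdly large or vanishingly small values. A small C sketch of that mismatch, using made-up sample values:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Two ordinary single-precision samples, serialized back to back. */
    float singles[2] = { 1.5f, -2.25f };
    unsigned char bytes[8];
    memcpy(bytes, singles, sizeof bytes);

    /* Reading the same 8 bytes as one double gives a meaningless value,
       which is how a single/double mismatch usually shows up. */
    double wrong;
    memcpy(&wrong, bytes, sizeof wrong);

    printf("as singles: %g %g\n", singles[0], singles[1]);
    printf("as a double: %g\n", wrong);
    return 0;
}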
Bruniii (Author) Posted June 16, 2022

7 minutes ago, ShaunR said: I'm guessing that your "array of floats" is an array of doubles but the floats are singles. Give this a try: Untitled 1.vi

The array of floats is indeed an array of singles; after the "End-Of-Line" suggestion by dadreamer, my implementation is working fine. But yours is cleaner and very much appreciated. Thank you!
Rolf Kalbermatter Posted June 16, 2022

56 minutes ago, Bruniii said: The array of floats is indeed an array of singles; after the "End-Of-Line" suggestion by dadreamer, my implementation is working fine. But yours is cleaner and very much appreciated. Thank you!

What system created these data? It is extremely rare nowadays for data to be stored in Big Endian format, which is incidentally what LabVIEW prefers. You should get rid of the Typecast in there, however. Currently the read of the header uses the default byte ordering, which tells the Binary File Read to use Big Endian, so LabVIEW will swap the bytes on read to make them Little Endian, since you work on x86 hardware. So far so good. You can directly replace the u32 constant in there with a single-precision float constant and forget about the Typecast altogether.
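Byte order is the other classic source of "huge numbers" in this kind of parsing: the same four bytes interpreted with the wrong endianness still form a valid float bit pattern, just a nonsensical one. A small C illustration of the point, independent of the actual LabVIEW wiring:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* Big-endian encoding of the single-precision value 1.0f (0x3F800000). */
    const unsigned char be[4] = { 0x3F, 0x80, 0x00, 0x00 };

    /* Assembled with the correct (big-endian) byte order. */
    uint32_t ok_bits = ((uint32_t)be[0] << 24) | ((uint32_t)be[1] << 16) |
                       ((uint32_t)be[2] << 8)  |  (uint32_t)be[3];

    /* Assembled with the wrong (little-endian) byte order. */
    uint32_t bad_bits = ((uint32_t)be[3] << 24) | ((uint32_t)be[2] << 16) |
                        ((uint32_t)be[1] << 8)  |  (uint32_t)be[0];

    float ok, bad;
    memcpy(&ok,  &ok_bits,  sizeof ok);    /* bit-level reinterpretation */
    memcpy(&bad, &bad_bits, sizeof bad);

    /* Prints 1 and a denormal around 4.6e-41; swap the roles and you get
       the huge values seen in the broken conversion instead. */
    printf("right order: %g, wrong order: %g\n", ok, bad);
    return 0;
}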
Bruniii (Author) Posted June 16, 2022 (edited)

4 minutes ago, Rolf Kalbermatter said: What system created these data? It is extremely rare nowadays for data to be stored in Big Endian format, which is incidentally what LabVIEW prefers. You should get rid of the Typecast in there, however. Currently the read of the header uses the default byte ordering, which tells the Binary File Read to use Big Endian, so LabVIEW will swap the bytes on read to make them Little Endian, since you work on x86 hardware. So far so good. You can directly replace the u32 constant in there with a single-precision float constant and forget about the Typecast altogether.

The data are created by LabVIEW 😅 and then streamed to a remote server for storage. Now I'm writing the VI that reads the data back to perform some offline analysis. Yes, I have already changed the VI from ShaunR and now it looks like this (forgot about the "u32" label). Thank you. Marco.

Edited June 16, 2022 by Bruniii
Rolf Kalbermatter Posted June 16, 2022 (edited)

The interesting thing is that they document the header values to be 64-bit "long" numbers. Traditionally, long in C has always been a 32-bit integer and the same as an int (except on 16-bit platforms, including Windows 3.1, where an int was a 16-bit integer). Only in GCC (or on Unix) is a long defined to be a 64-bit integer. Windows (and Microsoft compilers) continues to treat a long as a 32-bit integer even in 64-bit mode. If you want to be sure to get 64 bits you should use long long or a compiler/SDK-defined private type such as _int64 for MSC or QUAD for Windows APIs. Newer C compilers also tend to have the standard types such as int64_t that you can use when you want a specific integer size.

Edited June 16, 2022 by Rolf Kalbermatter
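A portable way around that trap is to avoid long entirely and declare anything documented as "64-bit" with the fixed-width types from <stdint.h>. A short C check along these lines (the printed sizes depend on the platform's data model, e.g. long is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux or macOS):

#include <stdio.h>
#include <stdint.h>
#include <assert.h>

int main(void)
{
    /* Platform-dependent: 4 on 64-bit Windows (LLP64), 8 on 64-bit Linux/macOS (LP64). */
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));

    /* Guaranteed width regardless of compiler and OS. */
    printf("sizeof(int64_t)   = %zu\n", sizeof(int64_t));

    /* A header field documented as "64-bit" is therefore best declared
       as int64_t/uint64_t, never as long. */
    static_assert(sizeof(int64_t) == 8, "int64_t must be 8 bytes");
    return 0;
}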
ShaunR Posted June 16, 2022

1 hour ago, Rolf Kalbermatter said: You can directly replace the u32 constant in there with a single-precision float constant and forget about the Typecast altogether.

A bugger to debug, though.

32 minutes ago, Rolf Kalbermatter said: Only in GCC (or on Unix) is a long defined to be a 64-bit integer.

And on Mac with the Intel compiler, I believe.
Rolf Kalbermatter Posted June 16, 2022 (edited)

1 hour ago, ShaunR said: A bugger to debug, though. And on Mac with the Intel compiler, I believe.

Very possible, since the Mac is technically Unix too: BSD Unix at that, but still Unix. Intel tries to make their compiler behave as the platform expects. Microsoft tends to make it behave as they feel is right. Although I would expect their Visual Studio Code platform to at least have a configurable switch somewhere in one of its many configuration dialogs to determine whether it should behave like GCC on non-Windows platforms in this respect. It's not as if it would be much of a problem to add "yet another configuration switch" to the zillion already existing ones.

Edited June 16, 2022 by Rolf Kalbermatter