
Move C code to LabVIEW -- writing to binary file



I have some existing code that writes to a binary file. We are trying to move this to LabVIEW so that our log files will be in the same format as those in the existing system which we are replacing. 

 

In an email from our customer, I got the following:

 

"The application was built with the byte alignment option set to one"

Dfr.Hdr.RecordLen = 512;

WriteFile (DfileHandle, (char *)&Dfr, Dfr.Hdr.RecordLen, &byteswritten, NULL);          

 

Dfr is a structure that contains a small header structure and a union of other structures, depending on the particular record being logged. I'm not concerned about the union part, as I'm rewriting this with LVOOP and the record types are different classes which will "format themselves" to be logged.
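
For reference, the layout being described is roughly along these lines; only Dfr, Hdr and RecordLen appear in the code I was given, so the remaining names and record types below are placeholders:

#pragma pack(push, 1)                 /* "byte alignment option set to one" */
typedef struct {
    unsigned long  RecordLen;         /* total bytes written for this record */
    unsigned short RecordType;        /* hypothetical: selects the union member below */
    /* ... remaining header fields ... */
} DFR_HDR;

typedef struct {
    DFR_HDR Hdr;
    union {                           /* one member per record type being logged */
        struct { double Timestamp; float Value; } Measurement;   /* placeholder */
        struct { char   Text[256]; }              EventMessage;  /* placeholder */
    } u;
} DFR;
#pragma pack(pop)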

 

I'm quite positive that I can't just typecast a LabVIEW cluster to a byte array and still have this work, so I am assuming much of this formatting will have to be done manually, including padding/aligning the data. I am looking for direction as to first steps to take to make sure I'm not overlooking anything. If you need more information please let me know; I just put in the bare bones of what I thought would be enough.
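
To show why I'm worried about the padding, here is a small standalone C example (not from the existing application) comparing a structure's size under the compiler's default alignment with 1-byte packing:

#include <stdio.h>

struct with_padding { unsigned char type; double value; unsigned int count; };

#pragma pack(push, 1)
struct packed       { unsigned char type; double value; unsigned int count; };
#pragma pack(pop)

int main(void)
{
    /* Often 24 bytes with Visual C's default alignment (padding after 'type'
       and after 'count'), but exactly 13 bytes when packed to 1. */
    printf("default alignment: %u bytes\n", (unsigned)sizeof(struct with_padding));
    printf("packed to 1:       %u bytes\n", (unsigned)sizeof(struct packed));
    return 0;
}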

 

 

I'm quite positive that I can't just typecast a LabVIEW cluster to a byte array and still have this work, so I am assuming much of this formatting will have to be done manually, including padding/aligning the data. I am looking for direction as to first steps to take to make sure I'm not overlooking anything. If you need more information please let me know; I just put in the bare bones of what I thought would be enough.

What makes you positive that the typecast won't work? If "byte alignment option set to 1" means there's no padding (aligned on bytes, rather than multiples of bytes), then it's possible that you won't need much more than a typecast, since LabVIEW packs clusters with no padding. I'd recommend that you flatten to/unflatten from a string instead of typecasting, since that gives you the option to swap endianness.

You might want to use data structures that do not exactly match the ones in C. For example, you could create a cluster that is only the header and then separate clusters for each item of the union, even if those are all part of the same struct in C. Then you read only the header cluster and extract the relevant field to determine which other cluster to use for the data that follows.

I'm making some assumptions here about how the struct is written to disk; if the WriteFile function does something interesting with it, then you'll need to look at that.
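
Sketched in C terms (a LabVIEW diagram can't be pasted here), and with a hypothetical RecordType field standing in for whatever the real discriminator is, that header-first approach would look roughly like this:

#include <stdio.h>

#pragma pack(push, 1)
typedef struct {                      /* the header-only part, read first */
    unsigned long  RecordLen;         /* from the original code           */
    unsigned short RecordType;        /* hypothetical discriminator field */
    /* ... rest of the header ... */
} DFR_HDR;
#pragma pack(pop)

int read_one_record(FILE *f)
{
    DFR_HDR hdr;
    unsigned char payload[4096];                /* hypothetical maximum payload size */
    size_t body;

    if (fread(&hdr, sizeof hdr, 1, f) != 1)
        return 0;                               /* end of file or read error */

    body = hdr.RecordLen - sizeof hdr;          /* bytes that follow the header */
    if (body > sizeof payload || fread(payload, 1, body, f) != body)
        return 0;

    switch (hdr.RecordType) {
    case 1:  /* unflatten the "measurement" cluster from payload */  break;
    case 2:  /* unflatten the "event" cluster from payload       */  break;
    default: /* unknown record type: ignore the payload          */  break;
    }
    return 1;
}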

 

I assume you have some sample files sitting around. I'd start by trying to read them with a simple Unflatten From String. If you get the right values for any one-byte values and anything longer is wrong, swap the endianness and try again.
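
As a quick, purely illustrative example of what a byte-order mismatch looks like (none of this is from the actual file format), the same 32-bit value read with the wrong endianness comes out wildly different, while single bytes are unaffected:

#include <stdio.h>
#include <stdint.h>

static uint32_t swap32(uint32_t v)      /* reverse the byte order of a 32-bit value */
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

int main(void)
{
    uint32_t record_len = 512;          /* the value the file actually contains */
    printf("right byte order: %u\n", record_len);          /* 512    */
    printf("wrong byte order: %u\n", swap32(record_len));  /* 131072 */
    return 0;
}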

I assume you have some sample files sitting around. I'd start by trying to read them with a simple Unflatten From String. If you get the right values for any one-byte values and anything longer is wrong, swap the endianness and try again.

Thanks, I will try this. I don't have a sample file yet, but I will be getting one. I was going to use Write to Binary File, which allows the swap to little-endian (what I need for this application), but that is a good point about Flatten/Unflatten From String. I will take a look at that. Thanks!


This will be running on a PXI chassis, which I think runs Pharlap or VxWorks, but the previous code was running on something that was modern at the time I was born, so I have no idea what the byte alignment on that was. The data was then read in with a more recently written program, using what looks to be C++/MFC. This application doesn't do anything fancy; it just reads the records into a structure: lRet = hDataFile->Read((char *)&Dfr+512, Dfr.Hdr.RecordLen-512);
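
My reading of that snippet is that the first 512 bytes of each record are a fixed header block and the rest is the payload; in plain C (rather than MFC), and with the struct layout guessed since the real definitions weren't posted, the whole read would presumably look something like this:

#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t RecordLen;               /* total record length, including the 512-byte header block */
    uint8_t  HdrRest[512 - 4];        /* rest of the fixed-size header block                       */
    uint8_t  Body[4096];              /* hypothetical maximum payload size                         */
} DFR;
#pragma pack(pop)

int read_full_record(FILE *f, DFR *dfr)
{
    size_t body;

    if (fread(dfr, 512, 1, f) != 1)                  /* the fixed 512-byte header block */
        return 0;

    body = dfr->RecordLen - 512;                     /* payload bytes still on disk */
    if (body > sizeof dfr->Body)
        return 0;                                    /* bad or truncated record */
    if (body > 0 && fread((char *)dfr + 512, 1, body, f) != body)   /* same pointer math as the MFC Read */
        return 0;
    return 1;
}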

 

I suppose I'll have to do some testing by trial and error to see what I get.

 

Edit: I did some digging. I remembered that the old application transferred the data, and all logging was done on the Windows side. This means it had 1-byte alignment, but now that's probably out the window (no pun intended). Does this imply I will have to write element by element and cannot just flatten a cluster?

Edited by for(imstuck)
This will be running on a PXI chassis, which I think runs Pharlap or VxWorks, but the previous code was running on something that was modern at the time I was born, so I have no idea what the byte alignment on that was. The data was then read in with a more recently written program, using what looks to be C++/MFC. This application doesn't do anything fancy; it just reads the records into a structure: lRet = hDataFile->Read((char *)&Dfr+512, Dfr.Hdr.RecordLen-512);

 

I suppose I'll have to do some testing by trial and error to see what I get.

 

Edit: I did some digging. I remembered that the old application transferred the data, and all logging was done on the Windows side. This means it had 1-byte alignment, but now that's probably out the window (no pun intended). Does this imply I will have to write element by element and cannot just flatten a cluster?

 

PXI comes in Windows or RT flavours. RT (aka Pharlap ETS) is a cut-down Windows kernel, so for your purpose it makes no difference. VxWorks is for PowerPC platforms, so you will only see that on [some] cRIOs or FieldPoint units. I wouldn't worry about it. I was just pointing out that byte alignment padding is not the same across platforms and assuming clusters are byte aligned can yield unexpected results on them.

I wouldn't worry about it. I was just pointing out that byte alignment padding is not the same across platforms and assuming clusters are byte aligned can yield unexpected results on them.

Yes, thanks for the clarification. I meant that LabVIEW will remove padding when flattening to a string; a flattened string is portable between LabVIEW platforms (although different platforms may not produce identical strings or values due to differences in floating-point representations). Since it sounds like your data is stored without any padding (it would be unusual if it did include the padding bytes, especially on an old system designed when storage space was more expensive), it is likely you can flatten or unflatten on any LabVIEW platform without problems.

 

ShaunR's comments apply when manipulating LabVIEW data in memory on different platforms, for example when passing a LabVIEW cluster to a C DLL. That's not what's happening here, though.

Edited by ned

And that is also only on Windows 32-bit, AFAIK, which is an increasingly important distinction. I thought LabVIEW for Windows 64-bit uses 8-byte alignment, which is the default for Visual C, so there might be a documentation CAR in order.

 

One thing to mention, though: I believe Flatten and Unflatten go to great lengths to make sure that the resulting flattened byte stream is compatible across platforms. This includes things like using big-endian as the default byte order, using 128 bits for extended-precision numbers on all platforms (even though most LabVIEW platforms nowadays internally use the 80-bit extended floating-point format of the x86 CPU architecture, and a few simply use the double format to make LabVIEW trivial to port) 1), and using a byte alignment of 1 for all elements. And since the Binary File Read and Write functions supposedly use Flatten and Unflatten internally too, they should also be safe to use for all applications and should be fully platform independent. If they are not, that would really be a bug!
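
As a rough C illustration of those properties (big-endian byte order, no padding between elements), rather than a claim about LabVIEW's exact byte stream, flattening something like a cluster of a U16 and a DBL would give:

#include <stdio.h>
#include <stdint.h>

static size_t put_be(uint8_t *out, const void *src, size_t len)
{
    /* write the bytes of *src in big-endian order; assumes a little-endian host */
    const uint8_t *p = (const uint8_t *)src;
    size_t i;
    for (i = 0; i < len; i++)
        out[i] = p[len - 1 - i];
    return len;
}

int main(void)
{
    uint16_t type  = 7;
    double   value = 3.25;
    uint8_t  flat[sizeof type + sizeof value];   /* 10 bytes, no padding between elements */
    size_t   n = 0, i;

    n += put_be(flat + n, &type,  sizeof type);
    n += put_be(flat + n, &value, sizeof value);

    for (i = 0; i < n; i++)
        printf("%02X ", flat[i]);
    printf("\n");                                /* 00 07 40 0A 00 00 00 00 00 00 */
    return 0;
}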

 

The byte alignment as discussed here only comes into play when you pass LabVIEW native data to external code like shared libraries and CINs. Here the data is aligned with the LabVIEW default alignment for that platform.
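
For example, a 32-bit Windows DLL that receives such a cluster by pointer has to declare a struct whose packing matches that alignment (the names here are hypothetical, and the 1-byte packing applies to 32-bit LabVIEW for Windows only):

#include <stdint.h>

#pragma pack(push, 1)                 /* match the 1-byte alignment 32-bit LabVIEW for Windows
                                         uses in memory; other platforms use the compiler default */
typedef struct {
    uint8_t flag;                     /* a U8 in the cluster                        */
    double  value;                    /* a DBL; no padding between the two elements */
} MyCluster;                          /* hypothetical cluster layout                */
#pragma pack(pop)

__declspec(dllexport) int32_t ProcessCluster(const MyCluster *data)
{
    return data->flag ? (int32_t)data->value : 0;
}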

 

1) In fact, the only platform ever making use of true 128-bit extended floating-point numbers was the SPARC platform. But that was not based on a CPU extended floating-point format but on a Sun software library implementing extended floating-point arithmetic. As such it was quite slow, and it died completely when the Solaris version of LabVIEW was discontinued. Nowadays the extended floating-point format is really an 80-bit format on most platforms (and on VxWorks platforms it is internally really just a double).

