
What's the latest thinking on future compatibility of LabVIEW's flattened data?



I'm curious what the latest thinking is on using LabVIEW's flattened data for long-term data storage. Some years back, the data format changed such that some content would not read properly between different versions of LabVIEW without making some code adjustments. While I'm not a big fan of large collections of text files such as those that are sometimes created with the flatten to string function, my question is more about storing data in a database in binary form, for situations where LabVIEW is the only conceivable application that ever needs to access the data. In many situations, breaking the data into non-binary components would degrade performance significantly relative to binary data. My only concern with binary is how compatible it will be going forward. Just wondering how others are handling this.


I have been burned in the past using NI's binary save when, as you mentioned, newer versions were not able to open older versions. Since then I have generally created my own generic data format which would save the data in a form that I could use to populate a native NI structure such as a cluster. I would save each of my records in the most suitable format for the data. Strings would generally be a byte array, which would allow non-printable characters to be stored. My records would contain a header which would define the total length of the record. Within that record the elements would be stored with a length and a type field followed by the data itself. This is similar to the way variant data is stored if you looked at the raw data for a variant, but since NI has modified their storage methods in the past I chose not to rely on it being a static format.
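To make the record layout concrete, here is a minimal sketch in Python (a block diagram doesn't paste into text, so Python stands in); the type codes and field widths are my own assumptions for illustration, not NI's descriptors:

```python
import struct

# Hypothetical type codes for this sketch -- not NI's internal descriptors.
T_I32, T_DBL, T_STR = 1, 2, 3

def pack_element(type_code, payload):
    # Each element: 1-byte type code, 4-byte big-endian length, then the data.
    return struct.pack(">BI", type_code, len(payload)) + payload

def pack_record(elements):
    # Record header: the total length of the body that follows.
    body = b"".join(pack_element(t, p) for t, p in elements)
    return struct.pack(">I", len(body)) + body

record = pack_record([
    (T_I32, struct.pack(">i", 42)),
    (T_DBL, struct.pack(">d", 3.14159)),
    (T_STR, b"load\x00cell"),  # byte-array strings keep non-printable characters
])
```

Because every element carries its own length, a reader that hits an unknown type code can still skip ahead to the next element.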

Now if you are truly storing it in a database, then your table or tables should use the appropriate data type for the individual elements of your data. You only need to write a read and a write VI: one which decomposes your data into the respective columns in your table, and another which reads the data from the database and loads it into your cluster. You shouldn't actually have to translate the data between types. Integers will be stored in binary format; strings can be stored as byte arrays, strings, or blobs in the database. Other data can be stored in the appropriate format for the data as well.
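As a sketch of that decomposition (Python and SQLite purely for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect("measurements.db")  # hypothetical database
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        id      INTEGER PRIMARY KEY,
        channel INTEGER,   -- integer cluster element, stored natively
        gain    REAL,      -- double cluster element, stored natively
        raw     BLOB       -- string element kept as bytes, no translation
    )
""")

# The "write VI": decompose the cluster into columns.
conn.execute("INSERT INTO readings (channel, gain, raw) VALUES (?, ?, ?)",
             (3, 1.25, b"\x00\x01\xff"))
conn.commit()

# The "read VI": pull the columns back out and rebuild the cluster.
channel, gain, raw = conn.execute(
    "SELECT channel, gain, raw FROM readings ORDER BY id DESC LIMIT 1").fetchone()
```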


QUOTE (Mark Yedinak @ Apr 14 2009, 10:44 AM)

I have been burned in the past using NI's binary save when, as you mentioned, newer versions were not able to open older versions.

I don't understand this. The flatten/unflatten functions have compatibility modes. If you right-click on them, you can specify 4.x data or 7.x data. If you upgrade software written in those versions, those options will be selected automatically. NI has generally been good about this.

I'm not sure flattened data is my favorite storage format, but I wouldn't think to worry about upgrade issues. Since I haven't been storing that way, maybe I am off base here.

If you have performance issues, then I think you would need to explain the constraints before anyone else could give you an opinion about the easiest way to program around them. If you have real performance issues, then a real database with native data types like Mark suggested could be appropriate. If you are just 'worried' about performance, then just code up what is easiest, and if it performs acceptably, you could probably leave it alone.


QUOTE (jdunham @ Apr 14 2009, 01:01 PM)

I don't understand this. The flatten/unflatten functions have compatibility modes. If you right-click on them, you can specify 4.x data or 7.x data. If you upgrade software written in those versions, those options will be selected automatically. NI has generally been good about this.

Actually, I should clarify this, since I was burned using binary data. I used LabVIEW functions to write an array of clusters out to a file using the "Write to Binary File" VI. Admittedly I did this way back, in something like LV 4.0 or 5.0, and after an upgrade I was no longer able to read the data. I ended up writing a conversion VI. Since then I have avoided using "Write to Binary File" to directly write out clusters. NI may have gotten better about this, but I like to know that the format of the data will be consistent on disk so upgrades aren't a problem.

So the flatten to string functions may have worked, but I don't make use of them too often. I apologize if this has caused any confusion.


QUOTE (Mark Yedinak @ Apr 14 2009, 02:33 PM)

Actually, I should clarify this, since I was burned using binary data. I used LabVIEW functions to write an array of clusters out to a file using the "Write to Binary File" VI. Admittedly I did this way back, in something like LV 4.0 or 5.0, and after an upgrade I was no longer able to read the data. I ended up writing a conversion VI. Since then I have avoided using "Write to Binary File" to directly write out clusters. NI may have gotten better about this, but I like to know that the format of the data will be consistent on disk so upgrades aren't a problem.

So the flatten to string functions may have worked, but I don't make use of them too often. I apologize if this has caused any confusion.

Yes, this was the compatibility issue I experienced in the past, and it's what's prompting my question. At the time, we had multiple standalone applications accessing text files encoded with flattened string data. I forget just which direction the compatibility wouldn't work - forward or backward. The newer application was in 8.2.0 or 8.2.1; I don't recall how old the prior version was. Flatten to string had been used to transfer data generated by a data collection tool to another standalone tool for analysis, without having to expend any effort on structuring the data in any specific format, mainly due to the complexity of the content and the lack of any need for interchange other than LabVIEW to LabVIEW.

In regards to database performance, say I've just collected 1000 points of data from a load cell. No timestamps or anything, just a sequence of points at some implied sample rate. All point data is to be stored in the database. Only LabVIEW will ever be used to read the points. Analysis will always use all 1000 points as a group. There is no need to ever query the data, e.g. find the maximum value using SQL. If I were to write the data to a table in the database as individual rows, it might require, say, thirty seconds or longer to complete the write (even when implemented as a parameterized query, as would be most appropriate in this case). Sure, it could be spawned as a background task so the user isn't held up, but in this case I've generally been either formatting the data into a binary/ASCII form with some custom syntax, or using the flatten to string functions. Either way, writing a single row with a large binary object is relatively quick from an OLE DB/SQL standpoint, in contrast to individual rows. My question is whether anyone has any reservations about flattened data. Generally it's pretty compact, at least relative to representing the same data in ASCII.
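For a rough picture of the single-row alternative being described, here is a Python/SQLite sketch (names invented; any database with parameterized queries follows the same shape): one insert of a packed blob replaces a thousand row inserts.

```python
import sqlite3, struct

points = [float(i) for i in range(1000)]  # 1000 load-cell samples

conn = sqlite3.connect("loadcell.db")     # hypothetical database
conn.execute("CREATE TABLE IF NOT EXISTS runs (id INTEGER PRIMARY KEY, data BLOB)")

# Write: pack all samples as big-endian doubles and insert a single row.
blob = struct.pack(">%dd" % len(points), *points)
conn.execute("INSERT INTO runs (data) VALUES (?)", (blob,))
conn.commit()

# Read: one SELECT, one unpack -- the whole group comes back together.
(raw,) = conn.execute("SELECT data FROM runs ORDER BY id DESC LIMIT 1").fetchone()
restored = list(struct.unpack(">%dd" % (len(raw) // 8), raw))
assert restored == points
```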


If you want to ensure compatibility, couldn't you create a single binary buffer containing all of your data? If all of your data points are 32 bits, then cycling through your array of 1000 data points and creating a single buffer with all of the data (4 bytes for each data point) would be a very fast process, and when you read the data out you simply dump every 4 bytes into an array element. Again, this process will be fast and you won't have to worry about NI changing the flatten to string format.
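A minimal sketch of that suggestion in Python (the big-endian '>' byte order is an arbitrary choice for the example; either order works as long as it's documented):

```python
import struct

samples = [100, -200, 300]  # 32-bit readings

# Write: exactly 4 bytes per point, in a byte order you choose yourself,
# so no LabVIEW upgrade can change the layout out from under you.
buf = struct.pack(">%di" % len(samples), *samples)

# Read: dump every 4 bytes back into an array element.
restored = list(struct.unpack(">%di" % (len(buf) // 4), buf))
assert restored == samples
```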


I certainly hope the flatten to string protocol won't be changing anytime soon, as that's how I serialize data for transmission over TCP/IP to and from non-LabVIEW targets. And I don't think it will. I think the subject of binary data files (and datalog files - that one has bitten me on LabVIEW release changes) is different - at least I hope so.

Mark


QUOTE (mesmith @ Apr 14 2009, 10:46 PM)

... binary data files (and datalog files - that one has bitten me on LabVIEW release changes) is different

Is that because of changes in the header to the datalog file, or because the actual encoding changed?


QUOTE (Mark Yedinak @ Apr 14 2009, 03:54 PM)

If you want to ensure compatibility, couldn't you create a single binary buffer containing all of your data? If all of your data points are 32 bits, then cycling through your array of 1000 data points and creating a single buffer with all of the data (4 bytes for each data point) would be a very fast process, and when you read the data out you simply dump every 4 bytes into an array element. Again, this process will be fast and you won't have to worry about NI changing the flatten to string format.

Yeah, this is the sort of semi-custom binary format that's easy to implement for this type of data. It takes a bit more work to build interfaces for more complicated data types. The best code I've worked with incorporated custom headers ahead of the data, not all that different from the headers produced by some of the flatten functions. If a version identifier gets incorporated, readers and writers could even be expected to cope with revisions. The one good thing about a defined structure is that it provides ready compatibility with external code. I have some spare time right now, as I'm back in job-search mode at present; one of the things I'm contemplating is producing a group of functions for encoding/decoding generic data in and out of this sort of database field. Perhaps something that reads/writes a group of variants, with extensibility and compatibility on par with XML or along the lines of the config file VIs. The key would be that the low-level interface should be unaware of the data types or even how many elements are being stored.
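As a sketch of the version-identifier idea (the header layout and version numbers here are invented for illustration):

```python
import struct

FORMAT_VERSION = 2  # bumped whenever the element layout changes

def pack_blob(values):
    body = struct.pack(">I%dd" % len(values), len(values), *values)
    # Header: 2-byte version, 4-byte body length; readers dispatch on version.
    return struct.pack(">HI", FORMAT_VERSION, len(body)) + body

def unpack_blob(blob):
    version, length = struct.unpack_from(">HI", blob)
    body = blob[6:6 + length]
    if version == 1:  # legacy layout: bare doubles, no count
        return list(struct.unpack(">%dd" % (len(body) // 8), body))
    if version == 2:  # current layout: count-prefixed doubles
        (n,) = struct.unpack_from(">I", body)
        return list(struct.unpack_from(">%dd" % n, body, 4))
    raise ValueError("unknown format version %d" % version)

assert unpack_blob(pack_blob([1.0, 2.0])) == [1.0, 2.0]
```

A new writer only ever emits the current version, while the reader keeps one small decode branch per historical layout.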


QUOTE (Dirk J. @ Apr 14 2009, 03:25 PM)

Is that because of changes in the header to the datalog file, or because the actual encoding changed?

The problem I remember was due to header changes (LV7.1->8.0, I think) causing the read to fail. I don't know if the data encoding changed or not, because at that point I just used the older version of LabVIEW to open the data and save it in a data-neutral format (strings, I think) that I could read in the newer version.

Mark

QUOTE (Mark Yedinak @ Apr 14 2009, 01:54 PM)

If you want to ensure compatibility, couldn't you create a single binary buffer containing all of your data? If all of your data points are 32 bits, then cycling through your array of 1000 data points and creating a single buffer with all of the data (4 bytes for each data point) would be a very fast process, and when you read the data out you simply dump every 4 bytes into an array element. Again, this process will be fast and you won't have to worry about NI changing the flatten to string format.

This is what flatten to string already does - if you have an array of 32-bit values, it just streams one 32-bit value after the next, and will prepend a 32-bit int (or not, if you tell it not to) to the data stream. Flatten to string doesn't carry any type data, with the exception of prepended 32-bit size fields for arrays and strings - it's the bare binary image of your data. The only change I'm aware of in the way that LabVIEW flattens data was from 4.x to 5.x, where the size of a boolean changed. All of the type info required by unflatten from string is in the data type that must be wired to the type input. As long as 32-bit ints are 32 bits and doubles follow IEEE specs, I don't expect there will ever be any problems. The fact that flatten to string used to expose a type descriptor string (7.x) is sort of beside the point.
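For illustration, a short Python sketch of the stream described above, assuming the default big-endian byte order and the prepended I32 element count:

```python
import struct

# What Flatten To String emits by default for a 1D DBL array:
# a big-endian I32 element count, then the bare IEEE-754 values.
flattened = struct.pack(">i3d", 3, 1.0, 2.0, 3.0)

(count,) = struct.unpack_from(">i", flattened)
values = list(struct.unpack_from(">%dd" % count, flattened, 4))
assert values == [1.0, 2.0, 3.0]
```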

I use flatten to string (and unflatten from string) extensively to communicate with C++ apps through TCP/IP. All I have to know is the data struct the C++ apps expect, and I can flatten and send the data with no problems. U32's are U32's, doubles are doubles (as long as the endianness is known), fixed-size arrays are LabVIEW clusters (use Array To Cluster), etc.

Mark


QUOTE (mesmith @ Apr 14 2009, 09:39 PM)

I use flatten to string (and unflatten from string) extensively to communicate with C++ apps through TCP/IP. All I have to know is the data struct the C++ apps expect, and I can flatten and send the data with no problems. U32's are U32's, doubles are doubles (as long as the endianness is known), fixed-size arrays are LabVIEW clusters (use Array To Cluster), etc.

I am not sure you would run into it too often these days, but have you ever run into packing issues when loading your structure into the C code? I know back in the day different compilers could pack structures differently, leading to different byte alignments of the internal storage. Since most computers are 32-bit these days I don't suspect you would run into this much today, but I know in the embedded world you could set compiler options that would pack structures, ensuring that they have the smallest memory footprint possible.


QUOTE (Mark Yedinak @ Apr 14 2009, 09:32 PM)

I am not sure you would run into it too often these days, but have you ever run into packing issues when loading your structure into the C code? I know back in the day different compilers could pack structures differently, leading to different byte alignments of the internal storage. Since most computers are 32-bit these days I don't suspect you would run into this much today, but I know in the embedded world you could set compiler options that would pack structures, ensuring that they have the smallest memory footprint possible.

The packing scheme isn't important to the serialized data, because I have to send a U16 as 16 bits and the receiving application has to accept it as 16 bits - whether the receiving app then wants to pack 16-bit data on 32-bit boundaries or 16-bit boundaries internally is up to the receiving code. It's a much easier problem than handing the receiving app a pointer to a byte array in memory and then hoping it all gets unpacked correctly!
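Python's struct module happens to demonstrate the distinction neatly: native in-memory layout may insert alignment padding, while a field-by-field serialized stream never does.

```python
import struct

# Native layout ('@'): a U16 followed by a U32 usually occupies 8 bytes,
# because the compiler pads the U32 out to a 4-byte boundary.
print(struct.calcsize("@HI"))  # typically 8

# Serialized layout ('>'): the same two fields on the wire are exactly
# 6 bytes; how the receiver packs them in memory afterwards is its business.
print(struct.calcsize(">HI"))  # always 6
```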

Mark


QUOTE (mesmith)

I use flatten to string (and unflatten from string) extensively to communicate with C++ apps through TCP/IP. All I have to know is the data struct the C++ apps expect, and I can flatten and send the data with no problems. U32's are U32's, doubles are doubles (as long as the endianness is known), fixed-size arrays are LabVIEW clusters (use Array To Cluster), etc.

Hi Mark,

I have a C++ application and a LabVIEW application. I do some calculations in LabVIEW and want to send the results in a cluster (struct) to the C++ application over a UDP socket. I have managed to send plain strings over UDP to the application, but I want to send a cluster (struct), and I don't know how to do this kind of operation.

Could you give me some clue or reference about this operation?

Thanks in advance.

Umit Uzun

