[CR] Hooovahh's Tremendous TDMS Toolkit


I haven't tested everything in there on NI Linux RT, but a few of them I have.  The only thing that uses anything questionable is the circular buffer, which has a compression option that zips the buffer before logging it to disk.  I used the native LabVIEW zip API, so I suspect it works on Linux RT...but I forgot to test it before posting.  Everything else is pure G, and I see no reason it wouldn't work.

13 hours ago, bjustice said:

Thanks Hooovahh, I've used your TDMS concatenate VIs in a few places.  Really convenient to see this wrapped in a VIPM with a few other tools.  Will install this right alongside Hooovahh arrays

The VIM Array package is a dependency, and actually included in this VIPC release.  It was just easier for me as a developer than trying to remove the dependency, and easier for you guys if everything is in one file.

  • 1 year later...

Hi,

What I would like to see (sorry if I missed it and there is a workaround) is the ability to load only the data that is relevant for display.  It's hard to explain properly; think of Google Maps, for example.  Given the width of the graph in pixels and the min and max time values, the graph would display only what is relevant (the min and max of the data at each pixel, and only within the given time range).  And of course this should be quite fast, running in the background.

I will look into the functions in the toolkit, thanks for posting!


Okay, this is possible, but it may need extra work when making the file.  TDMS channels by themselves are just 1D data.  They have no way of knowing how much time is between each sample, or the start time.  But you can write to the properties of a TDMS channel and put in extra information like the start time and the time between samples.  This works great for things like the Waveform data type; when you write this data type to a channel, these properties are written automatically.  So you can read the start time property, the dt (time between samples), and the number of samples in a channel (a property that is also written automatically), and with this information you could make a slider where the user selects which section of data to load.  You would have to convert the slider's input into the number of samples to read and the offset of the read, but that can be done based on the sample rate.
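That offset/count math can be sketched outside of G; here it is in Python, with `window_to_read_args` as a hypothetical helper name (in LabVIEW the results would feed the TDMS Read function's offset and count inputs):

```python
from math import floor, ceil

def window_to_read_args(t0, dt, total_samples, win_start, win_end):
    """Convert a time window selected on a slider into the offset and
    count arguments for a TDMS channel read.

    t0, win_start, and win_end are seconds relative to the same epoch;
    dt is the time between samples (the channel's wf_increment property).
    """
    # Clamp the window to the samples that actually exist in the channel.
    first = max(0, floor((win_start - t0) / dt))
    last = min(total_samples - 1, ceil((win_end - t0) / dt))
    return first, max(0, last - first + 1)  # offset, number of samples
```

A window entirely outside the recorded range simply yields a count of zero, so the read becomes a no-op instead of an error.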

If your data isn't being written at a consistent rate, you can also have a separate channel that is Time.  Then you can read the first sample to get the start time, and the last to get the end time.  Intelligently grabbing just the data you want would likely take some kind of binary search, but it would be faster than reading all samples when the channel contains lots of data.  This requires that when you write the channel's samples, you also write the corresponding time data, and the two channels need to have exactly the same number of samples.  There are a few options, and all of them go outside of this toolkit's functionality.
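The binary search over a Time channel amounts to the following index math; this is a Python sketch using the standard library's `bisect`, assuming the relevant time values are in memory (a disk-backed version would fetch small chunks with TDMS offset/count reads instead of loading the whole channel):

```python
from bisect import bisect_left, bisect_right

def time_window_indices(times, win_start, win_end):
    """Find the offset and sample count covering [win_start, win_end].

    times must be sorted ascending, as a Time channel naturally is.
    """
    lo = bisect_left(times, win_start)   # first sample with t >= win_start
    hi = bisect_right(times, win_end)    # one past the last sample with t <= win_end
    return lo, hi - lo                   # offset, number of samples to read
```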

  • 8 months later...

Thanks for a great tool.

One comment though. In the Initialize Periodic TDMS File VI, the timing input should be labeled New File Period [s], not New File Frequency. Frequency is a measure of the number of events per time period, like [1/s].

Thanks for considering this tiny but nevertheless confusing detail.

 

[attached image: image.png]


Thanks, I appreciate the feedback and I'll change it, but I won't push out a new version of the package for a change like this.  Also, your comment at least helps me know that someone uses it on a cRIO and finds it useful.

Edit: Actually looking at the newest version of the package here, and the newest version of the package on VIPM.IO, the front panel looks like this.

[attached image: TDMS.png]

And actually, I think I always put defaults in parentheses, not brackets, and I don't think I would bold a front panel label.  Are you sure someone didn't edit that?


Hooovahh, I appreciate this toolkit and the work you've done to make it. I have a common problem that I run into, where I eventually just have to bite the bullet and roll my own solution. When streaming large datasets to disk I have to use the TDMS Advanced VIs to avoid a memory leak. It is even worse with waveforms; though I would like to write those directly, you can't with the Advanced VIs. So I wind up stripping off the t0 and dt and saving them as waveform components, flushing the file to apply them, configuring block sizes, etc. Could this library be adapted to use the more performant VIs, with some preconditions, say that all subsequent writes must be identical in size/composition, so that I can stream waveforms to disk? I attempted to use your size-based file writer and ran into the same memory leaks I encountered when using the regular TDMS files, described here.


So I'm still reading the thread, but I don't fully understand the issue or the solution.  I get that you are saying there is some issue with the normal TDMS Write, and that this issue is most commonly seen when writing waveform data.  And I get that waveform data really is just a 1D array of doubles with a bunch of special properties.  So I can absolutely update the write function so that, instead of writing a waveform, it writes the properties and the doubles separately.  But I don't have much experience with the advanced primitives, and I'm not sure how they come into play.  Is it possible to attach a VI that demonstrates this uncontrollable memory leaking/growing problem?


So, I think you have pulled it apart fairly well in your summary. I believe the issue with the regular Write function is that it can fragment the data, and it builds the index on the file to take care of this. That, combined with flushing, segmenting file writes, and defragmenting after completion, will address it for many use cases. The waveform issue is, first, that the advanced write won't take the data type due to the properties attached to it; like you said, that's all it is: an array of doubles, some standard components (t0, dt), and possibly some variants. Second, even if you were to write an array of doubles using the standard write VI, it is not as performant. When using the advanced VI you specify the block sizes and it streams the data to disk exactly as written. (I'm sure there's a little more complexity at the C level here.) So you must write the same size every time, but it is quite fast and does not leak memory.
 

So, I see a space here where, in general, the advanced TDMS functions could be chosen given the condition that subsequent writes are the same size as the first write (allowing the toolkit to read that and perform the configuration), and further, the toolkit could automatically unbundle a waveform type to package up the properties and write the array.
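The precondition described above can be sketched as a small guard; here `write_block` is a hypothetical stand-in for the advanced TDMS write, not a real NI API:

```python
class FixedBlockWriter:
    """Sketch of the proposed precondition: the first write fixes the
    block size, and every later write must match it exactly (as the
    advanced TDMS streaming requires)."""

    def __init__(self, write_block):
        self._write_block = write_block  # stand-in for the advanced write
        self._block_size = None

    def write(self, samples):
        if self._block_size is None:
            # First write configures the block size for the whole file.
            self._block_size = len(samples)
        elif len(samples) != self._block_size:
            raise ValueError(
                f"advanced write needs a fixed block size: "
                f"expected {self._block_size}, got {len(samples)}")
        self._write_block(samples)
```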
 

It’s a thought, and something I’ve encountered a few handfuls of times over the years and it’s a pain every time.  


Okay, here is a quick first update to the Write VIM that has special code if you are writing an analog waveform, or an array of analog waveforms.  It will write the waveform properties and then flush the file, then write a 1D array of doubles for a scalar waveform, or a 2D array for a 1D array of waveforms.  On the next write it will see that the waveform property already exists and not write it again, but the dt between writes must match.  If it doesn't, it generates an error.
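In rough terms, the logic is as follows (a Python sketch of the described behavior, not the actual G code; `wf_start_time` and `wf_increment` are the standard waveform property names NI writes to a channel):

```python
class WaveformChannelWriter:
    """Write-once timing properties, append-only samples: on the first
    write the waveform's t0 and dt are stored as channel properties; on
    later writes only the samples are appended, and a mismatched dt is
    an error."""

    def __init__(self):
        self.properties = {}
        self.samples = []

    def write(self, t0, dt, data):
        if "wf_increment" not in self.properties:
            # First write: record the timing properties once.
            self.properties["wf_start_time"] = t0
            self.properties["wf_increment"] = dt
        elif dt != self.properties["wf_increment"]:
            raise ValueError("dt does not match the channel's wf_increment")
        self.samples.extend(data)
```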

This is still using just the normal TDMS write.  So if you are having memory issues, this doesn't use the advanced stuff and might not fix them.  That's partially because I don't have experience with the advanced TDMS stuff, and also because I don't fully understand the original issue.

Write Tremendous TDMS - Waveform.vim
Set Waveform Properties.vi

Also I noticed that there is a disabled structure that has a partially implemented cluster write.  I should probably finish and test that.

On 1/10/2022 at 3:05 PM, hooovahh said:

Thanks, I appreciate the feedback and I'll change it, but I won't push out a new version of the package for a change like this.  Also, your comment at least helps me know that someone uses it on a cRIO and finds it useful.

Edit: Actually looking at the newest version of the package here, and the newest version of the package on VIPM.IO, the front panel looks like this.

[attached image: TDMS.png]

And actually, I think I always put defaults in parentheses, not brackets, and I don't think I would bold a front panel label.  Are you sure someone didn't edit that?

Hi hooovahh

Sure, I edited the front panel, just to highlight where the issue is. All or none should be bold.

Brackets or parentheses: parentheses are for defaults, brackets are for units. So, different usage. I tend to be careful about specifying the unit of a numerical control or indicator, as units are a constant source of confusion.

cRIO: I actually do not use it on cRIO, but I don't see why not. I just happened to open the file in this context.

Agree: The change is minor, so whenever convenient.

Thanks for considering.

On 1/15/2022 at 8:43 PM, Jordan Kuehn said:

Awesome, I will give it a try. The cluster writing would be great as well!

Yes, I do have some issues with flattening a cluster to a string for logging.  I already have a set of code in this package for reading and writing cluster data to a TDMS file.  It just doesn't support the various TDMS classes I made; it works on the TDMS reference alone.  But the code in this package has a ton of overhead.  It flattens the cluster to a string with human-readable decimals and tab delimiters.  I posted an update on the dark side here which works much better, writing the array of bytes as a string, with padding used on the bytes that need escape characters.  It also supports different methods of compression.  (Some of this is in the VIPM.IO package too.)

That works much better, but I found a bug in how the extended ASCII table is used.  If you write a cluster to a TDMS file and then FTP that file over to a Linux RT target, it won't be able to read the bytes where the 8th bit is used.  I believe this to be a bug that NI hasn't confirmed.  In any case, extra escape characters can be used for these cases too.  Anyway, this is all just to say that at the moment the reading and writing of cluster data needs some attention before being updated in this package.
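To illustrate the kind of escaping involved (a generic sketch, not the package's actual scheme): bytes with the 8th bit set, plus the escape byte itself, can be replaced by an escape byte followed by two hex digits, so the stored string stays 7-bit clean and round-trips safely:

```python
ESCAPE = 0x1B  # illustrative choice of escape byte

def escape_high_bytes(raw: bytes) -> bytes:
    """Replace every byte >= 0x80 (and the escape byte itself) with
    ESCAPE plus two ASCII hex digits, keeping the output 7-bit clean."""
    out = bytearray()
    for b in raw:
        if b >= 0x80 or b == ESCAPE:
            out.append(ESCAPE)
            out += f"{b:02X}".encode("ascii")
        else:
            out.append(b)
    return bytes(out)

def unescape_high_bytes(enc: bytes) -> bytes:
    """Invert escape_high_bytes."""
    out = bytearray()
    i = 0
    while i < len(enc):
        if enc[i] == ESCAPE:
            out.append(int(enc[i + 1:i + 3], 16))  # decode the two hex digits
            i += 3
        else:
            out.append(enc[i])
            i += 1
    return bytes(out)
```

The cost is three stored bytes per escaped byte, which is why compression on top of the flattened string can still pay off.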


For what it's worth, I've been heavily using/stress-testing that padding code for flattened strings in TDMS.  On the Windows OS, I've had no issues.
I'm sad to hear about the issues on the Linux RT OS.
I think the saving grace is that I rarely ever need to read a TDMS file on Linux RT.  Usually it's create/write the TDMS on RT, then read on Windows.

