hooovahh Posted December 12, 2019

File: Hooovahh's Tremendous TDMS Toolkit

This toolkit combines a variety of TDMS functions and tools into a single package. The initial release has a variety of features:
- Classes for Circular, Periodic, Size, and Time of Day TDMS generation, with examples of using each
- Reading and writing clusters into TDMS channels
- XLSX conversion example
- File operations for combining files, renaming, moving, and saving in memory to zip
- Basic function for splitting a TDMS file into segments (useful for corrupt files)
- Reorder TDMS Channel with demo

There is plenty of room for improvement, but I wanted to get this out there and gauge interest. The variety of classes, along with VIMs and class adaptation, makes them easier to use. If I get time I plan on writing some blog posts explaining some of the benefits of TDMS, along with best practices. Here is a YouTube video demonstration of some of the features of this toolkit.

Submitter: hooovahh
Submitted: 12/12/2019
Category: *Uncertified*
LabVIEW Version: 2018
License Type: BSD (Most common)
Antoine Chalons Posted December 12, 2019

Wow, thanks for sharing this, it looks great! Are there any OS restrictions? I'm thinking about NI Linux RT.
hooovahh (Author) Posted December 12, 2019

I haven't tested everything in there on NI Linux RT, but I have tested a few of the functions. The only questionable piece is the circular buffer, which has a compression option that zips the buffer before logging it to disk. I used the native LabVIEW zip API, so I suspect it works on Linux RT... but I forgot to test it before posting. Everything else is pure G, and I see no reason it wouldn't work.
bjustice Posted December 13, 2019

Thanks Hooovahh, I've used your TDMS concatenate VIs in a few places. Really convenient to see this wrapped in a VIPM package with a few other tools. Will install this right alongside Hooovahh arrays.
hooovahh (Author) Posted December 13, 2019

In reply to bjustice:

The VIM Array package is a dependency, and it is actually included in this VIPC release. It was just easier for me as a developer than trying to remove the dependency, and easier for you if everything is in one file.
Lipko Posted April 30, 2021

Hi, what I would like to see (sorry if I missed it and there is already a workaround) is the ability to load only the data that is relevant for display. I'm not sure how to explain it properly; think of Google Maps, for example. Given the width of the graph in pixels and the min and max time values, the graph would display only what is relevant (the min and max values of the data at each pixel, and only within the given time range), as in the sketch below. And of course this should be quite fast, running in the background. I will look into the functions in the toolkit, thanks for posting!
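As a reading aid (not part of the toolkit, which is pure G), here is a hypothetical Python/NumPy sketch of the kind of per-pixel min/max decimation being described; all names are illustrative:

```python
import numpy as np

def minmax_decimate(times, values, t_min, t_max, width_px):
    """Reduce a channel to at most 2 points per display pixel.

    times, values : 1D arrays of equal length (times must be sorted)
    t_min, t_max  : visible time range of the graph
    width_px      : graph width in pixels
    Returns decimated (times, values) suitable for plotting.
    """
    # Keep only the visible time range.
    mask = (times >= t_min) & (times <= t_max)
    t, v = times[mask], values[mask]
    if t.size <= 2 * width_px:
        return t, v  # already small enough to plot directly

    # Split the visible samples into one bucket per pixel column.
    edges = np.linspace(t_min, t_max, width_px + 1)
    idx = np.searchsorted(t, edges)

    out_t, out_v = [], []
    for lo, hi in zip(idx[:-1], idx[1:]):
        if hi <= lo:
            continue  # no samples under this pixel
        seg = v[lo:hi]
        i_min = lo + int(np.argmin(seg))
        i_max = lo + int(np.argmax(seg))
        # Keep min and max in time order so line segments stay correct.
        for i in sorted({i_min, i_max}):
            out_t.append(t[i])
            out_v.append(v[i])
    return np.asarray(out_t), np.asarray(out_v)
```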
hooovahh (Author) Posted April 30, 2021

Okay, this is possible, but it may need extra work when making the file. TDMS channels by themselves are just 1D data. They have no way of knowing how much time there is between each sample, or the start time. But you can write extra information, like the start time and the time between samples, into the properties of a TDMS channel. This works great for things like the Waveform data type; when you write this data type to a channel, these properties are written automatically. So you can read the start time property, the dT (time between samples), and the number of samples in a channel (a property that is also written automatically), and with this information you could make a slider where the user selects which section of data to load. You would have to convert the slider's input into the number of samples to read and the offset of the read, but that can be done based on the sample rate (see the sketch below). If your data isn't written at a consistent rate, you can instead have a separate channel that is Time. Then you can read the first sample to get the start time, and read the last to get the end time. Intelligently grabbing just the data you want would likely take some kind of binary search, but it would be faster than reading all samples when the channel contains lots of data. This requires that when you write a channel's samples, you also write the samples' time data, and these two channels need to have exactly the same number of samples. There are a few options, and all of them go outside this toolkit's functionality.
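To illustrate the arithmetic described above (this is not toolkit code; the function names are made up, and it assumes the standard wf_start_time/wf_increment waveform properties are present), a minimal Python sketch:

```python
import numpy as np

def window_to_offset_count(start_time, dt, n_samples, t_lo, t_hi):
    """Uniformly sampled channel: convert a requested time window into
    (offset, count) for a partial TDMS channel read."""
    offset = max(0, int((t_lo - start_time) / dt))
    last = min(n_samples - 1, int((t_hi - start_time) / dt))
    count = max(0, last - offset + 1)
    return offset, count

def window_to_offset_count_irregular(time_channel, t_lo, t_hi):
    """Non-uniform rate: binary-search a separate (sorted) time channel
    instead of computing indices from dT."""
    offset = int(np.searchsorted(time_channel, t_lo, side="left"))
    end = int(np.searchsorted(time_channel, t_hi, side="right"))
    return offset, end - offset
```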
larsen Posted January 9, 2022

Thanks for a great tool. One comment though: in the Initialize Periodic TDMS File, the timing input should be labeled New File Period [s] and not New File Frequency. Frequency is a measure of the number of events per time period, like [1/s]. Thanks for considering this; it's a tiny detail, but an inconsistent one nevertheless.
hooovahh (Author) Posted January 10, 2022

Thanks, I appreciate the feedback and I'll change it, but I won't push out a new version of the package for a change like this. Also, your comment at least helps me know that someone does use it on a cRIO and finds it useful.

Edit: Actually, looking at the newest version of the package here and the newest version of the package on VIPM.IO, the front panel looks like this. And actually, I think I always put defaults in parentheses and not brackets, and I don't think I would bold a front panel label. Are you sure someone didn't edit that?
Jordan Kuehn Posted January 12, 2022

Hooovahh, I appreciate this toolkit and the work you've done to make it. I have a common problem that I run into, where I eventually just have to bite the bullet and roll my own solution. When streaming large datasets to disk, I have to use the TDMS Advanced VIs to avoid a memory leak. It is even worse with waveforms; though I would like to be able to write those directly, you can't with the Advanced VIs. So I wind up stripping the t0 and dt off and saving them as waveform components, flushing the file to apply them, configuring block sizes, etc. Could this library be adapted to use the more performant VIs, with some preconditions, say that all subsequent writes must be identical in size/composition, so that I can stream waveforms to disk? I attempted to use your size-based file writer and ran into the same memory leaks I encountered when using the regular TDMS files, described here.
hooovahh (Author) Posted January 13, 2022

So I'm still reading the thread, but I don't fully understand the issue, or the solution. I get that you are saying there is some issue with the normal TDMS Write, and that this issue is most commonly seen when writing waveform data. And I get that waveform data really is just a 1D array of doubles with a bunch of special properties. So I can absolutely update the write function so that, instead of writing a waveform, it writes the properties and the doubles separately. But I don't have much experience with the advanced primitives, and I'm not sure how they come into play. Is it possible to attach a VI that demonstrates this uncontrollable memory leaking/growing problem?
Jordan Kuehn Posted January 13, 2022

So, I think you have pulled it apart fairly well in your summary. I believe the issue with the regular Write function is that it can fragment the data, and it builds up the index in the file to keep track of this. That, combined with flushing, segmenting file writes, and defragmenting after completion, will address it for many use cases. The waveform issue is that, first, the advanced write won't take the data type because of the properties attached to it, and that's all a waveform is, like you said: an array of doubles, some standard components (t0, dt), and possibly some variants. Second, even if you were to write an array of doubles using the standard Write VI, it is not as performant. When using the advanced VIs you specify the block sizes and it streams the data to disk exactly as written (I'm sure there's a little more complexity at the C level here). So you must write the same size every time, but it is quite fast and does not leak memory. So, I see a space here where, in general, the advanced TDMS functions could be chosen, given the condition that subsequent writes are the same size as the first write (allowing the library to read that first write and perform the configuration), and, going further, the library could automatically unbundle a waveform type to package up the properties and write the array. It's a thought, and something I've encountered a few handfuls of times over the years, and it's a pain every time.
hooovahh (Author) Posted January 14, 2022

Okay, here is a quick first update to the Write VIM that has special code for when you are writing an analog waveform or an array of analog waveforms. It will write the waveform properties and then flush the file, then write a 1D array of doubles for a scalar waveform, or a 2D array for a 1D array of waveforms. On the next write it will see that the property for the waveform already exists and not write it again, but the dT of that write must match; if it doesn't, an error is generated (a rough sketch of this logic is shown below). This still uses just the normal TDMS Write, so if you are having memory issues, this doesn't use the advanced stuff and might not fix them, partly because I don't have experience with the advanced TDMS stuff, and also because I don't fully understand the original issue.

Attachments: Write Tremendous TDMS - Waveform.vim, Set Waveform Properties.vi

Also, I noticed that there is a disabled structure that has a partially implemented cluster write. I should probably finish and test that.
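The real logic lives in the attached G code; purely as a reading aid, here is a hypothetical Python-style sketch of the same idea. The tdms_file methods used here (set_channel_property, flush, write_channel) are stand-ins, not real toolkit or NI APIs:

```python
class WaveformChannelWriter:
    """Sketch of the Write VIM's waveform handling: write timing
    properties once, then append raw samples on every subsequent call."""

    def __init__(self, tdms_file, group, channel):
        self.tdms_file = tdms_file
        self.group = group
        self.channel = channel
        self.dt = None  # None means the properties have not been written yet

    def write(self, t0, dt, samples):
        if self.dt is None:
            # First write: record timing as channel properties, then flush
            # so readers see them before any sample data arrives.
            self.tdms_file.set_channel_property(
                self.group, self.channel, "wf_start_time", t0)
            self.tdms_file.set_channel_property(
                self.group, self.channel, "wf_increment", dt)
            self.tdms_file.flush()
            self.dt = dt
        elif dt != self.dt:
            # Subsequent writes must use the same dT, otherwise the stored
            # wf_increment property would no longer describe the data.
            raise ValueError("dT of this write does not match the channel")
        # Only the raw samples are appended; timing lives in the properties.
        self.tdms_file.write_channel(self.group, self.channel, samples)
```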
Jordan Kuehn Posted January 16, 2022

Awesome, I will give it a try. The cluster writing would be great as well!
larsen Posted January 18, 2022

In reply to hooovahh's January 10 post:

Hi hooovahh. Sure, I edited the front panel just to highlight where the issue is; all labels should be bold or none should be. Bracket or parenthesis: parentheses are for defaults, brackets are for units, so the usage is different. I tend to be careful about specifying the unit of a numeric control or indicator, as units are a constant source of confusion. cRIO: I actually do not use it on a cRIO, but I don't see why it wouldn't work; I just happened to open the file in this context. Agreed, the change is minor, so whenever convenient. Thanks for considering.
hooovahh (Author) Posted January 19, 2022

In reply to Jordan Kuehn:

Yes, I do have some issues with flattening a cluster to a string for logging. I already have a set of code in this package for reading and writing cluster data to a TDMS file. It just doesn't support the various TDMS classes I made; it works on the TDMS reference alone. But the code in this package has a ton of overhead: it flattens the cluster to a string with human-readable decimals and tab delimiters. I posted an update on the dark side here, which works much better by writing the array of bytes as a string, with padding applied to the bytes that need escape characters (see the sketch below). It also supports different methods of compression. (Some of this is in the VIPM.IO package too.) This works much better, but I found a bug related to how the extended ASCII table is used: if you write a cluster to a TDMS file and then FTP that file over to a Linux RT target, it won't be able to read the bytes where the 8th bit is used. I believe this to be a bug that NI hasn't confirmed. In any case, extra escape characters can be used for those cases too. All of this is just to say that, at the moment, the reading and writing of cluster data needs some attention before being updated in this package.
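To illustrate the escaping idea (the toolkit's actual scheme, escape byte, and markers may differ; this is only a hedged Python sketch), one way to keep a flattened cluster 7-bit safe inside a TDMS string channel is:

```python
ESC = 0x1B  # hypothetical escape byte; the toolkit may use something else

def escape_for_tdms_string(data: bytes) -> bytes:
    """Map arbitrary flattened-cluster bytes onto 7-bit-safe bytes so the
    string channel survives being read back on a Linux RT target."""
    out = bytearray()
    for b in data:
        if b == ESC:
            out += bytes((ESC, 0x00))            # literal escape byte
        elif b >= 0x80:                          # 8th bit set: the problem case
            out += bytes((ESC, 0x01, b & 0x7F))  # marker + low 7 bits
        else:
            out.append(b)
    return bytes(out)

def unescape_from_tdms_string(data: bytes) -> bytes:
    """Inverse of escape_for_tdms_string (assumes well-formed input)."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == ESC:
            if data[i + 1] == 0x00:
                out.append(ESC)                  # literal escape byte
                i += 2
            else:
                out.append(data[i + 2] | 0x80)   # restore the high bit
                i += 3
        else:
            out.append(b)
            i += 1
    return bytes(out)
```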
bjustice Posted January 19, 2022

For what it's worth, I've been heavily using and stress-testing that padding code for flattened strings in TDMS. On Windows I've had no issues; I'm sad to hear about the issues on Linux RT. I think the saving grace is that I rarely ever need to read a TDMS file on Linux RT. Usually it's create and write the TDMS file on RT, then read it on Windows.