pete_dunham Posted August 18, 2011

Hello all, I thought someone else might have some insight on an error that appears in a program that runs for an extended period of time (summary: C++ runtime error after 8+ hours). Thanks in advance!

I am working with another LabVIEW programmer on a data acquisition program that stores a significant amount of data over a long run: 1000 samples/sec for 14+ hours. Because of an IT mandate, development has been limited to Windows XP (32-bit) machines. We have tried various approaches to work around the file size limits we hit on XP, and we have also had to clean up processes that were continuously allocating significant memory in LabVIEW.

Our current approach is to save the data in multiple TDMS files of approximately 2.5 GB each, because Windows XP was choking when a TDMS file grew beyond 4 GB. Basically, when the current TDMS file reaches 2.5 GB, we start a new one.

Currently, after ~8 hours we get the error message "Microsoft Visual C++ Runtime Library runtime error. R6016 not enough space for thread data". We verified that reinstalling the C++ Runtime Library does not resolve the error.

Has anyone hit similar hurdles when saving a lot of data for a long period of time?
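R6016 usually means the Microsoft C runtime could not allocate its per-thread data block, which after many hours of running often points at a slow thread or handle leak in the process rather than at the runtime installation itself. A minimal Win32 C sketch of one way to watch for that, not part of the original post and only an assumption about where to look, is to log the target process's thread and handle counts once a minute:

```c
/* Watch a process (e.g. LabVIEW.exe) for a slow thread/handle leak.
 * Usage: leakwatch <pid>   -- prints counts once a minute.
 * Minimal sketch; error handling kept to the bare minimum. */
#define _WIN32_WINNT 0x0501          /* GetProcessHandleCount needs XP SP1+ */
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>
#include <stdlib.h>

/* Count the threads owned by the given process via a Toolhelp snapshot. */
static DWORD CountThreads(DWORD pid)
{
    DWORD count = 0;
    THREADENTRY32 te;
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);

    if (snap == INVALID_HANDLE_VALUE)
        return 0;
    te.dwSize = sizeof(te);
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid)
                ++count;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    return count;
}

int main(int argc, char **argv)
{
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : GetCurrentProcessId();
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);

    for (;;) {
        DWORD handles = 0;
        if (proc)
            GetProcessHandleCount(proc, &handles);
        printf("pid %lu: threads=%lu handles=%lu\n", (unsigned long)pid,
               (unsigned long)CountThreads(pid), (unsigned long)handles);
        fflush(stdout);
        Sleep(60 * 1000);        /* once a minute is enough for a slow leak */
    }
}
```

If either count climbs steadily over the 8 hours, the R6016 is a symptom of that leak rather than of the TDMS file size itself.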
Tim_S Posted August 18, 2011

I've not seen that error before, but a former coworker did long-term data collection with success. He created a highly packed binary file: the data collection was set up to return counts instead of voltages, and the unscaled counts were saved to file, requiring less space than single- or double-precision floating point. The DAQ card was 12-bit, which meant 4 bits of each I16 were unused. He stripped out the unused 4 bits and packed the 12-bit values together, further reducing the file size. The downside to all of this is the extra processing to save the data and read it back, and you can't pull the file into something like Excel.
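For anyone who wants the packing spelled out, here is a small C sketch of the idea (LabVIEW being graphical, C is used purely as illustration, and the function names are made up): two 12-bit counts occupy 3 bytes instead of the 4 bytes two I16s would take.

```c
/* Illustration of the packing described above (not the coworker's code):
 * two 12-bit counts -> 3 bytes instead of the 4 bytes of two I16s. */
#include <stdint.h>
#include <stddef.h>

/* Pack n samples (n assumed even, each already within 0..4095) into dst.
 * Returns the number of bytes written: 3 per pair of samples. */
size_t pack12(const uint16_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        uint16_t a = src[i] & 0x0FFF;
        uint16_t b = src[i + 1] & 0x0FFF;
        dst[out++] = (uint8_t)(a & 0xFF);                      /* a, low 8   */
        dst[out++] = (uint8_t)((a >> 8) | ((b & 0x0F) << 4));  /* a hi, b lo */
        dst[out++] = (uint8_t)(b >> 4);                        /* b, high 8  */
    }
    return out;
}

/* Readback: Excel can't open this, so a decoder like this is mandatory. */
size_t unpack12(const uint8_t *src, size_t nbytes, uint16_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i + 3 <= nbytes; i += 3) {
        dst[out++] = (uint16_t)(src[i] | ((src[i + 1] & 0x0F) << 8));
        dst[out++] = (uint16_t)((src[i + 1] >> 4) | (src[i + 2] << 4));
    }
    return out;
}
```

At 1000 samples/sec this cuts the raw stream from 2 bytes to 1.5 bytes per sample, at the cost of a custom reader.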
asbo Posted August 18, 2011

How often are you writing to TDMS, or rather, how often is the TDMS committed to disk? Does the 8 hour marker correlate to a disk write?

That 4GB barrier sounds vaguely like a FAT32 limitation. Are you running NTFS as your filesystem?
pete_dunham Posted August 19, 2011 (Author)

Thanks for the replies.

Quoting asbo: "How often are you writing to TDMS, or rather, how often is the TDMS committed to disk? Does the 8 hour marker correlate to a disk write?"

I believe we are writing to the TDMS file many times a second. I had a hunch this could be a problem, but figured it would show up fairly quickly rather than after hours, i.e. unrelated to the growing file size. Changing the program to write maybe once every second or two is probably a more robust approach. We haven't tried it yet since it requires a change to our architecture.

Quoting asbo: "That 4GB barrier sounds vaguely like a FAT32 limitation."

I had the same reaction as you, but our hard drives are NTFS. I wasn't involved in the programming when the team was running into this issue, but I did some research on ni.com and found some related articles (and of course, I can't find them now). I eventually bought into the limitation because NI applications engineers confirmed it and recommended using the advanced TDMS VIs ("TDMS+" in the palette). These VIs allow you to allocate file sizes >4 GB during setup. But these TDMS+ VIs seemingly have issues of their own.

Since you had the same feeling as me, I am going to set up a simple data-logging program on an XP machine and see if I can duplicate the error by intentionally growing a TDMS file past 4 GB. I will post the results.
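For non-LabVIEW readers, "allocating the file size during setup" on NTFS amounts to pre-sizing the file before any data is written. A rough Win32 C illustration of that idea follows; it is only a sketch of the concept, not of what the TDMS+ VIs actually do internally, and the path and 6 GB figure are placeholders.

```c
/* Illustration only: pre-size a file to ~6 GB before any data is written,
 * i.e. the "allocate the file size during setup" idea. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER size;
    HANDLE h = CreateFileA("d:\\daq\\run001.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    size.QuadPart = 6LL * 1024 * 1024 * 1024;       /* 6 GB, well past 4 GB */
    if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
        fprintf(stderr, "preallocation failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);           /* NTFS itself has no problem with files > 4 GB */
    return 0;
}
```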
asbo Posted August 19, 2011

Quoting pete_dunham: "I believe we are writing to the TDMS file many times a second. [...] We haven't tried it yet since it requires a change to our architecture."

Do you mean the TDMS file is being committed to disk multiple times per second, or that you call the TDMS Write VI multiple times per second? The latter is completely acceptable if you configure a sensible minimum buffer size.
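The distinction matters because a logical write does not have to be a physical disk write; in LabVIEW this buffering is typically controlled with the NI_MinimumBufferSize TDMS property. As a language-neutral sketch of the same idea (the 64 KiB threshold and the assumption that each write fits in the buffer are arbitrary choices, not anything from the thread):

```c
/* Generic sketch of "buffer in memory, flush to disk in large chunks". */
#include <stdio.h>
#include <string.h>

#define FLUSH_BYTES (64 * 1024)

typedef struct {
    FILE  *fp;
    char   buf[FLUSH_BYTES];
    size_t used;
} Logger;

static int logger_flush(Logger *lg)
{
    if (lg->used && fwrite(lg->buf, 1, lg->used, lg->fp) != lg->used)
        return -1;                        /* disk write failed */
    lg->used = 0;
    return 0;
}

/* Called many times per second with small blocks of samples;
 * the disk only sees one large write per 64 KiB accumulated. */
static int logger_write(Logger *lg, const void *data, size_t len)
{
    if (lg->used + len > FLUSH_BYTES && logger_flush(lg) != 0)
        return -1;
    memcpy(lg->buf + lg->used, data, len);
    lg->used += len;
    return 0;
}
```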
P de Boevere Posted August 27, 2011

At that data rate over those kinds of intervals, I would pre-process the data before storing it to disk, as mentioned before. My approach is to store each channel to individual binary files, not directly but via a buffer: one file holds the data, the other the timestamps, so you basically have an X-Y save. The steps (see the sketch after this post):

- Compare signal values; if the data has changed, add it to the data buffer and the corresponding time to the time buffer.
- When the storage interval is reached, copy the buffers to the binary files.
- When a certain time frame is reached, say 1 hour, start new files and auto-archive the older data.

That way you get more but smaller files, and you only store relevant data, not values that do not change. Typical applications are low-noise signals that change only gradually or only at small time intervals, or digital I/O values. On the decoding side, it takes additional effort.
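A short C sketch of the change-only, two-file ("X-Y") scheme described above; buffering and the hourly rollover are left out for brevity, and the deadband value is a placeholder:

```c
/* Change-only X-Y logging: one file of samples, one of timestamps. */
#include <stdio.h>
#include <math.h>

#define DEADBAND 0.001                 /* "has the value changed?" threshold */

typedef struct {
    FILE  *data;                       /* Y file: the samples                */
    FILE  *time;                       /* X file: matching timestamps (s)    */
    double last;
    int    have_last;
} XYLog;

/* Store the point only if it differs from the last stored value. */
static void xylog_sample(XYLog *log, double t, double value)
{
    if (log->have_last && fabs(value - log->last) < DEADBAND)
        return;
    fwrite(&value, sizeof value, 1, log->data);
    fwrite(&t,     sizeof t,     1, log->time);
    log->last = value;
    log->have_last = 1;
}
```

Because each stored sample carries its own timestamp, slowly changing or mostly idle channels shrink dramatically, at the cost of a matched decoder on readback.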
Ton Plomp Posted August 27, 2011

We've been using TDMS and DAQmx with much larger files on WinXP + NTFS and have never had an issue during writing. However, sometimes closing the file is an issue (that's where it failed on a one-weekend run of 32 channels @ 20 kHz). We have definitely succeeded with files bigger than 4 GB. Maybe you've got an issue with an anti-virus scanner that opens the file to scan it?

Ton