
Developer Zone Win32 File I/O VIs



Have any of you looked at the Win32 file I/O functions included in some of the newest Developer Zone examples? (example: NI-SCOPE Stream to Disk Using Win32 File IO). I've seen a few of these pop up in the What's New RSS feed, and just started looking at them today.

The consumer loop continuously empties data from the queue and writes it to disk using the LabVIEW primitive Write to Binary File.vi. The file is opened using the Win32 Open File.vi, which disables Windows caching during the write process, resulting in increased performance. However, when Windows caching is disabled, data must be written to the file in multiples of the disk's sector size (typically 512 bytes).
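
For anyone who hasn't opened the VIs yet, the key ingredient appears to be the FILE_FLAG_NO_BUFFERING flag passed to CreateFile. A minimal C sketch of the pattern (the file name and chunk size are placeholders, not values taken from the example):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the file with the Windows cache bypassed. */
    HANDLE h = CreateFileA("stream.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* With no buffering, the byte count must be a multiple of the sector
       size and the buffer must be sector-aligned; VirtualAlloc returns
       page-aligned memory, which satisfies both. */
    const DWORD chunk = 64 * 512;
    void *buf = VirtualAlloc(NULL, chunk, MEM_COMMIT, PAGE_READWRITE);

    DWORD written = 0;
    if (!WriteFile(h, buf, chunk, &written, NULL))
        printf("WriteFile failed: %lu\n", GetLastError());

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}
```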

I don't use the NI-SCOPE or NI-FGEN instrumentation, but I do log data to disk at a high rate and can appreciate a performance boost/CPU load reduction. I understand what the VIs do and how they work, I'm just curious if anyone else has looked at them. I ran the included benchmarks on my desktop and the Win32 performance is better.

I'm mostly concerned about creating a runtime version of my app with these, and what might happen if, say, a Windows service pack comes along that changes things... :o


According to the MS documentation, using the Windows file functions without buffering requires you to allocate memory at certain offsets and in certain chunk sizes (both should be multiples of your hard disc's sector size). In practice it works for the vast majority of hard discs even without that, though I do not know which discs are the exceptions. LabVIEW does not allocate memory that way. I'm not aware of any other limitations of this technology.
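
For reference, here is a small C sketch of what that looks like outside LabVIEW: query the volume for its sector size and allocate a buffer whose address and size are both multiples of it (the drive letter and chunk count are arbitrary):

```c
#include <windows.h>
#include <malloc.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    /* Ask the volume for its sector size instead of assuming 512 bytes. */
    if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters))
        return 1;

    /* Size and address are both multiples of the sector size, as the
       non-buffered API expects. */
    size_t chunk = 1024 * (size_t)bytesPerSector;
    void *buf = _aligned_malloc(chunk, bytesPerSector);

    printf("sector size: %lu bytes, buffer at %p\n", bytesPerSector, buf);

    _aligned_free(buf);
    return 0;
}
```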

The performance gain with a single hard disc should be easy enough to measure, but it is not going to be huge. Performance gains are very impressive though if you're using a RAID array. A RAID array with 4 hard drives will speed up traditional LabVIEW file I/O by roughly 20%. Without buffering, writing speed will almost quadruple.

Writing speed obviously varies with the number of drives connected, but it also strongly depends on which controller you use. Some controllers are as much as 20% faster than others, and different controllers have different "sweet spots" for the size of the data chunks you write to disc. If you want to get the best out of that technology, you will need to figure out these things for your actual equipment.

Herbert


I have been using these same API functions for camera data streaming. With these functions, 8 SATA hard drives (RAID 0), and a HighPoint RAID controller, I can sustain a 400 MB/s stream of data for more than a minute. You do have to pad your data to a multiple of 512 bytes, but for me that is possible. In fact, to stream a camera whose frame size is not a multiple of 512, I pad the file until it is a multiple of 512, then reread the temp file and remove the padding for the final data file.
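
The padding rule itself is simple: round each write up to the next 512-byte boundary. A hypothetical helper (the frame size below is made up):

```c
#include <stdio.h>

/* Round a byte count up to the next multiple of the 512-byte sector size,
   so the write satisfies the no-buffering requirement. */
static size_t pad_to_sector(size_t nbytes)
{
    const size_t sector = 512;
    return (nbytes + sector - 1) / sector * sector;
}

int main(void)
{
    size_t frame = 1310820;   /* a camera frame that is not sector-aligned */
    printf("keep %zu bytes, write %zu bytes\n", frame, pad_to_sector(frame));
    return 0;
}
```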

Another issue I found when I started using these API functions was the inability of C's fwrite function to write a file bigger than 4 GB. So now I use these functions for writing any file that might end up over 4 GB in size. I've used these functions with no changes on 2K and XP, but not yet on Vista. One thing to look at is the MSDN entry for these functions, which lists which operating systems they are available in.
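
The >4 GB case comes down to the 64-bit file offsets the Win32 API accepts. A short C sketch (file name and offset are placeholders):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("bigfile.bin", GENERIC_WRITE, 0, NULL,
                           OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* A 64-bit seek to 5 GB, beyond what a 32-bit long offset can express. */
    LARGE_INTEGER offset;
    offset.QuadPart = 5LL * 1024 * 1024 * 1024;

    if (!SetFilePointerEx(h, offset, NULL, FILE_BEGIN))
        printf("SetFilePointerEx failed: %lu\n", GetLastError());

    DWORD written = 0;
    const char marker[] = "past the 4 GB mark";
    WriteFile(h, marker, sizeof marker, &written, NULL);

    CloseHandle(h);
    return 0;
}
```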


QUOTE(chrisdavis @ Apr 18 2007, 04:51 PM)

Another issue I found when I started using these API functions was the inability of C's fwrite function to write a file bigger than 4 GB.

fwrite, iostreams, etc. are just layered on top of the native Windows file I/O for compatibility with standard C/C++. The lowest-level and fastest API is the one centered around CreateFile etc.; that's where all the good stuff is accessible. The fastest thing in Windows is asynchronous ("overlapped"), non-buffered I/O. That keeps your hard drive going at maximum speed while the processor(s) are free to do other things.
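
For the curious, a rough C sketch of that combination: open with both flags, queue the write, go do other work, then collect the result (buffer size and file name are placeholders):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Asynchronous ("overlapped") and non-buffered in one handle. */
    HANDLE h = CreateFileA("stream.bin", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    const DWORD chunk = 64 * 512;                  /* sector-size multiple */
    void *buf = VirtualAlloc(NULL, chunk, MEM_COMMIT, PAGE_READWRITE);

    OVERLAPPED ov = {0};                           /* write at offset 0 */
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (!WriteFile(h, buf, chunk, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        printf("WriteFile failed: %lu\n", GetLastError());

    /* ...the CPU is free to do other things while the drive works... */

    DWORD written = 0;
    GetOverlappedResult(h, &ov, &written, TRUE);   /* TRUE = wait for completion */

    CloseHandle(ov.hEvent);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}
```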

Herbert


Thanks for the pointer. One thing I find very attractive about this is that it's possible (I imagine - I haven't played with it yet) to overwrite older data segments if the total file size would otherwise be excessive. Then I can decimate my data in-place as needed. With TDMS, I wouldn't know how to go about that.


QUOTE(torekp @ Apr 24 2007, 01:13 PM)

Then I can decimate my data in-place as needed.

I see the point of decimating a large file, but why would you want to do that in-place rather than into a new file? How would a reading application know which parts of the file are decimated, and to what degree?

If what you want to achieve is some kind of ring buffer, you can do that either with LabVIEW file I/O (reset the pointer to the beginning of the file) or with TDMS. In the case of TDMS you would simply use two files and switch between them every time one of them reaches the "critical" size. The only downside of TDMS here is that you need twice the disc space. Reading data back from a file that you partly overwrote in place is just as complicated (or simple) as reading data back from two different TDMS files, so the solutions are really similar.
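
The file-pointer version of that ring buffer is only a few lines in C; the same idea maps onto LabVIEW's Set File Position. A hypothetical sketch (chunk and wrap sizes are made up):

```c
#include <windows.h>

#define CHUNK     (64 * 512)            /* sector-multiple write size */
#define MAX_BYTES (256LL * 1024 * 1024) /* "critical" file size before wrapping */

/* Write one chunk, wrapping back to the start of the file once the
   critical size is reached, so old data is overwritten in place. */
BOOL ring_write(HANDLE h, const void *buf, LONGLONG *pos)
{
    DWORD written = 0;

    if (*pos + CHUNK > MAX_BYTES) {
        LARGE_INTEGER zero;
        zero.QuadPart = 0;
        SetFilePointerEx(h, zero, NULL, FILE_BEGIN);
        *pos = 0;
    }

    if (!WriteFile(h, buf, CHUNK, &written, NULL))
        return FALSE;

    *pos += written;
    return TRUE;
}
```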

Just some thoughts,

Herbert
