
LabVIEW Solid State Drive (SSD) Benchmarks



So I have just had a bit of a play. First off, it appears there is no cache even on PXI RT: I have just tried it, and disabling buffering throws an error.
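For reference, on desktop Windows "disabling buffering" corresponds to opening the file with FILE_FLAG_NO_BUFFERING; a minimal C sketch of what that call looks like is below (the path is made up), and presumably it is this flag that the RT file layer rejects:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bypass the OS file cache entirely; buffers, offsets and
           transfer sizes must then be sector-aligned. */
        HANDLE h = CreateFileA("D:\\bench\\test.bin", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }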

I still can't understand the periodic jitter you see, but I suspect you may be pushing the boundaries of the HD write speeds. I have just tried a similar setup on a PXI-8106RT and the fastest I see is 12ms for the 512kB files you are writing; based on what you said, we need to hit 10ms. Of course, newer targets may be able to achieve more.


Ok, further to my previous post I did some more testing of my own, based on some investigation on the forums. I'm working in a PC-based LV-RT environment, and I think the disc is formatted as FAT32. Anyway, I ran the profiler on my code until the spikes were reaching about 400ms. It turned out that the open file operation was taking a really long time. One of the suggestions on the forums was that FAT32 doesn't handle long file names well, so I removed all the fluffy information and just wrote the pure count as the file name. That resulted in MUCH greater stability, except that now the profiler didn't record the spikes, though they were still observable on the trace: less frequent, but still increasing in duration.

Another suggestion was that you shouldn't have too many files in a directory (as you mentioned, James), so I modified the code to create a new directory every 40 images. This stopped the monotonic increase in spike duration and reduced the random fluctuations in the spikes, but I still get periodic spiking (I think it may be related to the first access in each directory, as the spikes are spaced every 40 writes). Just to be clear, I'm not recording the creation of a directory in this timing information; that operation is sent as a separate command after 40 images, and then the write is clocked.
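The rotation logic itself is simple; for anyone wanting to reproduce it outside LabVIEW, a rough C sketch of the idea (the paths, names and the 40-file constant are just placeholders for what the actual VI does) might look like:

    #include <direct.h>   /* _mkdir */
    #include <stdio.h>

    #define FILES_PER_DIR 40

    /* Build the path for image number n, starting a fresh directory
       every FILES_PER_DIR images so no folder grows too large. */
    void image_path(char *buf, size_t len, unsigned n)
    {
        char dirname[64];
        snprintf(dirname, sizeof dirname, "D:\\data\\d%05u", n / FILES_PER_DIR);
        if (n % FILES_PER_DIR == 0)
            _mkdir(dirname);                  /* first file in this directory */
        snprintf(buf, len, "%s\\%05u.bin", dirname, n);  /* short FAT-friendly name */
    }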

Anyway, the results currently look like this:

[Attached image: gallery_16778_60_18972.png]

I'll try preallocating the files; it was something I was doing originally, but I stripped it out to benchmark basic file IO. I'll put it back in and see if I can get those write times down.

I'm still stumped by these spikes; they don't show up in the profiler as being due to one particular subVI...

Ok, further to my previous post, I modified the code to preallocate a file size (524800 bytes, enough for the image and the header); the results are below.

Note that I haven't actually solved the monotonic spike increase as I thought; I just didn't observe for long enough.

The write time is down to about 20ms but the spikes are really bad.

[Attached image: gallery_16778_60_8614.png]
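For anyone curious what the preallocation amounts to outside LabVIEW, Set File Size presumably reduces to something like this Win32 sequence (a sketch of the technique, not what LabVIEW actually executes internally):

    #include <windows.h>

    /* Reserve space up front so the subsequent write does not have to
       extend the file; equivalent in spirit to LabVIEW's Set File Size. */
    BOOL preallocate(HANDLE h, LONGLONG bytes)   /* e.g. bytes = 524800 */
    {
        LARGE_INTEGER size;
        size.QuadPart = bytes;
        if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN))
            return FALSE;
        if (!SetEndOfFile(h))                    /* commit the new length */
            return FALSE;
        size.QuadPart = 0;                       /* rewind for the real write */
        return SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    }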

Edited by AlexA

Well, of course you could pre-create the folders as well in a parallel process.

One piece of advice is to use exactly 512 kilobytes as the file size (524288 bytes), since FAT systems like powers of 2. Your value is just over that size.

Perhaps store the header in an index file per directory.
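Note the arithmetic behind those two suggestions: 524800 - 524288 = 512, suggesting a fixed header of at most 512 bytes. A C sketch of the split (the function and names are invented to illustrate the idea, not Ton's actual VI):

    #include <stdio.h>

    #define IMG_BYTES 524288   /* exactly 512 kilobytes, a power of 2 */
    #define HDR_BYTES 512      /* 524800 - 524288, assuming a fixed-size header */

    /* Keep each image file at exactly a power-of-2 size and append its
       header to a single per-directory index file instead. */
    int save_image(const char *img_path, FILE *index,
                   const unsigned char *hdr, const unsigned char *img)
    {
        FILE *f = fopen(img_path, "wb");
        if (!f)
            return -1;
        int ok = fwrite(img, 1, IMG_BYTES, f) == IMG_BYTES;
        fclose(f);
        if (!ok || fwrite(hdr, 1, HDR_BYTES, index) != HDR_BYTES)
            return -1;
        return 0;
    }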

Maybe adding some more folder layers would stabilize the increase; it looks like the lookup of folder names is biting you. What options do you use when creating a new file? Perhaps the 'replace or create' option helps.

Since it looks like the bookkeeping of FAT is the limit, you could open the file in the previous loop (as well as the folders):

[Attached image: post-2399-0-14424700-1341566038_thumb.pn]
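In text form, the pipelining in that diagram boils down to opening file n+1 before the time-critical write of file n; a C sketch of the ordering (write_image is a hypothetical stand-in for the actual write, and in the LabVIEW version the open runs in a parallel loop rather than being hoisted like this):

    #include <stdio.h>

    extern void write_image(FILE *f, unsigned n);  /* hypothetical writer */

    /* Open file n+1 before writing file n, so the expensive create/open
       no longer sits inside the time-critical write path. */
    void stream_images(unsigned count)
    {
        char path[64];
        snprintf(path, sizeof path, "d%05u\\%05u.bin", 0u, 0u);
        FILE *cur = fopen(path, "wb");
        for (unsigned n = 0; n < count && cur; n++) {
            FILE *next = NULL;
            if (n + 1 < count) {                 /* pre-open file n+1 */
                snprintf(path, sizeof path,
                         "d%05u\\%05u.bin", (n + 1) / 40, n + 1);
                next = fopen(path, "wb");
            }
            write_image(cur, n);
            fclose(cur);
            cur = next;
        }
        if (cur)
            fclose(cur);
    }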

Ton


What are the final requirements for this? You mentioned that you have images coming in at up to 100Hz. Is this a continuous process, or is there downtime where we can catch up?

It appears that even the good writes take 20ms, which will only allow you to achieve 50Hz even if we can remove these spikes; is this acceptable for your application? My concern is that at 100Hz you have over 50MB/s going to disk, which I'm not sure is going to be achievable with a traditional HDD on an RT system. Alternatives would be seeing whether an SSD would be faster, or squeezing the data down gigabit Ethernet to a system which supports RAID.
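Putting rough numbers on that: 100Hz x 512kB = 100 x 524288 bytes ≈ 52.4MB/s sustained, while a 20ms per-file write budget allows at most 1s / 20ms = 50 files per second, hence the 50Hz ceiling.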

I think preallocating would be good if you were streaming multiple times to the same file, but as you are constantly opening new files, I suspect you won't see much benefit.


Disclaimer: I have zero RT experience.

James's point is valid: a 20ms write time implies you won't be able to keep up with I/O at 100Hz. The 20ms time also makes sense given the strategy you've been taking. Most hard disks have average seek times on the order of 10ms; even with server-class disks, I think seek times still measure in the 1-5ms range. Most vendors stopped specifying seek times a long time ago, though, because the numbers are so bloody awful and haven't changed much in the last two decades. Solid state drives do get you into the sub-millisecond domain, though.

That said, I think having one file per image might not be the best way to do it; a pre-allocated larger file might give you better throughput. Modern drives are capable of impressive throughput as long as you're operating on contiguous segments and don't need to do seek operations. Maintaining the disk layout will also be key here: think defragmented data. I don't think you can do this in LabVIEW, so you might need to invoke a third-party library.

Also, if you start mucking about with the frequency of the incoming data, do you still see the spikes every 40 files? I'm wondering if perhaps something else is accessing the disk.


[Quoting the above:] Maintaining disk layout here will also be key, think defragmented data. I don't think you can do this in LabVIEW so you might need to invoke a third party library.

I believe using Set File Size from the File IO palette should ensure this (though the documentation does not shout it out the way it does for the equivalent TDMS function).


You can benchmark your disks using the Win32 File IO benchmarks, which were designed to show the difference between the native LabVIEW functions and Win32 IO.

If I remember correctly (it's been a while since I played with ETS), the Win32 ones should work. If not, there are the native LabVIEW ones, which definitely work.
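If neither set loads on a given target, a bare-bones timing loop is easy to roll by hand; a C sketch using QueryPerformanceCounter (the file size, count and paths are invented) would be:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define FILE_BYTES 524288
    #define N_FILES    100

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        char path[64];
        char *buf = calloc(1, FILE_BYTES);
        if (!buf || !QueryPerformanceFrequency(&freq))
            return 1;

        for (int i = 0; i < N_FILES; i++) {
            snprintf(path, sizeof path, "D:\\bench\\%03d.bin", i);
            QueryPerformanceCounter(&t0);
            FILE *f = fopen(path, "wb");
            if (!f)
                break;
            fwrite(buf, 1, FILE_BYTES, f);
            fclose(f);
            QueryPerformanceCounter(&t1);
            /* per-file time in milliseconds; the spikes show up here */
            printf("%3d: %.2f ms\n", i,
                   1000.0 * (t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
        }
        free(buf);
        return 0;
    }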

And just for sheer awesomeness, here's the benchmark for my SSD :)

Edited by ShaunR

That's the sort of performance we could do with in this case! However, I don't think these will run on RT. They give us the ability to write unbuffered to the hard disk on Windows; on RT I tried this option earlier and it threw the fantastic :frusty: generic file IO error (and on Windows you can now do this in the native API).

It would be good to confirm whether that is the case.

I checked with a few people on Set File Size. With this, we request a file of that size to be reserved by the OS, but we still can't guarantee that the OS won't fragment the file.


[Using Set File Size] will request a file of that size to be reserved by the OS, but we still can't guarantee that the OS won't fragment the file.

Thanks James! Even a negative answer is better than not knowing.


Hi guys, thanks for the insights! A lot of good information from you all; sorry it took so long for me to get back to it. I did a few tests to side-step the issue. One of the biggest reasons I was doing file IO on the RT system was that 100Hz at 512kB per image is ~50MB/s of throughput, which was too much to push everything to the host computer via the network (when I was on the Uni network). I linked up my host and RT computers with a crossover cable and ran a simple test that shows I should be able to get images over the network at a rate of about 80Hz without loss (does this seem right to you guys?).
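As a rough sanity check on that 80Hz figure: 80 x 524800 bytes ≈ 42MB/s ≈ 336Mbit/s, which sits comfortably inside gigabit Ethernet's theoretical 125MB/s, so 80Hz over a crossover cable looks plausible; the full 100Hz (~52MB/s, ~420Mbit/s) should also fit on the wire, which suggests the observed limit is elsewhere in the stack.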

Combined with the fact that file IO on the Windows machine takes ~4ms to save a file, I've chosen to side-step the problem like that. Sorry I didn't take it further and really investigate what could be done on the RT system to speed things up!

1 year later...

Hey guys, sorry to reanimate this topic.

Just wondering if, since last year, anyone has had a chance to play with solid-state discs in a PC-based LV-RT setup? We're about to get a solid-state drive at my lab. I'll try chucking it in the RT machine and see what can be done with it in terms of high-speed, high-volume file IO. Before I do, though, I was just wondering if anyone has had any success with it before?

Cheers,

Alex
