alexadmin Posted April 8, 2008

I tried to capture a ~1 MB/s data stream from hardware and write it to a text file (with some formatting). But I ran into a problem: my LabVIEW program writes the file at less than 100 kB/s. I attach a simple example (you need to set a correct file name) - it takes ~10 minutes to write a 50 MB file. Where am I going wrong? I am certainly working on a modern PC (Pentium IV, SATA HDD, etc.). An equivalent program built with C++ has no such problems.
Ton Plomp Posted April 8, 2008

QUOTE (alexadmin @ Apr 7 2008, 01:50 PM) ...my LabVIEW program writes the file at less than 100 kB/s... it takes ~10 minutes to write a 50 MB file...

A few things might speed up file operations:
- Write in disk-compatible sizes (e.g. multiples of 512 bytes)
- Preallocate disk space (set the file size in advance)

Another question: is the code you've shown representative (one byte per write)? Writing one byte at a time, I get 1 MB after 30 seconds. If I write 1 KB per write action I get 306 MB after 30 seconds. With 1 MB per write action I get 514 MB after 30 seconds. I think the problem is in another part of your program, but without any code it is hard to tell.

Ton
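Ton's numbers come from LabVIEW, but the effect is easy to reproduce in any language. Below is a rough Python sketch of the same comparison - per-byte writes versus chunked writes - with an invented file name, payload and chunk size; it only illustrates why the payload per write call dominates throughput, it is not the attached VI.

    import os, time

    DATA = os.urandom(1 << 20)  # 1 MiB of dummy payload

    def write_per_byte(path, data):
        # one operating-system write per byte, roughly what the original VI does
        with open(path, "wb", buffering=0) as f:
            for i in range(len(data)):
                f.write(data[i:i + 1])

    def write_chunked(path, data, chunk=4096):  # a multiple of 512 bytes
        # the same data, handed to the OS in disk-friendly blocks
        with open(path, "wb", buffering=0) as f:
            for i in range(0, len(data), chunk):
                f.write(data[i:i + chunk])

    for fn in (write_per_byte, write_chunked):
        start = time.perf_counter()
        fn("speed_test.bin", DATA)
        rate = len(DATA) / (time.perf_counter() - start) / 1e6
        print(fn.__name__, "%.2f MB/s" % rate)

The per-byte version typically lands orders of magnitude below the chunked one, which matches the 1 MB vs 306 MB vs 514 MB figures above.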
Mellroth Posted April 8, 2008

QUOTE (alexadmin @ Apr 7 2008, 02:50 PM) ...writes the file at less than 100 kB/s. I attach a simple example (you need to set a correct file name) - it takes ~10 minutes to write a 50 MB file...

If you write data in larger chunks I don't think there is a problem; maybe you can buffer the data instead of writing the file byte by byte? As a test I generated 5 MB of data and wrote it to the file in one shot, and the code completed in a few seconds.

/J
Phillip Brooks Posted April 8, 2008

Attached is a sample file (.LLB) that I created as an example of logging UDP data at high speed. I've tested this on a P4 3.0 GHz machine with a SATA drive and 2 GB of RAM. I've logged data at 4.9 MB/s with 45% CPU load. This example does not format the data written to disk; it is binary. As long as your formatting isn't excessive, this technique should work for you.

Download File: post-949-1207578519.llb
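The attached .LLB itself cannot be reproduced here, but the core idea - receive UDP datagrams and append them to a binary file without any text formatting - looks roughly like this as a Python sketch (the port and file name are invented for illustration):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))                    # hypothetical port
    with open("udp_log.bin", "ab") as log:
        while True:
            datagram, _addr = sock.recvfrom(65535)  # one datagram per read
            log.write(datagram)                     # raw bytes, no formatting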
alexadmin Posted April 8, 2008

Yes, you are right about big data packets - they give a considerable increase in transfer rate in the test application ;-) I knew about the inefficiency of per-byte I/O, but I didn't expect such a huge degradation. I don't flush the buffers manually, so the physical writes should be handled automatically by the I/O library.

OK, I changed my application slightly - it now uses "Format Into String" with data buffering instead of "Format Into File". Unfortunately, the result is worse. I attach my model - it cannot run without the FTDI USB driver, but it shows my implementation. It reads data from the USB port and writes it to a file in hex format, 5 bytes per line. Could you help me find the bottleneck in the design, please?
Phillip Brooks Posted April 8, 2008

QUOTE (alexadmin @ Apr 7 2008, 11:08 AM) It reads data from the USB port and writes it to a file in hex format, 5 bytes per line. Could you help me find the bottleneck in the design, please?

I think the problem is that you are reading from the USB device and writing to disk inside the same loop. In the example I provided, there are several loops, and data is passed between them via a queue. The logging loop executes at a much slower rate (200 ms intervals) and is not dependent on the completion of reads from the receiver. If you use separate loops for your receiver and logger and pass the data between them using a queue, I think you will see much better performance.
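The two-loop structure Phillip describes translates to any language with threads and a queue. A rough Python sketch follows; the acquire() stub, chunk size, timing and file name are placeholders, not the code from the attached LLB.

    import queue, threading, time

    q = queue.Queue()

    def acquire():
        time.sleep(0.001)            # stand-in for a blocking USB/FTDI read
        return b"\x00" * 1024        # ~1 MB/s of dummy data

    def reader_loop():
        # producer: runs as fast as the hardware delivers data
        while True:
            q.put(acquire())

    def logger_loop():
        # consumer: wakes up every 200 ms and writes everything queued so far
        with open("capture.bin", "ab") as log:
            while True:
                time.sleep(0.2)
                chunks = []
                while not q.empty():
                    chunks.append(q.get())
                if chunks:
                    log.write(b"".join(chunks))  # one large write per interval

    threading.Thread(target=reader_loop, daemon=True).start()
    logger_loop()

The point of the split is that the size of each disk write is decided by the logger, not by however the hardware happens to deliver the data.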
Mads Posted April 10, 2008

As others have commented here, buffering the data makes the file IO much faster, and it could be a good idea to separate the logging from the sampling. If I assume the USB returns about 1000 bytes on each read, doing the formatting and writing the data to file the way you do runs at about 2.2 MB/s on my machine.

One trick you can apply to bump that up a bit is to not build the string the way you do (feedback node etc.) - instead, just output the string and let it be auto-indexed by the for loop, then use a single Concatenate Strings input on the output array to get the string. The speed you gain by this depends on the length of the arrays; with a 1000-byte input the logging went up to 3.1 MB/s just by doing this.

The fact that things slow down that much when you do the sampling in the same loop might point to a problem with that part rather than the file IO... How fast does that part run if you skip the file IO?

On a side note, I would suggest that you try to keep the diagrams more compact - it was barely readable on a 1280x1024 display, and there was not really any reason for it; the code could easily have fit vertically. If you need space on your diagram later on, just hold down the Ctrl key while you click and drag the area you want onto the spot you need it. When programming in LabVIEW it's also good to trust (and/or make) the data flow drive the execution. Most of the sequence structures you have are either unnecessary or could easily be replaced by pure data flow.

QUOTE (alexadmin @ Apr 7 2008, 01:50 PM) I tried to capture a ~1 MB/s data stream from hardware and write it to a text file (with some formatting)...
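Mads's string-building tip has a direct analogue in text-based languages: collecting the pieces and concatenating once scales much better than growing one string through a feedback node or shift register. A small Python sketch of the contrast (the input data is fabricated):

    data = bytes(range(256)) * 400             # ~100 KB of fake input

    # "feedback node" style: grow one string, copying it over and over
    text = ""
    for b in data:
        text += "%02X " % b

    # "auto-indexed output" style: build all pieces, concatenate once at the end
    text = "".join("%02X " % b for b in data)

In the LabVIEW diagram, the auto-indexed loop output plus a single Concatenate Strings call plays the role of the final join here.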
alexadmin Posted April 10, 2008

I made a new example which does not depend on the USB device and is more compact. It is based on the UDP logger example and some of the suggestions above. It uses a queue to transmit data between parallel processes: the source process generates data from a predefined array and puts it into the queue; the output process reads data blocks from the queue and writes them to the file. The running period of both processes is controllable from the front panel.

I performed experiments with various array sizes and read/write periods and found that the maximum speed is about 260 kB/s when writing to file is enabled. I suppose the problem is not the writing to file but the performance as a whole. But I can't come up with any new implementation ideas.
PJM_labview Posted April 11, 2008

Write binary data if you can; this will be very fast. I did a quick test and wrote about 500 MB in 30 seconds (~16 MB/s).

PJM
alexadmin Posted April 11, 2008

QUOTE (PJM_labview @ Apr 10 2008, 04:37 AM) Write binary data if you can; this will be very fast. I did a quick test and wrote about 500 MB in 30 seconds (~16 MB/s). Below is a screenshot of the VI I use to write the data. Like others have said, I send chunks of data to an asynch loop (via a queue) that calls this VI.

My test model above (speed_test) does a similar thing - it writes chunks of binary data taken from a queue. But the problem is in the data generation (processing). The processing limit is about 250 kB/s. Processing consists of reading a new byte from the array, converting it to a hex (text) representation, and putting it into the queue.
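One way past that processing limit is to format (and enqueue) a whole chunk at a time instead of a single byte at a time. A Python sketch of chunk-level formatting into the 5-bytes-per-line hex layout described above (the function name and chunk size are made up for illustration):

    def chunk_to_hex_lines(chunk, per_line=5):
        # turn a block of raw bytes into hex text, 5 bytes per line, in one pass
        lines = []
        for i in range(0, len(chunk), per_line):
            lines.append(" ".join("%02X" % b for b in chunk[i:i + per_line]))
        return "\n".join(lines) + "\n"

    # one queue element, one format call and one file write per chunk
    print(chunk_to_hex_lines(b"\x01\x02\x03\x04\x05\x06\x07"))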
Ton Plomp Posted April 11, 2008

QUOTE (alexadmin @ Apr 10 2008, 09:14 AM) Processing consists of reading a new byte from the array, converting it to a hex (text) representation, and putting it into the queue.

Why do you send the bytes one by one into the queue? Here's the same code rewritten (under the line): http://lavag.org/old_files/monthly_04_2008/post-2399-1207813579.png

Ton
Mads Posted April 11, 2008

Like you say, file IO is not the problem - it is faster than the data generation. Formatting the data takes 0.94 µs per byte on my machine (generating 3 bytes of formatted output), which means the maximum rate of file-ready data that can be generated is 3 bytes / 0.94 µs = 3,191,489 bytes/s = 3.04 MB/s. That is the same file write speed I achieved yesterday... In other words, it's not a problem to write the data, but you cannot format it any faster than about 3 MB/s.

I'm not sure why you only get a few hundred kB/s, but it could be because you actually use a wait in the write loop and have a very small array... With, e.g., an array of 50 and a wait of 10 ms, that loop will only generate 14.6 kB/s.

If the sampling device is outputting data faster than this, I would skip the formatting and just write the data directly to disk. You could then generate the formatted file at a different time - or in a parallel loop. Ironically this would swap the whole approach - put the file IO in the same loop as the data sampling... but separate out the formatting loop - it's too slow :-)

Mads

QUOTE (alexadmin @ Apr 10 2008, 08:14 AM) My test model above (speed_test) does a similar thing - it writes chunks of binary data taken from a queue. But the problem is in the data generation (processing)...
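Mads's "swap" - log the raw bytes immediately and produce the human-readable hex file later, or in a parallel loop - might look like the following sketch (Python again, with placeholder file names; it is not a drop-in replacement for the VI):

    def log_raw(chunk, log_file):
        # fast path: append the bytes exactly as they arrive, no formatting
        log_file.write(chunk)

    def convert_to_hex_file(src="capture.bin", dst="capture.txt"):
        # slow path: run later (or in a parallel loop) to build the text file
        with open(src, "rb") as fin, open(dst, "w") as fout:
            while True:
                block = fin.read(65536)
                if not block:
                    break
                for i in range(0, len(block), 5):
                    fout.write(" ".join("%02X" % b for b in block[i:i + 5]) + "\n")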
alexadmin Posted April 11, 2008

tcplomp, thank you for your interest. I tried a similar structure before, and I have now implemented your example. For unknown reasons the result is worse than the previous one - it gives only 120-140 kB/s.

QUOTE (Mads @ Apr 10 2008, 02:22 PM) Like you say, file IO is not the problem - it is faster than the data generation. Formatting the data takes 0.94 µs per byte on my machine (generating 3 bytes of formatted output).

Oh, how did you get this value? I tried to use the Profiler but could not get the hang of it ;-)

Thank you for the suggestion. I will try it now.

PS: I am constrained by a limitation of the USB driver - it has a ~64 kB buffer, so I need to read it at least every 30-40 ms.
Mads Posted April 11, 2008

Well, how large are the arrays you are using, and what is the write wait time? You can generate up to 3 MB/s and the file IO will handle that; the data formatting will not run any faster.

Unrelated to the problem at hand, but a tip for the future: the code can be written much more compactly (both in logic and display size). Attached is a picture of basically the same approach (no optimization of the logic, though - that is still the same as in your speed test). Not optimal code either, but much easier to read.

QUOTE (alexadmin @ Apr 10 2008, 12:31 PM) tcplomp, thank you for your interest. I tried a similar structure before, and I have now implemented your example. For unknown reasons the result is worse than the previous one - it gives only 120-140 kB/s.
alexadmin Posted April 11, 2008

QUOTE (Mads @ Apr 10 2008, 02:49 PM) Well, how large are the arrays you are using, and what is the write wait time?

My test applications are for getting familiar with LabVIEW; I am evaluating whether LabVIEW is adequate for our tasks. We plan to produce several hardware boards (USB, PCI, PCIe) for demodulation, telemetry, etc. The planned rates are up to 6 MB/s for full data logging, plus the maximum momentary rates the bus allows for data-window capturing (PSD graph plotting, oscilloscope display, raw waveform analysis).

QUOTE You can generate up to 3 MB/s and the file IO will handle that; the data formatting will not run any faster.

Hmm. That is a pity. A 10-year-old Borland compiler gives 20 MB/s in this case. Perhaps we will use LabVIEW for the GUI and a simple data server for data logging, with communication between them via sockets.

QUOTE Unrelated to the problem at hand, but a tip for the future: the code can be written much more compactly (both in logic and display size)...

Thank you. I will try it.