
Video Player/Recorder Methodology



Hello all,

I'm working on a conceptually fairly simple project: coding a video recorder/player that can record DAQmx channels alongside the video. I'm using the IMAQ AVI functions in combination with some queue-powered buffers I've wired. I have a working video player, but my video recorder application still can't do better than 15 frames per second.

I've done some timing tests on my program and noticed the following: if I record video/DAQ at 20fps, I should have 50ms between frames to queue all my data to my write buffer. My sub-VIs only account for 30ms of processing time in that window, but the whole process of acquiring a frame still takes at least 75ms. Could queues and primitive functions really be taking up to 20ms? How slow are queues really? I am using the IMAQ buffers described in the LL Ring.vi example, so the actual IMAQ acquisition is fairly quick, and on top of that my write loop also uses a ring buffer.

Right now I'm thinking my approach might just be wrong. Does anyone know how media recording software is constructed in general? Has anyone attempted this task?

Thanks for any help!

Cheers,

Jan


Hi Jan,

You should be able to do 20fps and probably faster, but this figure is heavily dependent upon the following:

1) Are you displaying the captured frame in real-time on the screen?

2) What is the resolution and image type (8-bit grayscale or 32-bit RGB)?

3) What video codec are you using to create the AVI file? Microsoft MPEG-4 Video Codec V2 is the best legacy codec that IMAQ can handle, but it depends on whether you can accept a lossy recording.

4) Use the IMAQ Grab Acquire.vi for high-speed image acquisition.

5) I am using queues to pass a single image buffer across to a processing task and it takes essentially 0ms, so I doubt that is your problem; after all, it is only passing a reference and not the whole image data.
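To put a rough number on "basically 0ms", here is a quick micro-benchmark. It's sketched in Python rather than LabVIEW, so only the order of magnitude carries over, but the point stands: enqueueing and dequeueing a reference-sized element costs on the order of microseconds, not the tens of milliseconds Jan is trying to account for.

```python
import time
from queue import Queue

q = Queue()
n = 10_000

start = time.perf_counter()
for i in range(n):
    q.put(i)   # enqueue a reference-sized element (no pixel data copied)
    q.get()    # dequeue it again
elapsed = time.perf_counter() - start

per_pair_us = elapsed / n * 1e6
print(f"{n} enqueue/dequeue pairs took {elapsed*1000:.1f} ms "
      f"({per_pair_us:.2f} us per pair)")
```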

I have attached an example that can capture, display and record an AVI movie at around 20fps on my P4 2.8GHz, 1GB RAM, MS XP, 100GB HDD.

I think your slowest step will be the encoding of the AVI file. IMAQ currently only supports the old VFW legacy format, and most of those codecs are awful (e.g. Microsoft RLE, Indeo 5.10, MJPEG compressor, etc.). I think (hope) that the next release of Irene's IVision toolkit will have support for writing AVI files using more recent codecs (e.g. Xvid, DivX, TSCC).


What capture board are you using? I have been successful at creating a continuous capture\display\save application at 30fps (or pseudo-60fps\320x480 resolution using uninterlaced capture) with IMAQ and the 1409 capture board, as well as synchronizing it with DAQmx operations (on an M series board). I think you have most of the concepts down, but I'll lay out a few guidelines. If you are still having trouble, I may be able to strip that section of the code from my larger application and post an example.

1) Use 2 loops: a capture loop, and a process\display\write loop. As you have already realized, the LL Ring example is a good starting point. Configure multiple image buffers (I use 30-100 to be safe) and use IMAQ Extract Buffer with a running buffer count to acquire the frames.

2) In each loop iteration, don't just capture one frame. Compare the current cumulative buffer number against your running total, then extract and enqueue all pending buffers in a small for loop. It's kind of like flushing the frame capture buffer. Use a wait primitive in the capture loop of about 33ms (or a Wait Until Next ms Multiple). This way you'll minimize processor monopolization, but you won't fall behind.
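The flush-all-pending pattern in step 2 can be sketched like this. It's a Python stand-in for the LabVIEW loop; `get_cumulative_count` and `extract_buffer` are hypothetical callbacks standing in for the IMAQ buffer-count query and IMAQ Extract Buffer:

```python
import time
from queue import Queue

FRAME_PERIOD_S = 0.033   # ~30 fps loop wait, as in the post

def capture_loop(get_cumulative_count, extract_buffer, out_queue, n_frames):
    """Each iteration, drain every buffer acquired since the last check."""
    done = 0   # running total of frames already extracted
    while done < n_frames:
        pending = get_cumulative_count() - done
        for i in range(pending):          # small for loop: flush the backlog
            out_queue.put(extract_buffer(done + i))
        done += pending
        time.sleep(FRAME_PERIOD_S)        # stand-in for Wait Until Next ms Multiple
```

Because the loop drains the whole backlog each pass, a single slow iteration delays the display but never drops a frame.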

3) When you enqueue each frame, enqueue it as a flattened image string (using IMAQ Flatten Image to String), not as an image reference. If you only enqueue a reference, you can run into a race condition when you call IMAQ Extract Buffer later, since that releases the current (queued) image reference and makes it vulnerable to being overwritten before it can be dequeued and processed. This is less efficient, since the image string data is much larger, but LV handles queues very efficiently, especially when the size of your string never changes. This way, your images are isolated from the IMAQ buffer list.
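The race in step 3 is easy to demonstrate with a Python stand-in, where a `bytearray` plays the role of an IMAQ ring-buffer slot: queuing the reference leaves the queued frame exposed to the next overwrite, while "flattening" to an immutable string isolates it.

```python
from queue import Queue

ring_slot = bytearray(b"frame-0")   # one slot of a simulated ring buffer

q_ref, q_copy = Queue(), Queue()
q_ref.put(ring_slot)                # enqueue the reference: still aliased
q_copy.put(bytes(ring_slot))        # "flatten to string": immutable copy

ring_slot[6:7] = b"1"               # the hardware overwrites the slot

print(q_ref.get())    # the queued frame was clobbered: now reads frame-1
print(q_copy.get())   # the copy still reads frame-0
```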

4) In your processing loop, flush the queue in each iteration, and again use a small for loop to process each image (unflatten using the Unflatten From String primitive). Write the data to your AVI file in this mini-loop, and only display the last image from the flushed queue. Again, put a 33ms or so wait (or Wait Until Next ms Multiple) in the while loop so that you minimize processor monopolization but still theoretically maintain proper loop speed.

5) If you've got good processor\RAM speed, then each frame will be extracted, queued, dequeued, processed, written, and displayed individually. However, if there are any delays or your computer can't keep up, you will be protected from buffer overrun, and your display will stay real-time (albeit choppy). My video streaming program has been tested on a relatively basic 2GHz (hyperthreaded)\2GB computer at full 640x480 capture, with many other parallel processes (including DAQmx and VISA tasks), and Task Manager says my CPU usage is 40% (on each "processor").

6) Finally, there are several other codecs that might be of interest to you. You can go to www.free-codecs.com to find lots of codec packs that can be installed and do work with LV. I personally use Microsoft's MPEG4 V3 codec (available from Microsoft's website), which is faster and more accurate than V2 (according to the AVI Codec Compressor Comparison example program). You might run into some issues with getting Windows Media Player and\or LV to recognize the new codecs. I had to install the attached program (Windows Media Tools) to get Windows (and LV) to truly recognize all the extra installed codecs (i.e., showing up in the list produced by IMAQ AVI Get Filter Names). It's some utility released by Microsoft a long time ago, and is now unsupported, but it still works to force registration of all installed codecs with Windows.

7) If you want to synchronize DAQmx with IMAQ, you need an RTSI cable. Once it's installed, you can export a digital trigger from your DAQmx task (using DAQmx Connect Terminals) to an RTSI pin. You can then use IMAQ Configure Trigger2 to configure a start trigger based on the RTSI signal.

Hope that helps. Good luck!


Thanks for the great feedback! I am indebted to all of you :D

I am displaying every frame in real-time using an embedded IMAQ WindDraw window and crelf's WindDraw embedding VI. The draw speed is actually fairly fast, on the order of 3ms. I have also tried displaying every second frame of the capture to speed things up, but without success. I am working with 8-bit 640x480 images.

I've played around with a few codecs and even got DivX working for a while, but I'm just sticking with Microsoft Video 1 at this point. The actual encoding of the AVI is independent of the capture (as I will explain in a second). (Side note: has anyone tried implementing x264? I compiled it with MinGW, but it's a long way off from LabVIEW integration.)

I'm using a 1410 frame grabber and a 6024E DAQmx card. They have an RTSI cable running between them, and I'm using that to synchronize/time the acquisition. In MAX, a simple grab shows that I can acquire at 29fps but display only 15fps. I think this could explain why 15fps is the tipping point for the DAQ buffer overflow I was experiencing.

I've designed my program with three loops instead of two: one GUI loop with an event structure that doesn't time out and waits for user input; one video loop which runs in real time at the video playback/record speed; and one buffer loop which runs as fast as possible (with a small 1ms delay) to either queue up frames into the read buffer or write frames to file from the write buffer. In acquisition mode the program uses the constructs from the LL Ring example, so I am using a ring buffer to acquire images. Unlike Yuri33's suggestion, though, I'm actually passing the image references to the write queue. So far I've been relying on the fact that Microsoft Video 1 is a fairly fast codec, and I haven't had any overwrite of the ring buffer before the AVI frame is actually written. At one point I also had a second ring buffer of IMAQ images to store the images as they pass between the 2nd and 3rd loops, but I've removed it in an effort to gain efficiency.

I like your idea of flushing the whole IMAQ buffer, Yuri, but is that possible if I want to synchronize DAQ acquisition with each frame? You also suggest flushing the whole queue for recording the AVI and only then displaying the image. Does this still ensure that images are displayed "live"? That is, is there a delay between what the camera records and what the user sees, because of the time taken to flush the whole queue and write each frame?

Currently I capture and display the image at the same time, but I write the AVI in a different loop. It sounds like the solution to speeding things up is to separate the display from the capture. I'll see if I can implement something like that and come back.

I really appreciate the help though! Plus I'm glad that I'm not the only one who's tried this.

Cheers,

Jan


1) I've never used the WinDraw functions (I always display images in an image display control on the front panel), and I don't know about crelf's embedding VI, but unless WinDraw is very inefficient, I doubt there is much impact between the two display methods.

2) I used to use a 1410 card before. It worked fine as well.

3) How do you know what MAX's display frame rate is? In my MAX, I can continuously acquire at 30fps (NTSC capture), and I assume that every captured frame is displayed, since the display is very responsive. Is there an indicator for the real frame rate?

4) Synchronization between the capture card and DAQ board is easy with LV. Each hardware component has its own (very accurate) onboard clock, so the only thing you need to worry about is synchronizing their starts, which is done as I explained above with an RTSI cable and IMAQ Configure Trigger2 (make sure to start the dependent tasks--IMAQ in this case--before the task that produces the digital trigger). Other than that, as long as both of your acquisitions are buffered and there are no buffer overruns, everything remains hardware timed, and your data will always be synchronized. Nothing in the manner you capture and save the data (e.g., frame by frame vs. a few frames at a time) will alter this timing.

5) My method will ensure that the most recently captured frame is always displayed, since you only display the last image in the flushed queue each loop iteration. This is as "live" as any program can be. If your computer is fast enough (and there's no reason it shouldn't be), then your queue will never be more than 1 element long, and you will be displaying every frame. You can easily check this by adding a probe on the queue size as you run your program. If, however, your computer can't process and save each frame in 33ms, then my method will still show the most recently captured frame ("live"), but the display will look jumpy, because not every frame will be displayed. This is because you will flush more than 1 frame each iteration. If you only process\save one frame per iteration, this may eventually lead to a buffer overrun; but if you process\save multiple frames per iteration as needed (which is always more efficient than one at a time), you will prevent the overrun. The AVI file that you are writing will play smoothly after the data collection (all frames captured and saved), but your "live" display will not. The same principle applies to the DAQmx acquisition. Each iteration, I read all available samples in the buffer (i.e., by wiring a -1 to the "samples to read" input) and queue that data for processing\display. If I only read one sample per iteration, I would be calling the read function too many times, which is very inefficient.
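The same drain-the-backlog idea on the DAQ side can be sketched as one read call that empties whatever the driver has buffered, instead of one call per sample. This is a Python stand-in; `hw_buffer` is a hypothetical software model of the driver's sample buffer, playing the role of wiring -1 to "samples to read":

```python
from collections import deque

def read_all_available(hw_buffer: deque):
    """One read call drains the whole backlog -- the analogue of wiring -1
    to 'samples to read' instead of reading one sample at a time."""
    samples = list(hw_buffer)
    hw_buffer.clear()
    return samples

hw_buffer = deque([0.1, 0.2, 0.3, 0.4])    # the driver buffered 4 samples
print(read_all_available(hw_buffer))        # one call returns all of them
print(len(hw_buffer))                       # backlog is now empty
```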

6) Most definitely separate the capture and display! Data capture (whether frame grabber or DAQ board) is always the highest priority, because you can't afford a buffer overrun. Processing and writing to file are the next highest in priority, with display of data the lowest. You should not have serial dependence between any of these priority tiers. The display is just to let us know things are working--it usually doesn't matter if it is choppy. As long as the record is complete, we can always see all the data after the fact.

7) I've never compiled a codec, but theoretically, if it is a legitimate encoder, it should work with LV. Every codec that shows up in ffdshow is available to me in LV, but not until I ran that Windows Media Tools program I attached in my earlier post.

