Yuri33 · Members · Posts: 42 · Days Won: 1
Everything posted by Yuri33

  1. QUOTE(crelf @ Jan 17 2008, 11:01 AM) But not by the user, correct? In my experience, if I disable (or disable & gray out) a control, then when the user attempts to change it (a button press for a boolean, an increment/decrement for a numeric, etc.), nothing happens, and no Value Change event is triggered. Perhaps the Mouse Down event is, but not the Value Change event, and I thought that was what was at issue in this thread. But I do understand that even if the control is disabled, that doesn't prevent a programmatically triggered Value Change event. Hence the second part of my suggestion that you mirrored.
  2. Raging Goblin, I'm the one who created the NI forum post you referenced. I never responded to the NI engineer because my code was both massive and proprietary. I continued to ask for support via private communication with an NI engineer, but they were unable to find a satisfactory solution beyond upgrading to LV8.5, which I was unable to do. I thought I was being clever by storing an array of generic references that I could typecast as necessary. The solution I eventually came up with is a dumbed-down way to do things, but at least it works: I created a global (not LV2, but a true global) that contained strictly typed references (i.e., they have a little orange star) to all the controls I needed to reference. At the beginning of my program, I created references to all these controls and stored them in the global. Whenever I needed a reference to a control, I simply read it from the global. When compiled, my program executed properly. What this means, however, is that I cannot dynamically reference my controls, since I'm using references to strictly typed controls. Compiled LV seems fine typecasting to a more generic form (either by "To More Generic Class" or by simply collecting a bunch of strict references into a common array), but as you posted, typecasting to a more specific class produced problems in the executable. One more thing: the global solution above worked for every control I had except Image controls. According to the NI engineer I worked with, there is NO way (at least in LV8.2) to store a strictly typed reference to an Image control, or even to pass one to a subVI, which to me is really strange. In the development environment, I could typecast the control to a generic form to store and/or pass it to subVIs (where it could be recast back to an Image control reference), but as you've seen, this doesn't work in the executable.
My only recourse was to execute all property reads/writes and/or methods for Image controls directly in the VI where the Image control was located (in my case, the top-level VI). This made for bloated and ugly sections in my top-level VI, but that was what I had to live with.
  3. How about simply enabling/disabling the control? That way the user can't trigger a Value Change event when you don't want one. And if you are programmatically triggering a Value Change event, you can simply read whether the control is enabled or not, and then decide whether to execute the code.
  4. Option 2) I understand passing the data by a string/variant queue, but why all the extra for loop/shift register stuff in the dynamic VI? Just preview the queue elements (use Get Queue Status with Return Elements = True so that you don't lose the data by dequeuing) and form a string array and a corresponding variant array. Whenever you need the appropriate data, search the string array for the data name and use that index to index the variant array, then convert the variant to the expected data type. If you encounter an error in the conversion, you obviously passed the wrong type of data for that data name. Option 3) Would a functional global work? Even though you dynamically spawn a VI, I believe any embedded functional global should still contain whatever you wrote to it in the calling program.
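The lookup scheme in Option 2 can be sketched in Python, with LabVIEW variants standing in as plain Python objects (all names here are hypothetical; the thread-unsafe peek at `q.queue` plays the role of Get Queue Status with Return Elements = True):

```python
import queue

def build_lookup(q):
    """Snapshot a queue of (name, value) pairs without consuming it."""
    names, values = [], []
    for name, value in list(q.queue):  # peek only; elements stay queued
        names.append(name)
        values.append(value)
    return names, values

def fetch(names, values, name, expected_type):
    """Search the name array, index the value array, check the type."""
    value = values[names.index(name)]
    if not isinstance(value, expected_type):
        raise TypeError(f"'{name}' holds {type(value).__name__}, "
                        f"expected {expected_type.__name__}")
    return value

q = queue.Queue()
q.put(("sample_rate", 100.0))
q.put(("channel_count", 2))

names, values = build_lookup(q)
print(fetch(names, values, "sample_rate", float))  # 100.0
print(q.qsize())  # still 2: nothing was dequeued
```

The type check mirrors the variant-conversion error in LV: asking for `sample_rate` as an `int` raises, which tells you the wrong data type was stored under that name.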
  5. It's a simple and pretty boring game, but I found myself trying to see how far I could go. I reached the point of frustration at level 7, where you have to balance the scale to within 2 units using only 1 ball.
  6. 32x32 seems big enough for me. In fact, most of my simple (get/set) functional globals are only 9x32. Do you really want a bunch of local-variable sized globals, especially when they become quite large if the data name is long?
  7. QUOTE(eaolson @ Nov 10 2007, 09:48 AM) Am I missing something here? I can create any size icon (up to 32x32) for any VI and/or control I want. Just leave the undesired space white (in all three versions of the icon--B&W, 16-color, and 256-color), and LV eliminates those areas from the icon. In fact, I've been able to create a multiple-data-type "global" by putting a bunch of mini functional globals into one polymorphic VI. That way, when I wire an input or output, my polymorphic functional global adapts to type (and get/set function) automatically.
  8. QUOTE(george seifert @ Nov 12 2007, 02:08 PM) Yes, passing values by queue is waaaaaaaaay better than by reference. Passing values by reference always causes a context switch to the UI thread, which is the slowest thread there is. Additionally, passing values by reference does not easily allow for buffered data transfers the way a queue does. In fact, using references can cause race conditions, which are Very Very Bad. The only times I use references to pass values are when I'm "mimicking" what a user might do (in which case you are already at "UI speed"), or when I need to change a single value (like one numeric or boolean) in a non-deterministic manner. Even in these two cases, I can usually find a better way to perform the task (user events, functional globals, etc.).
  9. I have programmed large projects where I needed to synchronize DAQmx data with serial port data. The key to performing this task is to make sure you are doing buffered recording, which is easily accomplished with queues. What I do is create multiple parallel threads (while loops) in a producer/consumer architecture: one loop reads the DAQmx data and queues it (1 queue element per sample), and another loop reads the GPS data and queues that as well. Finally, a third loop monitors the 2 data queues and only dequeues a block of samples from both queues when enough data exists. In other words, let's say the DAQmx task is producing 2 channels of data at 100 samples/s in a pretty steady fashion. In contrast, let's say your GPS data is producing 4 channels of data at 1 sample/s, but the data only comes in intermittently. Both producer loops will queue up the data as soon as it comes in (the DAQmx loop at a steady rate of 100 queue elements per second and the GPS loop at an unsteady rate of 1 element per second). You then have the consumer loop monitor the number of elements in each queue until an appropriate block of data is available (for example, 400 elements in the DAQmx queue and 4 elements in the GPS queue). When the conditions for a block have been met, your consumer loop can dequeue the appropriate block size, resample the data if needed, and display/record it as needed. The only remaining problem is ensuring that the two data streams are synchronized. Because you can't really "start" or "stop" the data that streams in from the serial port, you have to create an artificial "software start" for the serial data. I do this by flushing the serial port right before I start collecting data. That way, you know that all the data that streams in arrived after the flush event. We know that all the DAQmx tasks can be set to start at the same time (e.g., RTSI start triggers, etc.).
So the best we can do at synchronizing the serial data with the DAQmx data is to flush the serial port, and then immediately set the start trigger of the DAQmx tasks. After that, the queued producer/consumer model described above ensures that the data is appropriately recorded.
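The consumer side of the three-loop idea can be sketched in Python, using the hypothetical block sizes from the example above (400 DAQmx elements to 4 GPS elements, i.e. 4 s of data at each rate); in the real program this check would run inside the consumer while loop:

```python
import queue

daq_q = queue.Queue()   # ~100 elements/s, 1 sample per element
gps_q = queue.Queue()   # ~1 element/s, arriving intermittently

# Hypothetical aligned block sizes: 4 seconds of data at each rate.
DAQ_BLOCK, GPS_BLOCK = 400, 4

def try_dequeue_block():
    """Dequeue one aligned block only when BOTH queues hold enough data."""
    if daq_q.qsize() >= DAQ_BLOCK and gps_q.qsize() >= GPS_BLOCK:
        daq_block = [daq_q.get() for _ in range(DAQ_BLOCK)]
        gps_block = [gps_q.get() for _ in range(GPS_BLOCK)]
        return daq_block, gps_block
    return None  # not enough data yet; try again next iteration

# Simulate the two producer loops filling their queues.
for i in range(400):
    daq_q.put(i)
for i in range(4):
    gps_q.put(i)

block = try_dequeue_block()
print(block is not None)  # True: both thresholds were met
```

Because both blocks span the same wall-clock window (counted from the common software start), resampling or merging them afterward keeps the streams aligned.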
  10. Don't animated GIFs use up a lot of CPU in LV?
  11. Is there any reason you can't implement the same solution in LV that you implemented in Matlab (read in/process/spit out 10000 samples at a time)? LV provides open/read/seek/close functions equivalent to Matlab's.
  12. 1) I've never used the WinDraw functions (I always display images in an image display control on the front panel), and I don't know about crelf's embedding VI, but unless WinDraw is very inefficient, I doubt there is much difference between the two display methods.
2) I used to use a 1410 card before. It worked fine as well.
3) How do you know what MAX's display frame rate is? In my MAX, I can continuously acquire at 30 fps (NTSC capture), and I assume that every captured frame is displayed, since the display is very responsive. Is there an indicator for the real frame rate?
4) Synchronization between the capture card and the DAQ board is easy with LV. Each hardware component has its own (very accurate) onboard clock, so the only thing you need to worry about is synchronizing their starts, which is done as I explained above with an RTSI cable and IMAQ Configure Trigger2 (make sure to start the dependent tasks--IMAQ in this case--before the task that produces the digital trigger). Other than that, as long as both your acquisitions are buffered and there are no buffer overruns, everything remains hardware timed, and your data will always be synchronized. Nothing in the manner you capture and save the data (e.g., frame by frame vs. a few frames at a time) will alter this timing.
5) My method ensures that the most recently captured frame is always displayed, since you are only displaying the last image in the flushed queue each loop iteration. This is as "live" as any program can be. If your computer is fast enough (and there's no reason it shouldn't be), then your queue will never be more than 1 element long, and you will be displaying every frame. You can easily check this by adding a probe to watch the queue size as you run your program. If, however, your computer can't process and save each frame in 33 ms, then my method will still show the most recently captured frame ("live"), but the display will look jumpy, because not every frame will be displayed. This is because you will flush more than 1 frame each iteration. If you only process/save one frame per iteration, this may eventually lead to a buffer overrun; but if you process/save multiple frames per iteration as needed (which is always more efficient than one at a time), you will prevent the overrun. The AVI file that you are writing will play smoothly after the data collection (all frames captured and saved), but your "live" display will not. The same principle applies to the DAQmx acquisition: each iteration, I read all available samples in the buffer (i.e., by wiring a -1 to the samples to read input) and queue that data for processing/display. If I only read one sample per iteration, I would be calling the read function far too many times, which is very inefficient.
6) Most definitely separate the capture and display! Data capture (whether frame grabber or DAQ board) is always the highest priority, because you can't afford a buffer overrun. Processing and writing to file are next highest in priority, with display of the data the lowest. You should not have serial dependence between any of these priority tiers. The display is just to let us know things are working--it usually doesn't matter if it is choppy. As long as the record is complete, we can always see all the data after the fact.
7) I've never compiled a codec, but theoretically, if it is a legitimate encoder, it should work with LV. Every codec that shows up in ffdshow is available to me in LV, but not until I ran that Windows Media Tools program I attached in my earlier post.
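The "flush the queue, save every frame, display only the newest" behavior described in point 5 can be sketched in Python (the queue and the write/display callbacks are hypothetical stand-ins for the LV queue primitives and the AVI/display VIs):

```python
import queue

frame_q = queue.Queue()

def flush(q):
    """Drain everything currently queued, like flushing the frame queue."""
    frames = []
    while True:
        try:
            frames.append(q.get_nowait())
        except queue.Empty:
            return frames

def consumer_iteration(write_frame, display_frame):
    """One pass of the processing loop: save all pending, show the last."""
    frames = flush(frame_q)
    for f in frames:               # every frame reaches the file...
        write_frame(f)
    if frames:
        display_frame(frames[-1])  # ...but only the newest is displayed
    return len(frames)

written, shown = [], []
for i in range(3):                 # capture loop got ahead by 3 frames
    frame_q.put(f"frame{i}")
n = consumer_iteration(written.append, shown.append)
print(n, shown)                    # 3 ['frame2']
```

When the consumer keeps up, `flush` returns one frame per iteration and every frame is displayed; when it falls behind, the file still gets all three frames while the display skips straight to the newest one.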
  13. What capture board are you using? I have been successful at creating a continuous capture/display/save application at 30 fps (or pseudo-60 fps at 320x480 resolution using uninterlaced capture) with IMAQ and the 1409 capture board, as well as synchronizing it with DAQmx operations (on an M series board). I think you have most of the concepts down, but I'll lay out a few guidelines. If you are still having trouble, I may be able to strip that section of code from my larger application and post an example.
1) Use 2 loops: a capture loop and a process/display/write loop. As you have already realized, the LL Ring example is a good starting point. Configure multiple image buffers (I use 30-100 to be safe) and use IMAQ Extract Buffer with a running buffer count to acquire the frames.
2) In each loop iteration, don't just capture one frame. Compare the current cumulative buffer number to your running total, then extract and enqueue all pending buffers in a small for loop. It's kind of like flushing the frame capture buffer. Use a wait primitive in the capture loop of about 33 ms (or a Wait Until Next ms Multiple). This way, you'll minimize processor monopolization, but you won't fall behind.
3) When you enqueue each frame, enqueue it as a flattened image string (using IMAQ Flatten Image to String), not as an image reference. If you only enqueue a reference, you can run into a race condition when you call IMAQ Extract Buffer later, since that releases the current (queued) image buffer and therefore makes it vulnerable to being overwritten before it can be dequeued and processed. This is less efficient, since the image string data is much larger, but LV handles queues very efficiently, especially when the size of your string never changes. This way, your images are isolated from the IMAQ buffer list.
4) In your processing loop, flush the queue in each iteration, and again use a small for loop to process each image (unflatten using the Unflatten From String primitive). Write the data to your AVI file in this mini-loop, and only display the last image from the flushed queue. Again, put a 33 ms or so wait (or Wait Until Next ms Multiple) in the while loop, so that you minimize processor monopolization but still theoretically maintain proper loop speed.
5) If you've got good processor/RAM speed, then each frame will be extracted, queued, dequeued, processed, written, and displayed individually. However, if there are any delays or your computer can't keep up, you will be protected from buffer overrun, and your display will stay realtime (albeit choppy). My video streaming program has been tested on a relatively basic 2 GHz (hyperthreaded)/2 GB computer at full 640x480 capture, with many other parallel processes (including DAQmx and VISA tasks), and Task Manager says my CPU usage is 40% (on each "processor").
6) Finally, there are several other codecs that might be of interest to you. You can go to www.free-codecs.com to find lots of codec packs that can be installed and do work with LV. I personally use Microsoft's MPEG4 V3 codec (available from Microsoft's website), which is faster and more accurate than V2 (according to the AVI Codec Compressor Comparison example program). You might run into some issues with getting Windows Media Player and/or LV to recognize the new codecs. I had to install the attached program (Windows Media Tools) to get Windows (and LV) to truly recognize all the extra installed codecs (i.e., to show them in the list produced by IMAQ AVI Get Filter Names). It's a utility released by Microsoft a long time ago, and is now unsupported, but it still works to force registration of all installed codecs with Windows.
7) If you want to synchronize DAQmx with IMAQ, you need to use an RTSI cable. Once installed, you can export a digital trigger from your DAQmx task (using DAQmx Connect Terminals) to an RTSI pin. You can then use IMAQ Configure Trigger2 to configure a start trigger based on the RTSI signal. Hope that helps. Good luck!
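The "extract all pending buffers" bookkeeping from step 2 can be sketched in Python, with hypothetical callables standing in for the IMAQ driver calls (`get_cumulative_count` for the cumulative buffer number, `extract_buffer` for IMAQ Extract Buffer) and `last_seen` as the running total:

```python
def capture_iteration(get_cumulative_count, extract_buffer, last_seen, out_q):
    """One pass of the capture loop: grab every buffer filled since the
    last iteration and enqueue it, then return the new running total."""
    current = get_cumulative_count()
    for i in range(last_seen + 1, current + 1):
        out_q.append(extract_buffer(i))   # enqueue every pending frame
    return current

captured = []
# Pretend the hardware has filled buffers up to #5 while we last saw #2:
total = capture_iteration(lambda: 5,
                          lambda i: f"buf{i}",
                          2, captured)
print(total, captured)  # 5 ['buf3', 'buf4', 'buf5']
```

Run once per ~33 ms wait, this catches up on however many frames arrived during the wait, which is what keeps the loop from falling behind without spinning the CPU.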
  14. QUOTE(rolfk @ Jul 22 2007, 03:31 AM) Is there a list of undocumented calls like this to the LV library?
  15. I haven't tested whether this will work, but there might be a possible (roundabout) way to do this if you have the Vision toolkit installed. There is a hidden subVI called "IMAQ Create&LockSpace.vi" in "Compatibility.llb" in the vi.lib -> Vision directory. From the description, it allows you to allocate an image with a fixed size in memory, ensuring that that memory space is always available. It's only supposed to be used in situations where memory constraints are tight, but I believe it may work for your situation. Just use this VI to create a memory space, typecast your data to a series of "pixels" that can be mapped into this space, and then use "IMAQ GetImagePixelPtr.vi" to obtain a pointer to that memory space for your dll call. As long as you can ensure that your data will never grow beyond the allocated size, I believe the memory location will stay constant.
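A rough Python/ctypes analogue of the idea, just to illustrate the invariant being relied on (a buffer allocated once keeps a stable address for a DLL call as long as the data fits and the buffer stays alive); the IMAQ VIs themselves are not modeled, and all sizes here are made up:

```python
import ctypes

SIZE = 4096
buf = (ctypes.c_ubyte * SIZE)()   # one fixed-size allocation, made up front
ptr = ctypes.addressof(buf)       # the "pixel pointer" handed to the dll

# "Typecast" some data into the space, as long as it fits the allocation.
data = b"sensor payload"
assert len(data) <= SIZE
ctypes.memmove(ptr, data, len(data))

# The address never moves as long as `buf` is kept alive.
print(ctypes.addressof(buf) == ptr)   # True
print(bytes(buf[:len(data)]))         # b'sensor payload'
```

The LabVIEW version adds the locking guarantee (the image's memory block is never relocated by the Vision runtime), which is exactly the property the pointer from IMAQ GetImagePixelPtr.vi depends on.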
  16. 1) I've tried typecasting to check for a non-zero number, but the same number remains even after a read or write session is closed.
2) IMAQ AVI Get Info only works on read sessions. It will always return the same error for write sessions, whether the reference is valid or not.
3) I do proper error checking, but I like to protect all the relevant functions with reference validity checks first. In fact, I'm pretty sure that's one of the first things done in the dll calls themselves. If an error is produced by the Open or Create function, then I output a null reference. But the question is how to check whether that reference is really null down the line. I have a multithreaded application that captures a video stream and dynamically writes the data to disk. The capture and the write/display sections are contained in separate loops, with frames of data transferred via queues (of flattened images). I only want to write data when I've opened a file for doing so (which itself happens in a separate UI loop), and discard the data otherwise. That is why I want to check the AVI file reference.
4) The problem with checking for the presence of the file using normal "file exists" functions is that the AVI Create function doesn't actually write anything until the first frame is written. If you open a write session and then close it without writing any data, no file is created.
5) I'll see if any glaring reference-check calls exist in the Vision dlls, but all in all it's bad form not to include a reference-check function. If the primitive can't be used, then usually LV provides that function (e.g., semaphores). It's also rather strange that other Vision references, such as an IMAQ session reference, do work with the Not a Refnum primitive while the AVI file reference does not. I suppose I could simply try to write a frame and trap the error if the reference is bad, but that's rather inelegant. Additionally, I would probably have to carry extra information as well, such as the frame size (IMAQ AVI Write throws an error if you change the size of the image midstream, though I don't know if that error overrules an invalid-reference error). It might even be quite inefficient if the dll call does other things before checking the reference itself (allocating memory, etc.). In my application, where many data streams are being captured and recorded in parallel (video, digital and analog data, serial port data, etc.), efficiency is important.
  17. The IMAQ AVI file reference has a lot of strange properties that make it rather inflexible and hard to use. Chief among these is the fact that the "Not a Refnum" primitive in LV always returns false whether the AVI file reference is valid or not. Does anyone have a way to reproduce the proper functionality? I've tried a number of things, including typecasting to another reference type (including a datalog reference, since it seems related to that) and trying to use AVI File Info and trap the error. Nothing works. Is there any test to see if an AVI file reference points to an open file (whether it's a write session or a read session)? I'd settle for a write-session-only version if that's the only thing possible.
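For what it's worth, the "try an operation and trap the error" fallback can be expressed generically. This Python sketch uses an ordinary file handle standing in for the AVI reference, and a cheap probe operation (here, `tell()`) standing in for whatever inexpensive call fails fast on an invalid reference:

```python
import os
import tempfile

def reference_is_valid(ref, probe):
    """Return True if a cheap probe operation on the reference succeeds.

    Trades elegance for reliability when the runtime offers no native
    validity check: attempt the operation and trap the failure.
    """
    try:
        probe(ref)
        return True
    except (ValueError, OSError):
        return False

path = os.path.join(tempfile.gettempdir(), "example.avi")
f = open(path, "wb")
print(reference_is_valid(f, lambda r: r.tell()))  # True while open
f.close()
print(reference_is_valid(f, lambda r: r.tell()))  # False after close
```

The cost depends entirely on how cheap the probe is; with IMAQ AVI Write as the probe, you also inherit its side conditions (frame-size checks, possible allocations), which is the inefficiency worry raised above.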