Posts posted by bbean
-
The easiest way to do this is just to replace "States on back" with a User Event message to oneself, but with a little sophistication one can create reusable subVIs that either fire a delayed User Event or set up a "trigger source" of periodic events. Here's an example of a JKI state machine with two timed triggers:
Very impressed with the simplicity and elegance of that timer solution.
<possible thread hijack>
Along the same lines... do you guys ever use the DAQmx Events functionality within the JKI state machine in the same manner? E.g. using DAQmx events along with a DAQmx Read in the event structure, similar to the LabVIEW example:
\examples\DAQmx\Analog Input\Voltage (with Events) - Continuous Input.vi
Or do the downsides of reading DAQ at high speed in an event loop outweigh the potential benefits?
-
Hi bbean,
Thank you for your reply. Quick question before I try this: wouldn't this fail to work, since the data flow will be stuck in the Wait to Scan Complete while loop? If I press the Stop button, the event structure will not operate because I am stuck in the while loop.
I would need to stop the while loop, prior to moving on to the event structure. Please correct me if I am wrong.
The event structure is in a parallel loop (just like your consumer loops) with a branch of the VISA session going to it after your initialize. The event loop will be waiting for an event (in this case the stop button press).
When you press the stop button it will fire an event; in that event you would execute the VI I attached, which should kill the VISA session. That will cause an error in the Read VI (unlocking it) in your loop where you currently have the stop button control wired up.
-
Not sure this will work, but try using the attached VI in a parallel while loop with an event structure and an event case for the stop button's value change. It's just a wrapper around the viTerminate function in VISA.
Also put a sequence structure around the current stop indicator and wire the error wire from just above into the new sequence structure to enforce dataflow to the stop button. This way it will read the correct value after the event structure fires.
In terms of your idea with the timeout, I would like the hardware to do the counting of 15 minutes not the software.
Not sure why you would do this, but if it's your preference then go ahead. As you can see, it's causing you problems.
I am leaning towards modifying the subVI and constructing my own algorithm to read the buffer of the hardware.
Don't waste your time.
-
The last time I did data acquisition that required start triggering (ai/StartTrigger) was 5 years ago in a LV8.6 application. I seem to remember that the dataflow would pause at the slave's DAQmx Start Task.vi until the master arrived at its DAQmx Start Task.vi, then each task would proceed to their DAQmx Read VIs.
Fast forward to LV2014 where I'm trying to help someone on another project get DAQmx start triggering working. In LV2014, the process appears to operate differently and the slave's DAQmx Start Task.vi executes without pause and continues to the DAQmx Read.vi where it doesn't return data until the master AI input starts.
Did something change or am I growing old and developing dementia?
-
Recently I was having massive slowdowns opening VIs, including classes, but the culprit turned out to be an old link to an SCC Perforce server that was no longer valid. Not sure if that is your problem or not, but if you have source control enabled in LabVIEW, you could check that.
-
2nd this. Right-click on the read and write primitives and select "Synchronous I/O Mode>>Synchronous".
-
The manual says that you are supposed to send 0x00 for 1 sec,...
What if you decrease the frequency of the 0x00, e.g. have a 1 ms wait between each 0x00 send, and use a while loop that exits after 1 second of elapsed time (vs. a for loop)?
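The timed-send idea can be sketched in plain Python (the original is LabVIEW, so this is only an illustration of the loop structure; `write_byte` is a hypothetical stand-in for the actual serial/VISA write):

```python
import time

def send_wakeup(write_byte, duration_s=1.0, gap_s=0.001):
    """Send 0x00 repeatedly until duration_s has elapsed.

    A while loop on elapsed time replaces a fixed-count for loop,
    so the gap between bytes can be tuned without recomputing the
    iteration count. Returns the number of bytes sent.
    """
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        write_byte(b"\x00")   # one wakeup byte per iteration
        sent += 1
        time.sleep(gap_s)     # pacing between bytes
    return sent
```

The point is that the elapsed-time check guarantees "0x00 for 1 second" regardless of how long each individual write takes.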
-
I can't really tell from the NItrace or code. Do you have the manual that describes the wakeup protocol that you could also attach?
Is the touch panel a "real" serial port or a USB to serial converter?
One problem I've had in the past with USB comm (or USB->serial com) was with the OS sending the USB ports to sleep. Not sure if it could/would be able to do this on a "real" COM port. Check the Advanced Power Settings in Control Panel\All Control Panel Items\Power Options\Edit Plan Settings and disable this everywhere. You may also have to disable something in the BIOS. The worst part about the USB sleep feature is that the more efficient you make your program the more likely the OS is to power down the USB port.
-
Can you attach your 2009/2012 VIs and NI IO Trace logs?
-
As a workaround, what if you open with the option flag set to 0x100? It seems to release memory then. I don't know if it's bad form, or opens another can of worms, to open with "Prepare to call and collect" but never collect. The documentation seems to indicate so.
Quote:
If you use this option flag (x100), you must include one Wait On Asynchronous Call node for every call that you begin with a Start Asynchronous Call node to ensure that LabVIEW does not retain any started calls in memory indefinitely.
-
For me the event structure is in a state machine, and after getting an event it will go and do what it needs to. During this time the UI is no longer locked and will respond. So one solution is to make the code in the event structure very minimal, so it locks the UI for only a fraction of a millisecond.
Obviously this depends on the circumstances, but what mechanism do you use to execute the "do" portion of code that could hold up the event structure? Do you just pipe information into a parallel queued state machine or actor? Is there a post on lavag or on the darkside that lays out the pros and cons of this (using user events) vs. just directly piping the "do" into a separate parallel queued state machine queue?
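For what it's worth, the "keep the event handler minimal and hand the work off to a parallel loop" pattern looks roughly like this in plain Python (a text-language stand-in, since the original discussion is about LabVIEW; all names here are made up):

```python
import queue
import threading
import time

# The "event structure" handler does almost nothing itself: it just
# enqueues a message for a parallel worker loop, so the event loop
# (and the UI behind it) is never blocked by long-running work.

work_q = queue.Queue()
results = []

def worker():
    """Parallel 'queued state machine' loop: does the slow work."""
    while True:
        msg = work_q.get()
        if msg is None:          # shutdown sentinel
            break
        time.sleep(0.01)         # stand-in for the slow "do" code
        results.append(("done", msg))

t = threading.Thread(target=worker)
t.start()

def on_event(msg):
    # Event-handler body: O(1) work only -- hand off and return.
    work_q.put(msg)

for i in range(3):
    on_event(i)                  # three "button presses"
work_q.put(None)
t.join()
```

The design choice is the same either way you frame it: the handler's only job is to translate the event into a message; everything slow lives in the parallel consumer.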
-
Do all your loops with CAN communication have a Wait ms in them? Maybe CAN performance got better and the PC doesn't have time to sleep.
-
Once you have an image inside an IMAQ ref, never, never manipulate it outside of IMAQ (use only the IMAQ functions like copy, extract, etc.). Going across the IMAQ boundary (in either direction) causes huge performance hits. As AQ's signature states, "we write C++ so you don't have to".
What type of performance hit would you expect manipulating pixels using IMAQ ImageToEDVR and an IPE? Curious as to the difference between an algorithm implemented with this technique vs. a C DLL call, but I'm not near a development environment with Vision.
-
All three solutions have similar performance in all my scenarios, except when I limit the consumer loop to 25 Hz; in that case the producer in 1. is also limited to 25 Hz. The trivial solution shows image corruption in some cases.
Apart from that case, I never see the producer loop being faster than the consumer; they both stay at roughly 80 Hz, even though there is some margin: when I hide the display window, the producer goes up to its max speed (200 Hz in this benchmark). When the CPU is doing other things, the rates go down to the same values at the same time, as if both loops were synchronized. This is quite strange, because in both 2. and 3. the producer loop rate should be independent of the consumer.
The consumer really does only display, so there's no reason it would slow down the producer like this... It looks as if there's a global lock on the IMAQ functions? Everything is shared reentrant. The producer is part of an actor, with its execution system set to "data acquisition", and the consumer is in the main VI.
Are the Matrox dll calls thread safe? Are you making any dll calls in your image processing? Is it possible they are executing in the user interface thread?
-
Do you have 2012 or later as an option? If so, the IMAQ ImageToEDVR VI will be available.
Off topic: that looks like one of the most interesting/promising improvements to the Vision toolkit in a while.
-
CharlesB... I can't figure out where your race condition is. Also I'm not sure why you need all the extra mechanisms (semaphore, DVR, status) when you can achieve the same thing using 2 simple queues, as shown in my example. Plus the 2-queue approach guarantees you cannot work on the image being displayed until it is put back in the camera/processing pipeline. IMHO it is a simpler and easier-to-debug solution. The other thing my solution does is allow you to do the image processing in "parallel" to your acquisition.
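The 2-queue idea can be sketched in plain Python (not IMAQ; small bytearrays and buffer indices stand in for image refs, and the names are illustrative only):

```python
import queue

# Two-queue buffer recycling: buffers circulate between a "free" pool
# and a "filled" queue. The producer can only write into a buffer it
# has dequeued from the free pool, so the buffer the consumer is
# currently displaying can never be overwritten.

NUM_BUFFERS = 3
free_q = queue.Queue()
filled_q = queue.Queue()

buffers = [bytearray(4) for _ in range(NUM_BUFFERS)]
for i in range(NUM_BUFFERS):
    free_q.put(i)                # all buffers start in the free pool

def produce(frame_no):
    idx = free_q.get()           # blocks if every buffer is in flight
    buffers[idx][:] = bytes([frame_no] * 4)   # "acquire" into buffer
    filled_q.put(idx)            # hand the filled buffer to the consumer

def consume():
    idx = filled_q.get()
    pixels = bytes(buffers[idx]) # "display" (read) the buffer
    free_q.put(idx)              # recycle it back to the producer
    return pixels
```

Because a buffer index is only ever in one queue (or held by one loop) at a time, the queues themselves provide the locking; no semaphore is needed.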
-
ShaunR - I was unaware that you could use events like that with Imaq IO refs...very nice. I will have to remember that for my next vision app.
While the Godwin Global approach is nice, I think there are two issues: 1) I believe the poster is not using IMAQ camera interface (Matrox proprietary instead) and 2) Somehow his Imaq Image Display is getting corrupted by newer images when it is updated faster (400fps) than it can be painted by the OS.
I'm not suggesting the "triple buffering" approach is the proper solution here, but I am collaborating in the hope that he can see a "simple" LabVIEW queue approach can work.
PS. I'm surprised ShaunR doesn't have a "Cantankerous Callback" approach, and I haven't seen any Actor Framework approaches with multiple Actors.
-
Triple Buffering with simple queues and shift register.
-
Is the attached closer to what you want?
It prevents the "corruption" of the IMAQ image ref by taking it out of the cam/processing buffer while it is being displayed.
The third image reference in the VI is pretty much worthless, but I wanted to see if this was a step closer.
-
How fast can you run your acquire and processing loop if you do not display?
So passing the IMAQ reference to a display loop with a notifier doesn't work because you are afraid you may be overwriting the "displayed" image in the notifier loop?
-
Can you share your code? Are you doing any image processing? Can you drop frames on the image processing?
-
I think you are overthinking this. The inherent nature of a queue is your lock. Only place the IMAQ ref on the queue when the grab is complete, and make the queue a maximum length of 3 (although why not make it more?). The producer will wait until there is at least one space left when it tries to place a 4th ref on the queue (because it is a fixed-length queue). If you have multiple grabs that represent 1 consumer retrieval (3 grabs, then the consumer takes all three), then just pass an array of IMAQ refs as the queue element.
See my code
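The "queue as lock" idea maps directly onto a bounded queue in any language; here is a plain Python sketch (strings stand in for IMAQ refs, and the names are made up):

```python
import queue
import threading

# A fixed-length queue of refs: put() blocks once the queue already
# holds maxsize items, so the producer naturally waits for the
# consumer instead of needing an explicit semaphore or status flag.

refs = queue.Queue(maxsize=3)
consumed = []

def producer(n_frames):
    for i in range(n_frames):
        refs.put(f"img_ref_{i}")   # blocks while 3 refs are enqueued

def consumer(n_frames):
    for _ in range(n_frames):
        consumed.append(refs.get())  # frees a slot for the producer

t1 = threading.Thread(target=producer, args=(6,))
t2 = threading.Thread(target=consumer, args=(6,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The producer can run at most 3 frames ahead of the consumer, and frames are delivered in order; the back-pressure comes for free from the bounded queue.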
-
Threw together some quick code using an Express VI (gasp) to test the theory. Haven't run or debugged it other than with my webcam.
The Queue should prevent race conditions and provide a little margin on processing. Not sure how many data copies of the images will exist. Don't have time to investigate.
Bean
Delay values for large data, buffer?
in LabVIEW General
Posted
Use no delay and this Circular Buffer:
https://lavag.org/files/file/250-circular-buffer/
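For reference, the pattern behind a circular buffer can be sketched in a few lines of plain Python (this is only an illustration of the idea, not the linked LAVA package's actual implementation):

```python
class CircularBuffer:
    """Fixed-size circular buffer: writes wrap around and overwrite
    the oldest element once the buffer is full, so memory use is
    constant no matter how much data streams through."""

    def __init__(self, size):
        self._data = [None] * size
        self._size = size
        self._write = 0   # next slot to write
        self._count = 0   # how many slots hold valid data

    def write(self, value):
        self._data[self._write] = value
        self._write = (self._write + 1) % self._size  # wrap around
        self._count = min(self._count + 1, self._size)

    def read_all(self):
        """Return the current contents, oldest first."""
        if self._count < self._size:
            return self._data[:self._count]
        start = self._write   # oldest element sits at the write index
        return self._data[start:] + self._data[:start]
```

Because writes never allocate, this avoids the growing-array delays that show up when large data is appended naively in a loop.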