Posts posted by bbean

  1. The easiest way to do this is just to replace "States on back" with a User Event message to oneself, but with a little sophistication one can create reusable subVIs that either fire a delayed User Event or set up a "trigger source" of periodic events.  Here's an example of a JKI state machine with two timed triggers:

    Periodic Triggers with JKI statemachine.zip


    Very impressed with the simplicity and elegance of that timer solution.


    <possible thread hijack>

    Along the same lines... do you guys ever use the DAQ Events functionality within the JKI state machine in the same manner?  E.g., using DAQmx events along with a DAQmx Read in the event structure, similar to the LabVIEW example:


    \examples\DAQmx\Analog Input\Voltage (with Events) - Continuous Input.vi


    Or do the downsides of reading DAQ at high speed in an event loop outweigh the potential benefits?



  2. Hi bbean,


    Thank you for your reply. Quick question before I try this: wouldn't this fail, since the data flow will be stuck in the Wait to Scan Complete while loop? If I press the Stop button, the event structure will not respond because execution is stuck in the while loop.


    I would need to stop the while loop, prior to moving on to the event structure. Please correct me if I am wrong.



    The event structure is in a parallel loop (just like your consumer loops), with a branch of the VISA session wired to it after your initialize.  The event loop will be waiting for an event (in this case the stop button press).

    When you press the stop button it will fire an event; in that event case you execute the VI I attached, which should kill the VISA session. That will cause an error in the Read VI (unlocking it) in the loop where you currently have the stop button wired up.

  3. Not sure this will work, but try using the attached VI in a parallel while loop with an event structure and an event case for the stop button's value change.  It's just a wrapper around the viTerminate function in VISA.


    Also, put a sequence structure around the current stop indicator and wire the error wire from just above into the new sequence structure to enforce dataflow to the stop terminal.  This way it will read the correct value after the event structure fires.



    In terms of your idea with the timeout, I would like the hardware to do the counting of 15 minutes not the software.


    Not sure why you would do this, but if it's your preference then go ahead.  As you can see, it's causing you problems.



    I am leaning towards modifying the subVI and constructing my own algorithm to read the buffer of the hardware.


    Don't waste your time.

    VISA Abort Pending Calls.vi
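The pattern above — a parallel loop that tears down the session to unblock a pending read — doesn't translate directly to text since LabVIEW is graphical, but here is a rough Python analogy. A socket stands in for the VISA session and `recv` stands in for the blocked VISA Read; all names and the socket stand-in are illustrative, not part of the attached VI:

```python
import socket
import threading
import time

# Sketch only: a reader thread blocked on a port is released by tearing
# down the session from a parallel "event loop", much like viTerminate
# aborts a pending VISA Read.
def blocked_reader(sock, results):
    try:
        data = sock.recv(1024)   # blocks, like a VISA Read with a long timeout
        results.append(data)
    except OSError:
        results.append("aborted")  # teardown surfaced as an error instead

a, b = socket.socketpair()
results = []
t = threading.Thread(target=blocked_reader, args=(a, results))
t.start()
time.sleep(0.1)                  # let the reader reach its blocking read

# The "stop button" event fires in the parallel loop: kill the session.
a.shutdown(socket.SHUT_RDWR)     # unblocks recv in the reader thread
t.join(timeout=2)
print(results)                   # the read returned instead of hanging
```

The key point, as in the VISA case, is that the teardown call is made from a thread/loop *other* than the one stuck in the read.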

  4. The last time I did data acquisition that required start triggering (ai/StartTrigger) was 5 years ago in a LV8.6 application.  I seem to remember that the dataflow would pause at the slave's DAQmx Start Task.vi until the master arrived at its DAQmx Start Task.vi, then each task would proceed to its DAQmx Read VI.


    Fast forward to LV2014, where I'm trying to help someone on another project get DAQmx start triggering working.  In LV2014 the process appears to operate differently: the slave's DAQmx Start Task.vi executes without pausing and continues to the DAQmx Read.vi, which doesn't return data until the master's AI task starts.


    Did something change or am I growing old and developing dementia?

  5. I can't really tell from the NI trace or the code.  Do you have the manual that describes the wakeup protocol that you could also attach?


    Is the touch panel a "real" serial port or a USB-to-serial converter?


    One problem I've had in the past with USB comms (or USB-to-serial comms) was the OS putting USB ports to sleep.  Not sure if it could/would do this on a "real" COM port.  Check the Advanced Power Settings in Control Panel\All Control Panel Items\Power Options\Edit Plan Settings and disable this everywhere.  You may also have to disable something in the BIOS.  The worst part about the USB sleep feature is that the more efficient you make your program, the more likely the OS is to power down the USB port.

  6. As a workaround, what if you open with the option flag set to 0x100?  It seems to release memory then.  I don't know if it's bad form, or opens another can of worms, to open with "Prepare to call and collect" but never collect; the documentation seems to indicate so.



    If you use this option flag (0x100), you must include one Wait On Asynchronous Call node for every call that you begin with a Start Asynchronous Call node to ensure that LabVIEW does not retain any started calls in memory indefinitely.

  7. For me the event structure is in a state machine, and after getting an event it will go and do what it needs to.  During this time the UI is no longer locked and will respond.  So one solution is to make the code in the event structure very minimal so it locks the UI for only a fraction of a millisecond.


    Obviously this depends on the circumstances, but what mechanism do you use to execute the "do" portion of code that could hold up the event structure?  Do you just pipe information into a parallel queued state machine or actor?  Is there a post on lavag or on the dark side that lays out the pros and cons of this (using user events) versus directly piping the "do" into a separate parallel queued state machine queue?


  8. Once you have an image inside an IMAQ ref, never, never manipulate it outside of IMAQ (use only the IMAQ functions like copy, extract, etc.). Crossing the IMAQ boundary (either direction) causes huge performance hits. As AQ's signature states, "we write C++ so you don't have to".



    What type of performance hit would you expect when manipulating pixels using IMAQ ImageToEDVR and an IPE?  Curious as to the difference between an algorithm implemented with this technique versus a C DLL call, but I'm not near a development environment with Vision.

  9. All three solutions have similar performance in all my scenarios, except when I limit the consumer loop to 25 Hz; in that case the producer in 1. is also limited to 25 Hz. The trivial solution shows image corruption in some cases.


    Except in this case, I never see the producer loop being faster than the consumer; they both stay at roughly 80 Hz, while there is some margin: when I hide the display window, the producer goes up to its max speed (200 Hz in this benchmark). When the CPU is doing other things, the rates go down to the same values at the same time, as if both loops were synchronized. This is quite strange, because in both 2. and 3. the producer loop rate should be independent of the consumer.


    The consumer really does only display, so there's no reason it would slow down the producer like this... It looks as if there's a global lock on the IMAQ functions. Everything is shared reentrant. The producer is part of an actor with its execution system set to "data acquisition", and the consumer is in the main VI.


    Are the Matrox DLL calls thread safe? Are you making any DLL calls in your image processing? Is it possible they are executing in the user interface thread?

  10. CharlesB... I can't figure out where your race condition is.  I'm also not sure why you need all the extra mechanisms (semaphore, DVR, status) when you can achieve the same thing using 2 simple queues as shown in my example.  Plus, the 2-queue approach guarantees you cannot work on the image being displayed until it is put back in the camera/processing pipeline.  IMHO it is a simpler and easier-to-debug solution.  The other thing my solution does is allow you to do the image processing in parallel with your acquisition.
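Since LabVIEW diagrams can't be shown here, the 2-queue idea can be sketched with a Python stand-in (the queue names and the `"buf*"` strings in place of IMAQ refs are my inventions, not the attached example): one queue holds free buffers for the acquisition side, the other holds filled buffers for the display side, so a buffer currently being displayed can never be grabbed into.

```python
from queue import Queue

# Hedged sketch of the 2-queue producer/consumer pattern.
free_q = Queue()     # buffers available to the producer (acquisition)
display_q = Queue()  # filled buffers waiting for the consumer (display)

for ref in ("buf0", "buf1", "buf2"):   # pre-allocated image buffers
    free_q.put(ref)

def acquire_frame(n, ref):
    """Pretend to grab frame n into the given buffer."""
    return (ref, f"frame{n}")

shown = []
for n in range(6):
    ref = free_q.get()                    # blocks if display holds every buffer
    display_q.put(acquire_frame(n, ref))  # hand the filled buffer to display
    buf, frame = display_q.get()          # consumer: display the frame...
    shown.append(frame)
    free_q.put(buf)                       # ...then recycle the buffer

print(shown)  # ['frame0', 'frame1', 'frame2', 'frame3', 'frame4', 'frame5']
```

In the real diagram the producer and consumer run in parallel loops; the blocking dequeue from the free-buffer queue is what throttles acquisition and prevents the displayed image from being overwritten.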

  11. ShaunR - I was unaware that you could use events like that with IMAQ I/O refs... very nice.  I will have to remember that for my next vision app.


    While the Godwin Global approach is nice, I think there are two issues: 1) I believe the poster is not using the IMAQ camera interface (Matrox proprietary instead), and 2) somehow his IMAQ image display is getting corrupted by newer images when it is updated faster (400 fps) than it can be painted by the OS.


    I'm not suggesting the "triple buffering" approach is the proper solution here, but I am contributing in the hope that he can see that a "simple" LabVIEW queue approach can work.


    PS: I'm surprised ShaunR doesn't have a "Cantankerous Callback" approach, and I haven't seen any Actor Framework approaches with multiple actors.

  12. I think you are overthinking this. The inherent nature of a queue is your lock. Only place the IMAQ ref on the queue when the grab is complete, and make the queue a maximum length of 3 (although why not make it more?). The producer will wait until there is at least one space left when it tries to place a 4th ref on the queue (because it is a fixed-length queue). If you have multiple grabs that represent 1 consumer retrieval (3 grabs, then the consumer takes all three), then just pass an array of IMAQ refs as the queue element.


    See my code
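The fixed-length-queue-as-lock idea translates to any bounded queue. A minimal Python sketch of the throttling behavior described above (the `img*` names are placeholders, not IMAQ refs; a non-blocking put stands in for the producer's blocking wait):

```python
from queue import Queue, Full

# A maxsize-3 queue acts as the lock: a 4th enqueue cannot proceed
# until the consumer has dequeued a ref.
q = Queue(maxsize=3)
for ref in ("img0", "img1", "img2"):
    q.put(ref)                 # three grabs complete, three refs queued

try:
    q.put_nowait("img3")       # a real producer would block here (q.put)
    overflowed = True          # until the consumer makes space
except Full:
    overflowed = False

print(overflowed)  # False: the queue's fixed length throttles the producer
```

In the LabVIEW version, Obtain Queue with a maximum size of 3 and a blocking Enqueue Element give the same guarantee without any explicit semaphore.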
