By Maurice Rahme
I'm trying to implement object tracking in LabVIEW. I've written a VI (attached) that detects and locates the centroid of elliptical shapes (my circular subject can be rotated, so it may appear as an ellipse). However, the detection is not successful on every frame.
For this reason, I would like to use my object detection method to supply an ROI reference to an object tracker, with the detector re-running every 10-20 frames or so to make sure the tracker has not accumulated too much error. Any advice on how I can do this? Thanks!
Additionally, any criticism or advice on my object detection method would be appreciated.
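The detect-then-track pattern described above is independent of LabVIEW itself, so here is a minimal sketch of the control flow in Python. The `detect` and `update` callables are placeholders standing in for the poster's ellipse detector and whatever per-frame tracker is used; all names are illustrative, not part of any actual VI.

```python
def track(frames, detect, update, redetect_every=15):
    """Run the full `detect` step on the first frame and on every
    `redetect_every`-th frame; in between, propagate the last position
    with the cheap per-frame `update` step. If a scheduled re-detection
    fails, the last known position is kept. Returns one (x, y) estimate
    per frame, or None while nothing has been found yet."""
    roi = None
    out = []
    for i, frame in enumerate(frames):
        if roi is None or i % redetect_every == 0:
            found = detect(frame)       # expensive, robust detector
            if found is not None:
                roi = found             # re-anchor the tracker on it
        elif roi is not None:
            roi = update(frame, roi)    # cheap frame-to-frame tracking
        out.append(roi)
    return out
```

In LabVIEW terms this maps to a while loop with a shift register holding the ROI, a case structure selecting detector vs. tracker based on `i mod N`, and the detector output overwriting the shift register only when detection succeeds.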
We have started development of our own embedded MIPI camera module for the Raspberry Pi, built around an industrial image sensor. We are currently in the specification phase and are interested in your preferences, so that we can develop a product that fits your requirements. If you have some time, please consider filling in this 4-question questionnaire.
For research purposes, I am using an Imaging Source camera (DMK 33Ux249) to capture a laser beam. I am trying to write code that sets the exposure automatically according to the power level of the laser beam. I used Vision Acquisition to capture a live video image and tried opening the Vision Acquisition front panel, but I couldn't figure out how to set the exposure level automatically. Basically, the whole task is:
1. Capture the live image.
2. Set the exposure time according to the laser beam profile.
3. Remember the exposure time and set it again according to the next frame or beam profile.
If anybody has previously worked on this or has an idea how to solve it, please let me know.
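One common way to do step 2 and 3 is a simple proportional exposure update: scale the current exposure time so the brightest pixel of the beam profile lands near a target count, clamped to the camera's exposure limits. Here is a hedged sketch of that single update step in Python; the target value and the limits below are illustrative placeholders, not the DMK 33Ux249's actual specs (those come from the camera driver/attributes).

```python
def auto_exposure(exposure_us, frame, target=200, lo=20, hi=100000):
    """One step of a simple auto-exposure loop.

    exposure_us -- current exposure time (microseconds, example units)
    frame       -- flat iterable of pixel values (8-bit assumed here)
    target      -- desired peak pixel value, below saturation (255)
    lo, hi      -- camera exposure limits (illustrative numbers)
    """
    peak = max(frame)                 # brightest pixel in the beam profile
    if peak == 0:
        return hi                     # nothing visible: open up fully
    new = exposure_us * target / peak # proportional correction
    return max(lo, min(hi, new))      # clamp to the valid exposure range
```

In the acquisition loop you would call this once per frame and carry the returned value into the next iteration (a shift register in LabVIEW), which covers "remember the exposure time and set it again". The camera's exposure attribute itself is set via an IMAQdx property node rather than the Vision Acquisition front panel.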
I am trying to remove the background and normalize the image data. I have attached the image here.
All I want as the end result is a normalized image with no background.
Finally, I want to compare the beam profile before and after.
Has anybody worked on this previously?
Any help would be appreciated.
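For reference, the background-removal-plus-normalization step itself is short; here is a minimal NumPy sketch of one common approach. It assumes the background can be estimated as the median pixel value (reasonable when most of the frame is background); if a separate dark/background frame is available, that frame would replace the median estimate.

```python
import numpy as np

def remove_background_and_normalize(img):
    """Subtract an estimated background level, clip negatives,
    and scale the remaining beam profile to the range [0, 1]."""
    img = np.asarray(img, dtype=float)
    background = np.median(img)               # background level estimate
    cleaned = np.clip(img - background, 0, None)
    peak = cleaned.max()
    if peak == 0:
        return cleaned                        # blank frame: nothing to scale
    return cleaned / peak                     # normalized profile in [0, 1]
```

Comparing the beam profile before and after is then just plotting a line cut of `img` and of the returned array.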
Thanks in Advance