By Maurice Rahme
I'm trying to implement object tracking in LabVIEW. I've written a VI (attached) to detect and locate the centroid of elliptical shapes (my circular subject can be rotated and therefore appear as an ellipse). However, the detection is not successful on every frame.
For this reason, I would like to use my object detection method to supply the ROI for an object tracker, re-running the detector every 10-20 frames or so to make sure the tracker has not accumulated too much error. Any advice on how I can do this? Thanks!
Additionally, any criticism or advice on my object detection method would be appreciated.
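In case a sketch of the re-seeding schedule helps, here is one way to structure it (Python/NumPy pseudocode, not LabVIEW; `detect_centroid` and `cheap_update` are hypothetical stand-ins for your ellipse detector and for whatever fast tracker you choose):

```python
import numpy as np

def detect_centroid(frame, thresh=128):
    # Hypothetical stand-in for the full ellipse-fit detector in the VI:
    # centroid of all above-threshold pixels. Returns None on failure.
    ys, xs = np.nonzero(frame > thresh)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

def cheap_update(frame, last_pos, win=10):
    # Toy "tracker": centroid restricted to a window around the last
    # position -- a stand-in for a real frame-to-frame tracker step.
    x, y = int(round(last_pos[0])), int(round(last_pos[1]))
    y0, x0 = max(y - win, 0), max(x - win, 0)
    local = detect_centroid(frame[y0:y + win + 1, x0:x + win + 1])
    if local is None:
        return None
    return (local[0] + x0, local[1] + y0)

def track(frames, redetect_every=15):
    # The key idea: run the cheap tracker every frame, but re-seed from the
    # full (slow) detector every `redetect_every` frames so drift is bounded.
    pos, trajectory = None, []
    for i, frame in enumerate(frames):
        if pos is None or i % redetect_every == 0:
            result = detect_centroid(frame)   # scheduled full detection
        else:
            result = cheap_update(frame, pos) # fast tracker step
        if result is not None:
            pos = result
        trajectory.append(pos)
    return trajectory
```

In LabVIEW the equivalent structure is a while loop with the last position and a frame counter in shift registers, a case structure selecting detector vs. tracker based on `counter mod N = 0`, and the detector's output overwriting the tracker's ROI on those frames.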
I have attached the VI in which I want to implement auto exposure and set the resulting exposure value.
Basically, this is a program for laser beam analysis on a live image.
I want to set the exposure time according to the laser beam intensity.
If anyone has worked on this before, please help me with it. I am new to image processing.
The other image I attached shows the part I want to implement in this program.
Please help me solve this problem.
For research purposes, I am using an Imaging Source (DMK 33UX249) camera to capture a laser beam. I am trying to write code that sets the exposure automatically according to the power level of the laser beam. I used Vision Acquisition to capture the live video image and tried opening the front panel of the Vision Acquisition express VI.
But I couldn't figure out how to set the exposure level automatically. Basically, the whole task is:
1. Capture a live image.
2. Set the exposure time according to the laser beam profile.
3. Remember the exposure time and set it again according to the next frame or beam profile.
If anybody has worked on this before or has an idea how to solve this issue, please let me know.
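The three steps above amount to a simple feedback loop: measure the frame's peak intensity, rescale the exposure to push that peak toward a target level, and carry the exposure value over to the next iteration. A minimal sketch (Python/NumPy, not LabVIEW; the exposure limits are made-up placeholders, so check the camera's actual attribute range in MAX or IMAQdx):

```python
import numpy as np

# Hypothetical exposure limits in microseconds -- replace with the real
# range reported by the DMK 33UX249's exposure attribute.
MIN_EXP_US, MAX_EXP_US = 10.0, 1_000_000.0

def next_exposure(exposure_us, frame, target=200.0):
    # Assume peak pixel value scales roughly linearly with exposure time,
    # so rescale the current exposure to bring the peak toward `target`
    # (on an 8-bit scale, 200 leaves headroom below saturation at 255).
    peak = float(np.max(frame))
    if peak <= 0:
        return min(exposure_us * 2.0, MAX_EXP_US)  # nothing visible: brighten
    return float(np.clip(exposure_us * target / peak,
                         MIN_EXP_US, MAX_EXP_US))
```

In LabVIEW this maps to: grab a frame, take the image's maximum pixel value, compute the new exposure as above, and write it back to the camera with an IMAQdx property/attribute node inside the acquisition loop, keeping the current exposure in a shift register between iterations (that shift register is step 3, "remember the exposure time").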
I am trying to remove the background and normalize the image data. I have an image, which I attached here.
All I want as the end result is a normalized image with no background.
Finally, I want to compare the beam profile before and after.
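One common approach is to estimate the background from a region that contains no beam (e.g. the image border), subtract it, and rescale. A minimal sketch under that assumption (Python/NumPy, not LabVIEW; the border-based background estimate is my assumption, not something from your VI):

```python
import numpy as np

def remove_background_normalize(img, border=10):
    # Estimate the background as the mean of a border frame assumed to be
    # beam-free, subtract it, clip negatives, then scale the result to [0, 1].
    img = np.asarray(img, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    background = img[mask].mean()
    out = np.clip(img - background, 0.0, None)
    peak = out.max()
    return out / peak if peak > 0 else out
```

In LabVIEW the same steps would be an image subtract (constant or dark frame), a clip at zero, and a divide by the image maximum; comparing the row/column profiles of the input and the output then gives the before/after beam-profile check you describe.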
Has anybody worked on this before?
Any help would be appreciated.
Thanks in advance.
I am trying to use image convolution inside an FPGA. My image size is around 6k x 2k. The convolution is applied correctly up to 2600 pixels in the x resolution; after that, the values seem to be missing the previous rows' data.
In detail: since convolution is a neighborhood operation, image data needs to be stored for the operation. But there seems to be an inadvertent error whereby only 2600 pixels per row are stored inside the FPGA, so the filtered output is calculated assuming the remaining pixels are 0.
I have tried different image sizes, different convolution kernels, and different targets (cRIO-9030 and IC 3173). The results are all the same.
I have attached a screenshot of FPGA VI and an example image.
The example image shows a 4000 x 2500 input image with a constant pixel value of 16. The kernel is 3x3 with all values 1 and divider = 1. The RT image is processed with IMAQ Convolute on the RT controller and has the value 144 [(9*16)/1] for every pixel. But the FPGA-processed image (zoomed in) has 144 up to pixel 2597, then 112 at 2598 (7*16: one column of the two previous rows missing), 80 at 2599 (5*16: two columns of the two previous rows missing), and 48 from then on (all three columns of the two previous rows missing; the current row is always present). This shows that data from the previous rows is missing after index 2600.
Is there a mistake in the code, or is a workaround available?
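The observed sequence 144, 112, 80, 48 is exactly what a 3x3 convolution produces when the two buffered previous rows are truncated at a fixed column while the current row remains complete. The following simulation reproduces it (Python sketch; `BUF = 2599` is a buffer depth inferred from your numbers, not a value read from the VI):

```python
import numpy as np

BUF = 2599   # hypothetical per-row line-buffer depth inside the FPGA
W = 4000     # image width from the example

row = np.full(W + 2, 16, dtype=int)  # current row: always fully available
prev = row.copy()
prev[BUF:] = 0                       # buffered previous rows cut off at BUF

def conv_sum(x):
    # 3x3 all-ones kernel, divider 1, at column x: two rows come from the
    # (truncated) line buffer, the third is the current row.
    window = slice(x - 1, x + 2)
    return 2 * int(prev[window].sum()) + int(row[window].sum())

print([conv_sum(x) for x in (2597, 2598, 2599, 2600, 3000)])
# -> [144, 112, 80, 48, 48], matching the FPGA output you describe
```

Since the model matches, the likely culprit is the depth of the line buffer / FIFO feeding the convolution on the FPGA: if it is configured (or defaults) to roughly 2600 pixels, rows wider than that are silently zero-padded. Checking that the buffer depth and any "maximum x resolution" configuration of the FPGA convolution code match the full 6k row width would be the first thing to try.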