Object Tracking using Detection as ROI ref


Recommended Posts

Hello, 

I'm trying to implement object tracking in LabVIEW. I've written a VI (attached) to detect and locate the centroid of elliptical shapes (my circular subject can be rotated, so it may appear as an ellipse). However, the detection does not succeed on every frame.

For this reason, I would like to use the object detection method I've written as an ROI reference for an object tracker, whereby the object detector re-runs every 10-20 frames or so to make sure the tracker has not accumulated too much error. Any advice on how I can do this? Thanks! 

Additionally, any criticism or advice on my object detection method would be appreciated. 

Vision.vi
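The detect-every-N-frames pattern described above can be sketched generically. The sketch below is Python rather than LabVIEW, and `detect` / `make_tracker` are placeholders for the poster's ellipse-detection VI and any off-the-shelf tracker (for example one of OpenCV's trackers); it only illustrates the control flow of periodically re-seeding the tracker from the detector.

```python
# Sketch of "run the detector every N frames to re-seed the tracker".
# detect(frame) -> (x, y, w, h) bounding box, or None on failure
#                  (stands in for the poster's ellipse-detection VI)
# make_tracker() -> object with .init(frame, box) and .update(frame)
#                   (e.g. an OpenCV tracker; here it is just an interface)

REDETECT_EVERY = 15  # frames between detector runs; tune for your drift rate

def track_with_redetection(frames, detect, make_tracker, every=REDETECT_EVERY):
    tracker, box, out = None, None, []
    for i, frame in enumerate(frames):
        if tracker is None or i % every == 0:
            det = detect(frame)
            if det is not None:          # detection succeeded: re-seed
                box = det
                tracker = make_tracker()
                tracker.init(frame, box)
        if tracker is not None:
            ok, box = tracker.update(frame)
            if not ok:                   # tracker lost the object
                tracker = None           # force re-detection on the next frame
        out.append(box)
    return out
```

On frames where the detector fails, the tracker simply keeps running on its own, so occasional missed detections (as in the original post) are tolerated; only the periodic successful detections reset accumulated drift.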

  • 3 months later...

Hard to comment much without some example images. I think you are on the right track in pairing the object detection with a more primitive image-processing step to recenter the ROI and mitigate accumulated error, but I can't really say more without a handful of images for context.



  • Similar Content

    • By GeT-Cameras
      Hello everyone,
      We have started development of our own embedded MIPI camera module for the Raspberry Pi with an industrial image sensor. We are currently in the specification phase and are interested in your preferences, so we can develop a product that fits your requirements. If you have some time, please consider filling in this 4-question questionnaire.
       
    • By Shaun07
      Hello,
       
      Here I have attached a VI in which I want to implement auto exposure and then apply the resulting exposure value.
      The program performs laser beam analysis on a live image.
      I want to set the exposure time according to the laser beam intensity.
      If anyone has worked on this before, please help me with it. I am new to image processing.
       
      The other image I attached shows the part I want to implement in this program.
       
      Thanks,
      Basic_Camera_Live_Beam_Information.vi

    • By Shaun07
      Hello,
      For research purposes, I am using an Imaging Source (DMK 33Ux249) camera to capture a laser beam. I am trying to write code that adjusts the exposure automatically according to the power level of the beam. I used Vision Acquisition to capture the live video image and tried opening its front panel,
      but I couldn't figure out how to set the exposure level automatically. The whole task is:
      1. Capture the live image.
      2. Set the exposure time according to the laser beam profile.
      3. Remember that exposure time and set it again according to the next frame or beam profile.
      If anybody has worked on this before or has an idea how to solve it, please let me know. 
       
      Thanks 
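The three numbered steps above amount to a feedback loop: measure the frame, scale the exposure, and remember it for the next frame. Below is a minimal proportional sketch in Python, assuming a roughly linear sensor response; the target value, exposure limits, and peak measurement are illustrative assumptions, and in the actual VI the new exposure would be written back through the camera's IMAQdx attributes.

```python
# Proportional auto-exposure sketch (steps 1-3 above). Assumes the sensor
# response is roughly linear in exposure time; TARGET_PEAK and the limits
# are illustrative values, not camera specifications.

TARGET_PEAK = 200              # desired peak pixel value for an 8-bit image
MIN_US, MAX_US = 50, 100000    # exposure limits in microseconds (assumed)

def next_exposure(current_us, frame_peak):
    """Scale the exposure so the measured frame peak lands near TARGET_PEAK."""
    if frame_peak <= 0:
        return MAX_US          # no signal at all: open the exposure fully
    proposed = current_us * TARGET_PEAK / frame_peak
    return min(MAX_US, max(MIN_US, proposed))

def run(frame_peaks, start_us=1000):
    """Step 3: remember the exposure between frames and keep adjusting it."""
    exposure = start_us
    history = []
    for peak in frame_peaks:
        exposure = next_exposure(exposure, peak)
        history.append(exposure)
    return history
```

A saturated frame (peak pinned at 255) will still drive the exposure down each iteration, which is the usual way such a loop recovers from an over-exposed start.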
    • By Shaun07
      Hello All,
       
      I am trying to remove the background and normalize the image data. I have attached the image here. 
      All I want as the end result is a normalized image with no background. 
      Finally, I want to compare the beam profile before and after.
       
      Has anybody worked on this before?
      Any VI or other pointers would be appreciated.
       
      Thanks in Advance 
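As a sketch of the processing being asked about: subtract a background frame, clip negatives, and scale by the peak. This is plain NumPy for illustration (the LabVIEW equivalent would use the IMAQ image-arithmetic VIs); using the global peak as the normalization reference is an assumption.

```python
# Background removal + peak normalization sketch in NumPy. In the actual
# LabVIEW program this would map onto IMAQ image-arithmetic VIs; using the
# global peak as the normalization reference is an assumption.
import numpy as np

def remove_background_and_normalize(image, background):
    """Subtract a background frame, clip negatives, scale the result to [0, 1]."""
    corrected = image.astype(np.float64) - background.astype(np.float64)
    corrected = np.clip(corrected, 0.0, None)   # no negative intensities
    peak = corrected.max()
    if peak == 0:
        return corrected                        # blank frame: nothing to scale
    return corrected / peak                     # normalized beam profile
```

Running the same function on the raw and background-corrected images gives the before/after beam profiles the post asks to compare.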

    • By prabhakaran
      Hi,
       
       
      I am trying to use image convolution inside an FPGA. My image size is around 6k x 2k. The convolution is applied properly up to about 2600 pixels in the x direction; after that, the values seem to be missing the previous-row data. 
       
      In detail: since convolution is a matrix operation, image data needs to be buffered for the operation. But there appears to be an inadvertent limit of 2600 pixels stored per row inside the FPGA, and hence the filtered output is calculated as if the remaining pixels were 0. 
       
      I have tried different image sizes, different convolution kernels, and different targets (cRIO 9030 and IC 3173). All results are the same. 
       
      I have attached a screenshot of the FPGA VI and an example image. 
       
      The example image is a 4000 x 2500 input with every pixel equal to 16. The kernel is 3x3 with all values 1 and divider = 1. The RT image, processed with IMAQ Convolute on the RT controller, has the value 144 [(9*16)/1] for every pixel. But the FPGA-processed image (zoomed in) has 144 up to pixel 2597, then 112 at 2598 (7*16, i.e. one column of the two previous rows missing), 80 at 2599 (5*16, two columns of the two previous rows missing), and 48 from 2600 onward (all three columns of the two previous rows missing; the current row is always present). This shows that data from the previous rows is missing after index 2600.
       
      Is there some mistake in the code or any workaround available?
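The reported values are consistent with the previous-row line buffers simply being too short. A plain-Python model of that hypothesis, with an effective buffer length of 2599 inferred from where the staircase starts, reproduces the exact 144, 112, 80, 48 sequence; it models the suspected behavior, not the NI FPGA IP itself.

```python
# Model of the suspected failure: the two previous-row line buffers only
# hold BUF pixels, so columns past the buffer read back as 0. BUF = 2599 is
# inferred from where the reported staircase starts; it is a hypothesis
# about the behavior, not taken from the NI FPGA IP itself.

VALUE, BUF = 16, 2599

def convolved_value(x):
    """3x3 ones-kernel sum at column x of an interior row of a uniform image."""
    current_row = 3 * VALUE                          # current row: always present
    stored = sum(1 for c in (x - 1, x, x + 1) if 0 <= c < BUF)
    previous_rows = 2 * stored * VALUE               # two buffered rows, truncated
    return current_row + previous_rows

staircase = [convolved_value(x) for x in (2597, 2598, 2599, 2600)]
# matches the reported sequence 144, 112, 80, 48
```

If this model matches, the thing to check is how the row/line buffers (block RAM or FIFO depth) in the FPGA VI are configured: sizing them for the full 6k row width, or processing the image in overlapping horizontal tiles, would be possible workarounds. This is a suggested diagnosis, not a confirmed fix.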


