
Using Edge detection to help with applying a threshold to an image



Hello,

 

I am having a bit of an issue applying a threshold to an image and getting reliable results. I think using edge detection before doing any other image processing could make this work, but I am at a loss for how to implement it.

 

If you look at the image below you will see the root of my problem. The top and bottom sets of images were captured only a second or two apart, but the threshold sees them differently, while the edge detection correctly finds the same edges in both.

 

I only care about the range of pixels that usually makes up the brightest part of the image (hence the thresholding), but the low end of that range changes from frame to frame and I am left with an incorrect determination.

 

My question is, does anyone know of a good way to use edge detection (or maybe some other method that I am not aware of) to help me with this problem?

 

4BN1c3q.png


Can you tell us a bit more about what your desired measurement is? Are you just trying to get the area of the white stuff? Usually you want to make this as easy as possible for the software ("crap in, crap out"), so control the lighting and use a fixed ROI. Second, you might want an auto-threshold method that adapts dynamically in case the lighting control isn't perfect. Third, you'll notice the top of your image is much darker than the bottom. Your eyes are very good at subtracting shadows (there are well-known optical illusions based on that effect), so it's easy to miss. You'll need a different threshold for different parts of the image (preferably a gradient) to compensate for this.
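To make the shading comment concrete: one common approach (a NumPy sketch of the general idea, not LabVIEW code and not anything from this thread) is to flatten the top-to-bottom gradient first, then let an automatic method such as Otsu's pick the threshold on the flattened image:

```python
import numpy as np

def flatten_shading(img):
    """Remove a top-to-bottom brightness gradient by subtracting each
    row's mean, then re-centering on the global mean."""
    img = img.astype(np.float64)
    flat = img - img.mean(axis=1, keepdims=True) + img.mean()
    return np.clip(flat, 0, 255).astype(np.uint8)

def otsu_threshold(img):
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    probs = hist / hist.sum()
    bins = np.arange(256)
    omega = np.cumsum(probs)          # probability of the dark class
    mu = np.cumsum(probs * bins)      # cumulative mean of the dark class
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

Per-row flattening assumes the shading varies mainly top-to-bottom, as in the posted image; a threshold that itself varies across the image (the "gradient" suggestion above) is the more general fix.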


 

Thanks for the reply!  I have a couple of different ROIs set up, and I am measuring the amount of space inside the box that the "white stuff" occupies (see the image below). I was a little vague about how I am actually doing the thresholding: I am not just using the Threshold VI and hoping for the best. I am using an algorithm (based on a Gaussian mixture model) that adapts dynamically, but as you said, with the lighting/contrast/brightness of the image always changing it does not always perform well, whereas the edge detection seems to always find the correct edges.
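For anyone curious what a Gaussian-mixture threshold looks like in principle, here is a hypothetical NumPy sketch (not the poster's LabVIEW code): fit two Gaussians to the pixel intensities with EM, then put the threshold where the two weighted densities cross between the means:

```python
import numpy as np

def gmm_threshold(pixels, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to pixel intensities with EM,
    then threshold where the two weighted densities cross between the means."""
    x = np.asarray(pixels, dtype=np.float64).ravel()
    mu = np.percentile(x, [25, 75])           # crude dark/bright init
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    # evaluate both weighted densities on a grid between the two means and
    # take the point where they are closest as the crossing/threshold
    lo, hi = np.sort(mu)
    grid = np.linspace(lo, hi, 512)
    dens = w * np.exp(-(grid[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    dark, bright = dens[:, np.argmin(mu)], dens[:, np.argmax(mu)]
    return grid[np.argmin(np.abs(dark - bright))]
```

The failure mode the poster describes fits this model: when lighting drift makes the two intensity populations overlap, the fitted crossing point moves, while edge locations stay put.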

 

I suppose I was hoping I could use the edge detection as the base for some kind of mask or something...

 

9sjqlj9.png

Edited by cathair28

Using the same ROIs (focus on 1 and 2 for now), you can threshold your edge-detection results to get a binary representation of the edges.  Then I might suggest using those results to identify a region inside your desired object and a region outside it, and using their original pixel values as inputs to your thresholding of the original image.  Alternatively, you could use the edge-detection results to create a blob that roughly matches your desired object, and use that as an image mask on the original image before doing further processing.

 

Nothing is set in stone with image processing; there are many ways to achieve a result, and some are better or more robust than others.  As mentioned, you are always best off ensuring that the best, most repeatable images (positioning, lighting, working distance, etc.) are acquired to begin with.  I often tell customers that it takes ten times as much work to correct something in processing as it does in hardware.  Regardless of the actual multiplier, it gets the point across.



What happens when you threshold your "edges" image? Can you just use particle analysis to fill holes in that image?

 

Alternative path: it looks like a regular threshold should work on your ROIs if you select the threshold manually, right?  Maybe do an erosion to get rid of the small particles? If that's the case, maybe you can find some safe areas where you know there won't be any white (the corners, maybe?) and use those to set your threshold.
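The safe-area suggestion is easy to sketch. In this hypothetical NumPy version (the corner size and the k multiplier are made-up tuning parameters), you sample a patch guaranteed to be background and set the threshold a few standard deviations above its mean:

```python
import numpy as np

def threshold_from_safe_area(img, corner=20, k=3.0):
    """Estimate a threshold from a patch known to contain only background
    (here: the top-left corner): background mean plus k standard deviations.
    `corner` and `k` are arbitrary tuning parameters for this sketch."""
    patch = np.asarray(img, dtype=np.float64)[:corner, :corner]
    return patch.mean() + k * patch.std()
```

Because the threshold is re-derived from each frame's own background, it tracks the frame-to-frame brightness drift described in the first post.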



You can use the edge detection with the IMAQ MagicWand VI bounded by your ROI boxes to produce the mask. It will fill to the edges of your region (either inside or outside your edge). You can then either use that as a mask or just count the pixels as a percentage of the area.
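For anyone without NI Vision at hand, the fill-to-the-edges behavior described here can be approximated by a seeded flood fill bounded by the edge pixels. This is a rough Python analogue of the concept only, not how the IMAQ MagicWand VI is actually implemented:

```python
import numpy as np
from collections import deque

def magic_wand_fill(edges, seed):
    """Grow a region from `seed` in 4-connectivity, stopping at edge pixels --
    a rough analogue of filling to the edges of a detected boundary."""
    h, w = edges.shape
    filled = np.zeros((h, w), dtype=bool)
    filled[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not edges[ny, nx] and not filled[ny, nx]:
                filled[ny, nx] = True
                q.append((ny, nx))
    return filled

def fill_percentage(edges, seed):
    """Filled pixels as a percentage of the ROI area."""
    return 100.0 * magic_wand_fill(edges, seed).sum() / edges.size
```

Seeding inside the object gives the object's area; seeding outside gives the complement, matching the "either inside or outside your edge" behavior mentioned above.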



Thanks for the suggestion, I'll mess around with it and report back!


  • Similar Content

    • By Deon
      The Threshold Hysteresis VI allows an input (for example, a sensor reading) to drift outside an inner limit without generating an invalid flag; only if it drifts outside an outer limit does it become invalid. Conversely, once the input crosses back inside the outer limit it remains invalid until the value falls inside the inner limit.
      For simplicity, a single input parameter sets both the inner and outer limits; it is negated for the lower limit, since tolerance limits are generally equidistant from the nominal.
    • By Deon
      View File Threshold Hysteresis v1.0 LV8.6.1
      Submitter: Deon | Submitted: 11/10/2014 | Category: General
    • By Elancheran
      Hi Everyone,
      I am trying to play a video in reverse by decrementing the frame number passed to the IMAQ Read Frame function. It works, but the result is very choppy because every frame takes significant time to load; when I simply increment the frame number and play the video forward, it executes without any problem. I have attached the VI and info about the video. Could you please let me know why I am having a problem when displaying the video in reverse order?
      Playing AVI file.vi

    • By hutha
      New toolkit: OpenCV wrapper for Raspberry Pi (LinuxRT) for LabVIEW.
      A great learning platform for students and makers learning machine-vision applications with LabVIEW.
      It works with the LabVIEW Home 2014 edition and requires the LINX 3.0 toolkit.
      You can also run the NI-VISION toolkit on a Raspberry Pi board.
      1. QwaveCameraCV is a LabVIEW camera driver library for Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveCameraCV

      2. QwaveOpenCV is a set of OpenCV wrapper functions for LabVIEW on Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveOpenCV

      3. QwaveOpenCV Examples are examples using OpenCV (C/C++) and NI-VISION for Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveOpenCV-Examples


       
    • By Shaun07
      Hello,
       
      I need help changing the image type: how can I convert a grayscale image to a false-color image?
      I have attached two images: 1. the grayscale image, and 2. an example of what I want to convert it to (false color).
      Any help would be appreciated.
       
      Thanks,
      Parth Panchal 

