Showing results for tags 'machine vision'.

Found 13 results

  1. Hello everyone, we have started developing our own embedded MIPI camera module for the Raspberry Pi with an industrial image sensor. We are currently in the specification phase and are interested in your preferences, so that we can build a product that fits your requirements. If you have some time, please consider filling out this four-question questionnaire.
  2. Hello, I'm trying to implement object tracking in LabVIEW. I've written a VI (attached) to detect and locate the centroid of elliptical shapes (my circular subject can be rotated and therefore appear as an ellipse). However, the detection does not succeed on every frame. For this reason, I would like to use the object-detection method I've written as a ROI reference for an object tracker, with the detector running every 10-20 frames or so to make sure the tracker has not accumulated too much error (a minimal sketch of this re-detection scheme follows the list below). Any advice on how I can do this? Thanks! Additionally, any cr…
  3. Hello, I have attached the VI in which I want to do auto exposure and set that exposure value. This is a program for laser-beam analysis on a live image, and basically I want to set the exposure time according to the laser-beam intensity (a rough auto-exposure sketch follows the list below). The other image I attached shows the part I want to implement in this program. I am new to image processing, so if anyone has worked on this before, please help me out. Thanks, Basic_Camera_Live_Beam_Information.vi
  4. Hello, for research purposes I am using an Imaging Source (DMK 33Ux249) camera to capture a laser beam, and I am trying to write code for auto exposure according to the power level of the beam. I used Vision Acquisition to capture the live video image and tried to open its front panel, but I couldn't figure out how to set the exposure level automatically. The whole task is: 1. capture a live image; 2. set the exposure time according to the laser-beam profile; 3. remember the exposure time and set it again for the next frame or beam profile (the same sketch after the list applies here).
  5. Hello all, I am trying to remove the background and normalize the image data. I have attached the image here. All I want as an end result is a normalized image with no background, so that I can compare the beam profile before and after (a minimal background-removal sketch follows the list below). Has anybody worked on this before? Is there a VI for it? Any help would be appreciated. Thanks in advance.
  6. Hi, I am trying to use image convolution inside an FPGA. My image size is around 6k x 2k. The convolution is applied properly up to 2600 pixels in the x resolution; after that, the values seem to be missing the previous rows' data. In detail: since convolution is a matrix operation, the image data needs to be stored for the operation, but it seems there is an inadvertent error that stores only 2600 pixels per row inside the FPGA, and hence the filtered output is calculated as if the remaining pixels were 0 (a line-buffer sketch illustrating this follows the list below). I have tried different image sizes, different convolution kernels, and also in…
  7. Hi! I have a video of a laser beam spot. The images contain noise: I need only the central bright portion of the image, as the other circular portions are noise. After eliminating the noise, I have to find the area of the spot by counting the bright pixels in the image. I have tried to do this, but the output image after processing is faulty; ideally it should be black and red, since I applied a threshold. Can you help me with this? I also need help counting the bright pixels to find the area (a threshold-and-count sketch follows the list below). I have tried converting the image into a 1D array and then tried to add arr…
  8. Freelance/consultancy help needed at our Southern California location. I have a project where I need to take an existing LabVIEW solution and adapt it to make the application FDA 21 CFR Part 11 compliant. The application is a machine vision solution using the Vision Development Module; the vision part is working well and doesn't need to be replaced. The system uses machine vision to inspect pharmaceutical injection components. I'm looking for an engineer with experience in developing, qualifying, and validating Part 11-compliant systems, including the addition of audit trail and…
  9. I would like to try to upgrade my current CCD (pretty close to 640x480) to a higher-resolution model (1080p). Here are my two requirements: it has to work with LabVIEW/Vision software (it could interface through GigE/IP or whatever), and it has to fit inside a tube about this size (marker for scale): http://i.imgur.com/9i8nPRE.jpg. The housing tube can be increased in size up to about a 1.25" diameter; the tube runs back 2' until it opens up into an open space where we have a gigabit switch. The camera could just have a telescoping lens or be small enough to fit in a tube that size with the cables runnin…
  10. Hello. I'm doing a project in which I want to create the pattern that I attached: a pattern of nine points whose coordinates must be known. If anyone can give me any idea how to do this in LabVIEW, I would appreciate it (a small coordinate-grid sketch follows the list below). Thank you, regards.
  11. Hi, if someone can provide me with code for color quantization, I would be very grateful (a k-means quantization sketch follows the list below). http://en.wikipedia.org/wiki/Color_quantization Thank you.
  12. Hi, after finding the dominant color in an image (image file attached), I have to create a color-mask image using that dominant color, following the technique from the PDF I attached. Can you please help me perform a color mask on an image using the dominant color (a dominant-color masking sketch follows the list below)? Thank you, Madhubalan. A fast MPEG-7 dominant color extraction with new similarity.pdf
  13. Good afternoon, everybody. I'm new here and I'm currently working on a localization algorithm in LabVIEW. I tried to extract the wall contours from a simple floor plan to use in my algorithm later. At first I thought of doing it manually, but then I realized there are a lot of high-level VIs in LabVIEW for computer vision. For example, "Extract Contour" would give me the coordinates found in an image, so I wrote a simple piece of code to test it (a ROI-crop-and-contour sketch follows the list below). But first, I need to select a ROI. I used "Rectangle to ROI" and added the inputs programmatically, but the Extract Contour VI doesn't accept it as a vali…
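
The sketches below are referenced from the results above. They use Python with OpenCV/NumPy as a generic stand-in for the NI Vision / LabVIEW VIs discussed in the posts; file names, parameters, and helper functions are illustrative assumptions rather than the posters' actual code.

For result 2, a minimal sketch of periodic re-detection to correct tracker drift: the ellipse detector runs every N frames, and in between the spot is followed by template matching. detect_ellipse_centroid() is a placeholder for the poster's attached detector.

```python
# Periodic re-detection to keep a tracker from accumulating error (OpenCV 4.x assumed).
import cv2
import numpy as np

REDETECT_EVERY = 15          # re-run the detector every N frames (10-20 per the post)
PATCH = 40                   # half-size of the template patch around the centroid

def detect_ellipse_centroid(gray):
    """Threshold, find the largest contour, fit an ellipse, return its center (or None)."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if len(c) >= 5]          # fitEllipse needs >= 5 points
    if not contours:
        return None
    (cx, cy), _, _ = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    return int(cx), int(cy)

cap = cv2.VideoCapture("beam.avi")   # hypothetical file name
template, centroid = None, None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if frame_idx % REDETECT_EVERY == 0 or template is None:
        det = detect_ellipse_centroid(gray)
        if det is not None:                      # detection may fail on some frames
            centroid = det
            x, y = centroid
            template = gray[max(0, y - PATCH):y + PATCH, max(0, x - PATCH):x + PATCH]
    elif template is not None:
        # Track between detections with normalized cross-correlation (template matching).
        res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(res)
        centroid = (top_left[0] + template.shape[1] // 2,
                    top_left[1] + template.shape[0] // 2)
    frame_idx += 1
```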
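
For results 3 and 4, a rough sketch of a proportional auto-exposure loop driven by the measured beam intensity. grab_frame() and set_exposure_us() are hypothetical placeholders for whatever camera driver calls (for example, IMAQdx attribute writes) are actually available.

```python
# Simple proportional auto-exposure step: drive the beam peak toward a target grey level.
import numpy as np

TARGET_PEAK = 200          # desired peak grey level for an 8-bit sensor (255 would saturate)
GAIN = 0.4                 # loop gain; < 1 keeps the adjustment stable
EXP_MIN_US, EXP_MAX_US = 50, 100_000

def auto_exposure_step(frame: np.ndarray, exposure_us: float) -> float:
    """Return a new exposure time so the beam peak approaches TARGET_PEAK."""
    peak = float(np.percentile(frame, 99.9))     # robust peak, ignores a few hot pixels
    peak = max(peak, 1.0)                        # avoid division by zero on dark frames
    # Exposure scales roughly linearly with intensity while the sensor is not saturated.
    new_exposure = exposure_us * (1.0 + GAIN * (TARGET_PEAK / peak - 1.0))
    return float(np.clip(new_exposure, EXP_MIN_US, EXP_MAX_US))

# Hypothetical acquisition loop: remember the exposure and reuse it on the next frame.
# exposure = 1000.0
# while True:
#     frame = grab_frame()                      # hypothetical camera read
#     exposure = auto_exposure_step(frame, exposure)
#     set_exposure_us(exposure)                 # hypothetical driver attribute write
```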
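
For result 5, a minimal background-removal and normalization sketch: the background is estimated with a large median filter (or a separately acquired dark frame), and the residual is scaled to 0..1 so beam profiles before and after can be compared.

```python
# Background removal and normalization of a beam image ("beam.png" is hypothetical).
import cv2
import numpy as np

img = cv2.imread("beam.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Estimate the background with a large median blur (the beam is small relative to the kernel),
# or subtract a separately acquired dark frame if one is available.
background = cv2.medianBlur(img.astype(np.uint8), 51).astype(np.float64)
no_bg = np.clip(img - background, 0, None)

# Normalize to the 0..1 range so profiles before/after can be compared directly.
normalized = no_bg / no_bg.max() if no_bg.max() > 0 else no_bg

profile_before = img.sum(axis=0)        # column-wise beam profile of the raw image
profile_after = normalized.sum(axis=0)  # and of the background-free, normalized image
```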
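
For result 6, a NumPy model of why an undersized line buffer corrupts the convolution beyond column 2600. This only illustrates the symptom described in the post; it is not FPGA code.

```python
# Crude software model of an FPGA line buffer that stores only BUFFER_WIDTH pixels per row,
# so columns beyond it effectively convolve against zeros.
import cv2
import numpy as np

BUFFER_WIDTH = 2600
kernel = np.ones((3, 3)) / 9.0        # example smoothing kernel

img = np.random.randint(0, 255, size=(100, 6000)).astype(np.float64)

# Correct result: the whole row is available to the convolution.
reference = cv2.filter2D(img, -1, kernel)

# Line-buffer-limited result: data beyond BUFFER_WIDTH is read back as 0.
truncated = img.copy()
truncated[:, BUFFER_WIDTH:] = 0        # crude model of the undersized buffer
broken = cv2.filter2D(truncated, -1, kernel)

print(np.allclose(reference[:, :BUFFER_WIDTH - 2], broken[:, :BUFFER_WIDTH - 2]))  # True
print(np.allclose(reference[:, BUFFER_WIDTH:], broken[:, BUFFER_WIDTH:]))          # False
```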
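
For result 7, a threshold-and-count sketch: smooth, apply an Otsu threshold, keep the largest connected component as the spot, and count its pixels; no reshaping into a 1D array is required.

```python
# Isolate the central bright spot and measure its area in pixels.
import cv2
import numpy as np

gray = cv2.imread("spot_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name

# Smooth first so isolated noisy pixels do not survive the threshold.
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep only the largest connected component (the central spot); smaller blobs are noise.
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
if n > 1:
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # label 0 is the background
    bw = np.where(labels == biggest, 255, 0).astype(np.uint8)

area_px = cv2.countNonZero(bw)          # spot area in pixels
print("spot area:", area_px, "pixels")
```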
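
For result 10, a small sketch that generates a 3 x 3 grid of nine points with known coordinates and draws it into an image; the spacing and image size are made-up values.

```python
# Generate a nine-point (3 x 3) dot pattern whose coordinates are known by construction.
import cv2
import numpy as np

IMG_W, IMG_H = 640, 480
SPACING = 120                          # distance between neighbouring points, in pixels

# Coordinates of the 3 x 3 grid, centered in the image.
xs = IMG_W / 2 + SPACING * np.array([-1, 0, 1])
ys = IMG_H / 2 + SPACING * np.array([-1, 0, 1])
points = [(int(x), int(y)) for y in ys for x in xs]          # nine (x, y) pairs

canvas = np.zeros((IMG_H, IMG_W), dtype=np.uint8)
for x, y in points:
    cv2.circle(canvas, (x, y), 8, 255, -1)                   # draw each point as a filled dot

print(points)                                                # the known coordinates
cv2.imwrite("nine_point_pattern.png", canvas)
```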
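
For result 11, a k-means color quantization sketch, one common way to implement the algorithm described on the linked Wikipedia page.

```python
# K-means color quantization: cluster pixel colors and snap each pixel to its cluster center.
import cv2
import numpy as np

K = 8                                                        # number of colors to keep
img = cv2.imread("photo.png")                                # hypothetical file name
pixels = img.reshape(-1, 3).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# Replace every pixel by the center of its cluster to get the quantized image.
quantized = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("photo_quantized.png", quantized)
```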
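
For result 12, a sketch that approximates the dominant color with k-means and then masks pixels close to it. This is a generic approximation, not the MPEG-7 dominant-color descriptor from the attached PDF; the tolerance is arbitrary.

```python
# Dominant-color mask: cluster colors, take the most populated cluster center,
# then mask pixels within +/- TOL of that color.
import cv2
import numpy as np

img = cv2.imread("input.png")                                # hypothetical file name
pixels = img.reshape(-1, 3).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 5, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
dominant = centers[np.bincount(labels.flatten()).argmax()]   # most populated cluster center

TOL = 30                                                     # arbitrary color tolerance
lower = np.clip(dominant - TOL, 0, 255).astype(np.uint8)
upper = np.clip(dominant + TOL, 0, 255).astype(np.uint8)
mask = cv2.inRange(img, lower, upper)                        # 255 where the dominant color is
cv2.imwrite("dominant_color_mask.png", mask)
```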
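
For result 13, the crop-a-rectangular-ROI-then-extract-contours flow expressed in OpenCV terms; it does not diagnose the specific Rectangle to ROI wiring issue, but it shows the equivalent processing order.

```python
# Crop a rectangular ROI from a floor plan, then extract contours inside it only.
# "floor_plan.png" and the ROI rectangle are hypothetical.
import cv2

img = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)

# Rectangular ROI given as (left, top, width, height).
x, y, w, h = 50, 50, 400, 300
roi = img[y:y + h, x:x + w]

# Binarize (walls are dark lines on a light background) and extract contours in the ROI.
_, bw = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Shift the contour coordinates back into full-image coordinates.
contours = [c + (x, y) for c in contours]
print("found", len(contours), "contours inside the ROI")
```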