ASalcedo

Change resolution of image snapped by GigE camera

Recommended Posts

Hello to all.

First of all thanks a lot for reading this post and being able to help.

I have noticed the following problem:

My application has a camera; this camera takes images (snap mode) and the application processes them (detecting some edges in the image). It works fine.

But when I make the display bigger, my application takes longer to process the images (and for me that is crucial). I think this happens because in that case my application has to process a bigger image (bigger display = bigger image??).

So maybe if my camera takes images at a lower resolution, I can solve the problem.

So how can I change the image resolution captured by my GigE camera? In NI MAX or in LabVIEW?

Thanks a lot!

 

1 hour ago, ASalcedo said:

But when I make the display bigger, my application takes longer to process the images (and for me that is crucial). I think this happens because in that case my application has to process a bigger image (bigger display = bigger image??).

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computation to the CPU.

Quote

So maybe if my camera takes images at a lower resolution, I can solve the problem.

If you process smaller images, your processing time may also be proportionally smaller. Additionally, image transfer and memcopy times will be smaller. But images at lower resolution contain less information. You are the one who has to decide how small the resolution can be, in order to still derive meaningful results from the analysis.

If the bottleneck is rendering, you could constrain the GUI so that the display never gets too wide, or the zoom factor never too high. Another popular approach is to process every image from the camera, but to display only one image every so many.
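The "process every frame, display every Nth" idea is easy to sketch. Since the application itself is LabVIEW (graphical), here is a minimal Python illustration of the logic only; `process` and the `show` callback are hypothetical stand-ins for the edge-detection step and the display update:

```python
DISPLAY_EVERY = 5  # refresh the display once per 5 processed frames

def process(frame):
    # stand-in for the real edge-detection step
    return frame * 2

def run(frames, show=print):
    """Process every frame, but refresh the display only every Nth one."""
    results = []
    displayed = 0
    for i, frame in enumerate(frames):
        results.append(process(frame))   # every frame is processed
        if i % DISPLAY_EVERY == 0:       # but only every Nth is shown
            show(frame)
            displayed += 1
    return results, displayed

results, shown = run(range(12), show=lambda f: None)
# all 12 frames are processed, but the display is refreshed only 3 times
```

In LabVIEW the same pattern is a case structure around the display terminal, driven by the loop iteration count modulo N.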

Quote

So how can I change the image resolution captured by my GigE camera?

Depends on the camera. Look for image size, ROI (AOI), binning.

Quote

In NI MAX or in LabVIEW?

If you can with MAX, you can with LV (GigE, almost surely GenICam -> IMAQdx properties), with the advantage that in MAX you can save the default settings. If you can't with MAX/LV, it may be that a camera SDK allows control of additional features; check your camera documentation.

Edited by ensegre


First of all thanks a lot for replying!

2 hours ago, ensegre said:

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computation to the CPU.

Okay, so the problem is normal. Perfect!

2 hours ago, ensegre said:

If you process smaller images, your processing time may also be proportionally smaller. Additionally, image transfer and memcopy times will be smaller. But images at lower resolution contain less information. You are the one who has to decide how small the resolution can be, in order to still derive meaningful results from the analysis.

If the bottleneck is rendering, you could constrain the GUI so that the display never gets too wide, or the zoom factor never too high. Another popular approach is to process every image from the camera, but to display only one image every so many.

The user has to see a big display, so the perfect solution here is to process every image from the camera but display only one image every so many.

My question is this: I have a ROI property node of my display. Can I process the image with that ROI even if I don't wire the final processed image to the display?

2 hours ago, ensegre said:

Depends on the camera. Look for image size, ROI (AOI), binning.

My camera is a Basler acA1300-30gm.

I have used the VI IMAQ SetImageSize, but the problem is that I can't tell whether it really changes the resolution. Do I have to run this VI each time the camera snaps a photo, or just once (only once, after I run IMAQ Create)?

 

Thanks again.

3 minutes ago, ASalcedo said:

My question is this: I have a ROI property node of my display. Can I process the image with that ROI even if I don't wire the final processed image to the display?

You are the master of your code; you can do what you want. Perhaps you're asking: if I get a full image from my camera, can I extract a ROI with IMAQ? Short answer: yes, http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/region_of_interest_pal/. But you may also want to look into getting only a ROI from the camera, to reduce the payload. To do that, you send settings to the camera and get only that part of the image; you can't expect to draw something on a LV widget and magically have the camera know about it, unless you code for that.
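Conceptually, software-side ROI extraction is just cropping: only a sub-rectangle of the full frame goes on to the processing step. A minimal NumPy sketch (array names and dimensions are hypothetical; in LabVIEW this is what IMAQ Extract does):

```python
import numpy as np

def extract_roi(frame, left, top, width, height):
    """Return a rectangular ROI from a full 2-D grayscale frame."""
    return frame[top:top + height, left:left + width]

# e.g. a hypothetical full-resolution 16-bit frame
full = np.zeros((966, 1296), dtype=np.uint16)
roi = extract_roi(full, left=100, top=50, width=640, height=480)
# only the (480 x 640) sub-image would then be processed
```

Note this still pays the full transfer cost from the camera; a camera-side ROI (set via attributes) is what actually reduces the GigE payload.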

Quote

I have used the VI IMAQ SetImageSize, but the problem is that I can't tell whether it really changes the resolution.

I think you are confusing the IMAQ image buffer size with the actual content acquired by the camera and transferred to the computer. IIRC, IMAQ images auto-adapt their size if they are wired to receive image data of different dimensions than the size they were set to.

Of course you can also get your images at high resolution and resample them, but that adds processing time.

Quote

Do I have to run this VI each time the camera snaps a photo, or just once (only once, after I run IMAQ Create)?

You may also have to grasp how IMAQ images are handled, by the way: normally transformations are never done in place; they require the preallocation of a source buffer and a destination buffer.

7 hours ago, ensegre said:

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computation to the CPU.

I believe LabVIEW always uses the CPU for rendering everything, but if you're talking about window-level compositing and all that, then yes, I could see a really junk graphics card (like on a server-class machine) offloading that work to the CPU as well. Any recent integrated GPU should have enough power to do basic desktop work, though, or so I would expect.

 

I had a similar issue with a large number of quickly updating images and never really came up with a solid solution. Binning the image (which cuts the resolution in half for display) and casting it (which cuts the data from 16 bits/pixel to 8) help, but as mentioned, that adds processing time.

Quote

takes longer to process images (and for me that is crucial)

Is this a real-time target?

14 hours ago, smithd said:

I had a similar issue with a large number of quickly updating images and never really came up with a solid solution. Binning the image (which cuts the resolution in half for display) and casting it (which cuts the data from 16 bits/pixel to 8) help, but as mentioned, that adds processing time.

But if binning and casting take less time than handling a large, full-resolution image, that helps me.

By the way, how can I do binning and casting? In NI MAX?

14 hours ago, smithd said:

Is this a real-time target?

My application runs on an industrial PC, and the camera has to take 4 images every 130 ms and process them in 200 ms. It runs on Windows 7.


Binning can be done at the camera, in which case you should be able to see it somewhere in MAX. But if you do it there, the image is binned when captured, so all processing would operate on the binned version. There are, I believe, noise advantages to binning on the camera.

 

To do this just for display, you'd use IMAQ Cast Image (convert to U8 or I8; be sure to bit-shift by 4 or 8 depending on the source bit depth) and then bin using IMAQ Resample with zero-order sampling and x1 = x0/2, y1 = y0/2.
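For what the cast and bin steps do to the data, here is a NumPy sketch of the same idea (not the IMAQ VIs themselves; the 12-bit source depth and frame size are hypothetical examples). The bit shift keeps the 8 most significant bits, and zero-order resampling to half size is just keeping every second pixel:

```python
import numpy as np

def to_display(img16, source_bits=12):
    """Cast a 16-bit image buffer to 8 bits and halve its size for display.

    source_bits is the real bit depth the camera delivers (many "Mono12"
    cameras store 12-bit data in 16-bit pixels); shift so the top 8
    significant bits survive the cast.
    """
    shift = source_bits - 8                 # e.g. shift by 4 for 12-bit data
    img8 = (img16 >> shift).astype(np.uint8)
    return img8[::2, ::2]                   # zero-order downsample by 2

# hypothetical frame at the 12-bit maximum value
frame = np.full((966, 1296), 4095, dtype=np.uint16)
small = to_display(frame)
# small is (483, 648) and saturates at 255 instead of 4095
```

The equivalent LabVIEW chain is the snapped image -> IMAQ Cast Image (with the shift) -> IMAQ Resample -> display terminal, run once per frame you actually display.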

Edited by smithd

4 hours ago, smithd said:

To do this just for display, you'd use IMAQ Cast Image (convert to U8 or I8; be sure to bit-shift by 4 or 8 depending on the source bit depth) and then bin using IMAQ Resample with zero-order sampling and x1 = x0/2, y1 = y0/2.

Thank you a lot.

Could you post a little example in LV using IMAQ Cast and Resample as you described?

One more thing: do I have to do this every time the camera snaps an image, right after each snap?

Thanks again.

Edited by ASalcedo




