
Change resolution of image snapped by GigE camera


ASalcedo

Recommended Posts

Hello to all.

First of all, thanks a lot for reading this post and for any help you can offer.

I have noticed the following problem:

My application uses a camera. The camera takes images (snap mode) and the application processes them (detecting some edges in the image). It works fine.

But when I make the display bigger, my application takes longer to process images (and for me that is crucial). I think this happens because the application then has to process a bigger image (bigger display = bigger image??).

So maybe if my camera takes images at a lower resolution, I can solve the problem.

So how can I change the image resolution captured by my GigE camera? In NI MAX or in LabVIEW?

Thanks a lot!

 

1 hour ago, ASalcedo said:

But when I make the display bigger, my application takes longer to process images (and for me that is crucial). I think this happens because the application then has to process a bigger image (bigger display = bigger image??).

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computations to the CPU.

Quote

So maybe if my camera takes images at a lower resolution, I can solve the problem.

If you process smaller images, your processing time may also be proportionally smaller. Additionally, image transfer and memcopy times will be smaller. But images at lower resolution contain less information. You are the one who has to decide how small the resolution can be, in order to still derive meaningful results from the analysis.

If the bottleneck is rendering, you could constrain the GUI so that the display never gets too wide, or the zoom factor never too high. Another popular approach is to process every image from the camera, but to display only one image every so many.
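
For illustration, a minimal Python sketch of that pattern (the thread is about LabVIEW, where this would be a case structure around the image display terminal inside the grab loop; grab, process and display here are hypothetical stand-ins):

import numpy as np

DISPLAY_EVERY = 5  # render only one frame in five; tune to taste

def grab():
    # hypothetical stand-in for the camera snap/grab
    return np.random.randint(0, 256, size=(966, 1296), dtype=np.uint8)

def process(frame):
    # hypothetical stand-in for the edge-detection step; runs on every frame
    return frame.mean()

def display(frame):
    # hypothetical stand-in for the costly display update
    pass

for i in range(100):
    frame = grab()
    result = process(frame)        # always process
    if i % DISPLAY_EVERY == 0:
        display(frame)             # refresh the display only every Nth frame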

Quote

So how can I change the image resolution captured by my GigE camera?

Depends on the camera. Look for image size, ROI (AOI), binning.

Quote

In NI MAX or in LabVIEW?

If you can with MAX, you can with LV (GigE is almost surely GenICam -> IMAQdx properties), with the advantage that in MAX you can save the default settings. If you can't with MAX/LV, it might be that the camera's SDK allows control of additional features; check your camera documentation.
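
For what it's worth, here is a sketch of those camera-side settings through Basler's pypylon Python wrapper (my assumption, just to show the GenICam attribute names; in MAX/LV the same names appear among the IMAQdx camera attributes, and real cameras impose increment constraints on these values):

from pypylon import pylon

# connect to the first camera found (e.g. a GigE Basler ace)
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# ROI (AOI): a centered window half the sensor size; set size before offsets
max_w, max_h = camera.Width.Max, camera.Height.Max
camera.Width.Value = max_w // 2
camera.Height.Value = max_h // 2
camera.OffsetX.Value = max_w // 4
camera.OffsetY.Value = max_h // 4

# binning, if the model supports it; halves the payload in each direction
camera.BinningHorizontal.Value = 2
camera.BinningVertical.Value = 2

camera.Close()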


First of all, thanks a lot for replying!

2 hours ago, ensegre said:

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computations to the CPU.

Okay, so the problem is normal. Perfect!

2 hours ago, ensegre said:

If you process smaller images, your processing time may also be proportionally smaller. Additionally, image transfer and memcopy times will be smaller. But images at lower resolution contain less information. You are the one who has to decide how small the resolution can be, in order to still derive meaningful results from the analysis.

If the bottleneck is rendering, you could constrain the GUI so that the display never gets too wide, or the zoom factor never too high. Another popular approach is to process every image from the camera, but to display only one image every so many.

The user has to view a big display, so the perfect solution here is to process every image from the camera but display only one image every so many.

My question here is the following: I have a ROI property node for my display. Can I process the image with that ROI even if I don't wire the final processed image to the display?

2 hours ago, ensegre said:

Depends on the camera. Look for image size, ROI (AOI), binning.

My camera is a Basler acA1300-30gm.

I have used the VI IMAQ SetImageSize, but the problem is that I can't tell whether it really changes the resolution. Do I have to run this VI after each time the camera snaps a photo, or just once (only once after I run IMAQ Create)?

 

Thanks again.

3 minutes ago, ASalcedo said:

My question here is the following: I have a ROI property node for my display. Can I process the image with that ROI even if I don't wire the final processed image to the display?

You are the master of your code; you can do what you want. Perhaps you're asking: if I get a full image from my camera, can I extract a ROI with IMAQ? Short answer: yes, see http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/region_of_interest_pal/. But you may also want to look into getting only a ROI from the camera, to reduce the payload. To do that, you send settings to the camera and get only that part of the image; you can't expect to draw something on a LV widget and magically have the camera know about it, unless you code for that.
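
As a software-side illustration (a numpy stand-in, since I can't attach a VI here; in LV it's IMAQ Extract from the ROI palette linked above, and the coordinates are made up):

import numpy as np

full = np.zeros((966, 1296), dtype=np.uint8)     # a full-size frame

left, top, width, height = 400, 300, 320, 240    # arbitrary ROI
roi = full[top:top + height, left:left + width]  # a view, not a copy

# processing only roi touches width*height pixels instead of the whole frame,
# but note the full frame was still transferred over the wire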

Quote

I have used the VI IMAQ SetImageSize, but the problem is that I can't tell whether it really changes the resolution.

I think you are confusing the IMAQ image buffer size with the actual content acquired by the camera and transferred to the computer. IIRC, IMAQ images auto-adapt their size if they are wired to receive image data of a different size than they were set to.

Of course you can also get your images at high resolution and resample them, but that adds processing time.

Quote

Do I have to run this VI after each time the camera snaps a photo, or just once (only once after I run IMAQ Create)?

You may have to grasp how IMAQ images are handled, by the way: normally transformations are never in place; they require the preallocation of a source and a destination buffer.
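
A rough numpy analogy of that buffer model (hedged; IMAQ's memory management is its own, but the shape of the code is the same):

import numpy as np

# preallocate source and destination once, like two IMAQ Create calls...
src = np.empty((966, 1296), dtype=np.uint16)
dst = np.empty((966, 1296), dtype=np.uint16)

# ...then each transformation reads src and writes dst; src stays untouched
src[:] = np.random.randint(0, 4096, size=src.shape, dtype=np.uint16)
np.right_shift(src, 4, out=dst)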

7 hours ago, ensegre said:

The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computations to the CPU.

I believe LabVIEW always uses the CPU for rendering everything, but if you're talking about window-level compositing and all that, then yes, I could see a really junk graphics card (like on a server-class machine) offloading that work to the CPU as well. Any recent integrated GPU should have enough power to do basic desktop stuff, though, or so I would expect.

 

I had a similar issue with a large number of quickly updating images and never really came up with a solid solution. Binning the image (cuts the resolution in half for display) and casting it (cuts the data from 16 bits/pixel to 8) helps, but as mentioned, that increases processing time.

Quote

takes longer to process images (and for me that is crucial)

Is this a real-time target?

14 hours ago, smithd said:

I had a similar issue with a large number of quickly updating images and never really came up with a solid solution. Binning the image (cuts the resolution in half for display) and casting it (cuts the data from 16 bits/pixel to 8) helps, but as mentioned, that increases processing time.

But if binning and casting take less time than processing a large, high-resolution image, that helps me.

By the way, how can I do binning and casting? In NI MAX?

14 hours ago, smithd said:

Is this a real-time target?

My application runs on an industrial PC, and the camera has to take 4 images every 130 ms and process them within 200 ms. It runs on Windows 7.


Binning can be done at the camera, in which case you should be able to see it somewhere in MAX. But if you do it there, the image is binned when captured, so all processing would operate on the binned version. There are, I believe, noise advantages to binning on the camera.

 

To do this just for display, you'd use IMAQ Cast (convert to U8 or I8; be sure to bit shift by 4 or 8 depending on the source bit depth) and then bin using IMAQ Resample with zero-order interpolation and x1 = x0/2, y1 = y0/2.
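
In lieu of a VI snippet, a numpy transcription of that pipeline (a sketch: the 1296x966 size matches the acA1300-30gm, and the shift of 4 assumes 12-bit data in a 16-bit container; use 8 for true 16-bit data):

import numpy as np

def to_display(frame16, bit_depth=12):
    # IMAQ Cast equivalent: keep the top 8 bits (shift by 4 for 12-bit
    # data, by 8 for 16-bit data), then reinterpret as U8
    frame8 = (frame16 >> (bit_depth - 8)).astype(np.uint8)
    # IMAQ Resample (zero-order) equivalent: keep every 2nd pixel in x
    # and y, i.e. x1 = x0/2, y1 = y0/2
    return frame8[::2, ::2]

frame = np.random.randint(0, 4096, size=(966, 1296), dtype=np.uint16)
small = to_display(frame)   # 483 x 648, uint8, cheap to draw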

4 hours ago, smithd said:

To do this just for display, you'd use IMAQ Cast (convert to U8 or I8; be sure to bit shift by 4 or 8 depending on the source bit depth) and then bin using IMAQ Resample with zero-order interpolation and x1 = x0/2, y1 = y0/2.

Thank you very much.

Could you post a little example in LV using IMAQ Cast and IMAQ Resample as you described?

One more thing: do I have to do this every time the camera snaps an image, i.e. right after each snap?

Thanks again.
