ASalcedo

Problem: NI MAX frame acquisition snap

Recommended Posts

Hello to all.

I am using NI MAX to acquire images from a camera (snap mode with a hardware trigger).

The camera is a Basler 1300-30gm.

The problem is the following:

My camera snaps 4 images (one image every 40 ms).

In the Pylon Viewer (the camera's official software) I can see the four images correctly.

The problem is that when I try to view the 4 images in NI MAX, sometimes the camera cannot take the 4th image... as if there were no time to take the 4th image (or to process it for display in NI MAX).

So maybe there is a configuration parameter in NI MAX to solve this.

Any ideas? How can I solve it?

I set the packet size to 8000. Maybe the packet size is the problem? (A quick way to check this is sketched below.)

Thanks a lot.
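
A packet size of 8000 only works when jumbo frames are enabled on the NIC and on any switch in between; if they are not, GigE Vision packets get dropped and a late frame can go missing entirely. One quick cross-check outside NI MAX is to read back and lower the packet size with pypylon (Basler's Python binding for pylon). This is only a minimal sketch: it assumes pypylon is installed and that the camera exposes the standard GigE Vision feature names GevSCPSPacketSize and GevSCPD, so verify those against the camera's own feature tree.

```python
# Minimal sketch (assumptions: pypylon is installed, the Basler camera is the
# first device found, and it exposes the standard GigE Vision feature names).
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# A packet size of 8000 needs jumbo frames enabled end to end (NIC and switch).
# If they are not enabled, 1500 is the safe baseline.
print("Current packet size:", camera.GevSCPSPacketSize.GetValue())
camera.GevSCPSPacketSize.SetValue(1500)

# An inter-packet delay (in ticks) spreads each frame's packets out, easing
# the burst load on the NIC when several triggered frames arrive 40 ms apart.
camera.GevSCPD.SetValue(1000)

camera.Close()
```

If the images arrive reliably at a packet size of 1500, the problem was most likely packet loss on the network path rather than anything specific to NI MAX.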

 


  • Similar Content

    • By hutha
      New Toolkit : OpenCV wrapper for Raspberry Pi (LinuxRT) for LabVIEW
      A great learning platform for students and makers to learn machine vision applications with LabVIEW.
      It works with the LabVIEW Home 2014 edition and requires the LINX 3.0 toolkit.
      You can run the NI-VISION toolkit on a Raspberry Pi board too.
      1. QwaveCameraCV is a LabVIEW camera driver library for Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveCameraCV


      2. QwaveOpenCV provides OpenCV wrapper functions for LabVIEW on Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveOpenCV


      3. QwaveOpenCV-Examples: examples using OpenCV (C/C++) and NI-VISION for Raspberry Pi 3B/3B+ (LinuxRT)
      https://github.com/QWaveSystems/QwaveOpenCV-Examples


       
    • By SayMaster
      Hello fellow LabVIEWers,
      I am trying to get a self-developed PXI chassis up and running, with some problems.
       
      As this is my first time developing a chassis, some general questions about 3rd-party chassis:
      If I want to get a 3rd-party PXI chassis up and running with MAX, do I just need to add the right description files into the right folder? Otherwise the chassis will be recognized as "undefined" but will still work, except for triggers?
      What's my status right now?
      • The chassis is connected via PCIe 8361 - PXIe 8360 and the connection seems to work
      • The chassis seems to work, inserted cards get recognized by MAX, and "self test" works
      • The chassis seems to get recognized by the Keysight Resource Manager (I installed it and selected it in NI MAX)
      • No unknown devices in the Windows Device Manager (so the NI SMBus Controller gets recognized correctly)
      BTW, my current SW installation:
      PXI Services 17.5.1
      MAX 17.5
      VISA 17.5
      DAQ 17.5
      LV 17f2


      I am open to any suggestions and tips!
       
       
      br
      SayMaster
    • By _Mike_
      Hello!
      I am running a system with a GigE camera and an NI frame grabber in continuous acquisition mode, in which I change some attributes every now and then.
      So far I have been using the following scheme (written out as a code sketch after this post) to make sure I get the correct image after an attribute change:
      1. Change the attribute.
      2. Read the "Last buffer number" property -> n.
      3. Run IMAQdx Get Image.vi set to "buffer" with parameter n+2 (the +2 being because I assumed the attribute change arrives at the camera during a running exposure that is unaffected by the change, hence I needed to discard the "next" buffer).
      Unfortunately, I still every now and then acquired an image that obviously was acquired with the previous attributes (e.g. I dramatically increased/decreased the exposure, while the acquired image was very similar to the previous one). Guessing that it may have something to do with buffers, I increased the "magic number" from +2 to +3. It indeed helped, but after longer usage I discovered that it only reduced the frequency of the error.
      Hence I needed to design a more "bulletproof" solution that satisfies my timing requirements (stopping and starting the acquisition is out of the question, as it takes more than 100 ms, which is unacceptable for me).
      I would like to:
      1. Change the attribute.
      2. Acquire information from the camera that allows me to fetch an image acquired with the changed attribute.
      For this purpose I have discovered IMAQdx events, especially "Attribute updated" and "frameDone". Unfortunately, I could not find any detailed documentation about them. Therefore I would like to ask for your help in determining when "Attribute updated" is triggered. Is it when:
      1. the driver receives the command and pushes it to the camera? (pretty useless for me, as I cannot relate it to any particular exposure)
      2. the camera confirms receiving it? (then, assuming it arrives during an ongoing exposure, I'll discard the "next" image and expect the second next one to be correct)
      3. the camera confirms applying it? (then I assume that the next image should be obtained with the correct settings)
      4. the camera confirms it managed to acquire a frame with the new parameter? (a pretty unlikely scenario, but then I'd expect the correct image to reside in the "last" buffer)
      Could you help me determine which case is true? Also, should I be concerned that there may be a frame "in transition" between the camera and the IMAQdx driver, which would desynchronize my efforts here?
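
For reference, the buffer-skip scheme described in this post can be written out as a short sketch. This is Python-style pseudocode, not a real IMAQdx binding: set_attribute, read_last_buffer_number and get_image_at_buffer are hypothetical placeholders standing in for the attribute write, the "Last buffer number" property read, and IMAQdx Get Image.vi in "buffer" mode.

```python
# Python-style pseudocode of the buffer-skip scheme described in the post.
# set_attribute(), read_last_buffer_number() and get_image_at_buffer() are
# hypothetical placeholders for the corresponding IMAQdx calls, not a real API.

SKIP = 2  # buffers to discard after an attribute change (the "magic number")

def grab_after_attribute_change(session, name, value):
    """Change a camera attribute, then fetch the first buffer that should
    have been exposed entirely with the new setting."""
    set_attribute(session, name, value)      # the change may land mid-exposure
    n = read_last_buffer_number(session)     # last completed buffer -> n
    # Buffer n used the old setting, and buffer n+1 may have been exposing when
    # the change arrived, so wait for buffer n + SKIP before trusting a frame.
    return get_image_at_buffer(session, n + SKIP)
```

As the post notes, any fixed SKIP is only a guess about camera latency, which is why an event such as "Attribute updated" would be the more robust trigger if its timing could be pinned down.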
       
    • By Shaun07
      Hello,
       
      I need some help with changing the image type.
      How can I convert a greyscale image to a false-color image? (A sketch follows after this post.)
      Here, I have attached two images: 1. the greyscale image, and 2. an example of what I want to convert it to (false color).
      Any help would be appreciated.
       
      Thanks,
      Parth Panchal 
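
One common way to produce a false-color (pseudocolor) image is to pass each greyscale intensity through a color lookup table; in NI Vision this is typically done by applying a color palette to the image. As a language-neutral illustration, here is a minimal OpenCV/Python sketch; the filenames are placeholders.

```python
import cv2

# Load the greyscale image as a single-channel 8-bit array (placeholder name).
grey = cv2.imread("greyscale.png", cv2.IMREAD_GRAYSCALE)

# Map each intensity through a color lookup table to get a false-color image.
# COLORMAP_JET is the classic blue-to-red ramp; COLORMAP_HOT, COLORMAP_VIRIDIS
# and others are also available.
false_color = cv2.applyColorMap(grey, cv2.COLORMAP_JET)

cv2.imwrite("false_color.png", false_color)
```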


    • By Shaun07
      Hello All,
       
      I am new to camera LabVIEW programming.
       
      For my research work, I am using a camera to grab live images. I am stuck on one problem:
      How to treat the background?
      I tried to subtract a constant value from the entire image, but with that I am losing some of my data.
      I know one solution, but I don't know how to implement it.
      Problem: how to take the values from the 4 corners of the image and subtract those values from the entire image? (A sketch follows after this post.)
       
      If anybody has previously developed something similar, please help me out with this.
       
      Any help would be appreciated.
       
      Thanks,
      Shaun

      removebackground.vi
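
The corner-sampling idea above (estimate the background level from the four corners and subtract it from the whole image) is easy to prototype before wiring it into removebackground.vi. Here is a minimal NumPy sketch, assuming an 8-bit greyscale image and an arbitrarily chosen 10 x 10 pixel corner patch; the filename is a placeholder.

```python
import cv2
import numpy as np

def subtract_corner_background(image, patch=10):
    """Estimate the background as the mean of the four corner patches and
    subtract it from the whole image, clipping at 0."""
    corners = np.concatenate([
        image[:patch, :patch].ravel(),     # top-left corner
        image[:patch, -patch:].ravel(),    # top-right corner
        image[-patch:, :patch].ravel(),    # bottom-left corner
        image[-patch:, -patch:].ravel(),   # bottom-right corner
    ])
    background = int(round(corners.mean()))
    # Subtract in a wider signed type so low pixel values cannot wrap around.
    result = image.astype(np.int16) - background
    return np.clip(result, 0, 255).astype(np.uint8)

grey = cv2.imread("live_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
clean = subtract_corner_background(grey, patch=10)
```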