By apoorve kalot
I am following a tutorial on YouTube [https://www.youtube.com/watch?v=Ko0heNZhRvI] for setting up objects in LabVIEW for simulation, doing the steps as stated. When the program executes, the resulting spheres (which have been added to the 3D Picture Control) appear zoomed very far out. I have done the following steps to correct it, still no help:
1) followed the online help suggestion of using "Shift + drag" to correct it, but it's still no help,
2) changed the camera setting to None and enabled auto-redraw, still no help.
Since I am new to this, I request you to state how to correct the problem and how to avoid such problems.
Along with one question: how do I add coordinate axes for reference, independent of any other objects [i.e. the axes shouldn't be affected by rotation, translation, or other objects], in the 3D Picture Control?
Edit: the same question is also posted at: https://forums.ni.com/t5/LabVIEW/camera-postion-stuck-in-3D-picture-control/m-p/3990030#M1138526
New Toolkit : OpenCV wrapper for Raspberry Pi (LinuxRT) for LabVIEW
By hutha
A great learning platform for students and makers to learn machine-vision applications with LabVIEW.
It works with the LabVIEW Home 2014 edition and requires the LINX 3.0 toolkit.
You can run the NI Vision toolkit on a Raspberry Pi board too.
1. QwaveCameraCV — a LabVIEW camera driver library for Raspberry Pi 3B/3B+ (LinuxRT)
2. QwaveOpenCV — OpenCV wrapper functions for LabVIEW for Raspberry Pi 3B/3B+ (LinuxRT)
3. QwaveOpenCV Examples — examples using OpenCV (C/C++) and NI Vision for Raspberry Pi 3B/3B+ (LinuxRT)
I am running a system with a GigE camera and an NI frame grabber in continuous acquisition mode, in which I change some attributes every now and then.
So far I have been using the following scheme to make sure I get a correct image after an attribute change:
1) change the attribute
2) read the "Last Buffer Number" property -> n
3) run IMAQdx Get Image.vi set to "Buffer" with parameter n+2 (the +2 being due to the fact that I assumed the attribute change arrives at the camera during a running exposure that will be unaffected by the change, hence I needed to discard the "next" buffer)
Unfortunately, I still occasionally acquired an image that had obviously been captured with the previous attributes (e.g. I dramatically increased/decreased the exposure, yet the acquired image was very similar to the previously acquired one). Guessing that it might have something to do with the buffers, I increased the "magic number" from +2 to +3. It indeed helped, but after longer usage I discovered that it only reduced the frequency of the error.
Hence I need to design a more "bulletproof" solution that will satisfy my timing requirements (stopping and starting the acquisition is out of the question, as it takes more than 100 ms, which is unacceptable for me).
I would like to:
1) change the attribute
2) acquire information from the camera that will allow me to fetch an image acquired with the changed attribute
For this purpose I have discovered IMAQdx events — especially "Attribute Updated" and "Frame Done". Unfortunately, I could not find any detailed documentation about these functions. Therefore I would like to ask for your help in determining when "Attribute Updated" is triggered. Is it when:
1) the driver receives the command and pushes it to the camera? (pretty useless for me, as I cannot relate it to any particular exposure)
2) the camera confirms receiving it? (then, assuming it arrives during an ongoing exposure, I'll discard the "next" image and expect the second-next one to be correct)
3) the camera confirms applying it? (then I assume the next image should be obtained with the correct settings)
4) the camera confirms it has acquired a frame with the new parameter? (a pretty unlikely scenario, but then I'd expect the correct image to reside in the "last" buffer)
Could you help me determine which case is true? Also, should I be concerned about the fact that there may be a frame "in transition" between the camera and the IMAQdx driver, which will desynchronize my efforts here?
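To frame why the fixed "+2" offset is fragile, here is a small plain-Python simulation — a hypothetical model, not the IMAQdx API: a frame uses whichever setting was active when its exposure started, so whether buffer n+2 is correct depends entirely on when the change is actually applied, which the fixed offset cannot know.

```python
# Hypothetical timing model (not IMAQdx): which exposure setting each
# buffered frame was captured with, given when the attribute change
# actually took effect. All times in ms; frames start back to back.

def frames_after_change(change_time, frame_period, n_frames=6):
    """Return, per buffer index, whether that frame used the old or the
    new attribute value. A frame uses the NEW value only if its
    exposure started at or after change_time (simplified model)."""
    settings = []
    for k in range(n_frames):
        start = k * frame_period
        settings.append("new" if start >= change_time else "old")
    return settings

# Change lands mid-frame-1: buffer 2 (the "n+2" frame) is already correct.
print(frames_after_change(change_time=12, frame_period=10))
# Change is applied later (e.g. 25 ms of driver/camera latency): buffer 2
# still uses the old setting -- the intermittent failure described above.
print(frames_after_change(change_time=25, frame_period=10))
```

This is why an event tied to the camera *applying* the attribute (rather than a fixed buffer offset) is the robust anchor point.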
I need some help with changing the image type.
How can I convert a greyscale image to a false-color image?
I have attached two images: 1. the greyscale image, and 2. an example of what I want to convert it to (false color).
Any help would be appreciated.
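Not a LabVIEW block diagram, but conceptually false color is just a 256-entry lookup table from grey level to RGB, which is roughly what applying a color palette to an 8-bit image does in NI Vision. A minimal plain-Python sketch with a made-up blue-to-green-to-red palette (the ramp itself is arbitrary, chosen only for illustration):

```python
def grey_to_false_color(g):
    """Map an 8-bit grey level to an RGB triple along a simple
    blue -> green -> red ramp (a rough 'jet'-style palette)."""
    if g < 128:
        # Low values: blue fades out as green fades in.
        return (0, 2 * g, 255 - 2 * g)
    # High values: green fades out as red fades in.
    return (min(255, 2 * (g - 128)), max(0, 255 - 2 * (g - 128)), 0)

# Precompute the 256-entry lookup table once...
LUT = [grey_to_false_color(g) for g in range(256)]

def apply_false_color(grey_img):
    """grey_img: 2-D list of 8-bit grey values -> 2-D list of RGB tuples."""
    return [[LUT[p] for p in row] for row in grey_img]
```

The whole conversion is a per-pixel table lookup, so any palette (rainbow, temperature, etc.) works the same way — only the table changes.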
I am new to camera programming in LabVIEW.
For my research work, I am using a camera to grab live images. I am stuck on one problem:
how should I treat the background?
I tried subtracting a constant value from the entire image, but with that I am losing some of my data.
I know one solution, but I don't know how to implement it.
Problem: how do I take the values from the four corners of the image and subtract those values from the entire image?
If anybody has previously developed similar stuff, then please help me out with this.
Any help would be appreciated.
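The corner idea above can be sketched in plain Python (a hypothetical helper, not LabVIEW or NI Vision code): average a small patch in each of the four corners as the background estimate, then subtract that estimate from every pixel, clamping at zero so dark pixels don't wrap around.

```python
def corner_background_subtract(img, patch=2):
    """img: 2-D list of pixel values. Estimate the background as the
    mean of a patch x patch block in each of the four corners, then
    subtract it from every pixel, clamping negatives to 0."""
    h, w = len(img), len(img[0])
    corner_pixels = []
    for rows, cols in [(range(patch),        range(patch)),         # top-left
                       (range(patch),        range(w - patch, w)),  # top-right
                       (range(h - patch, h), range(patch)),         # bottom-left
                       (range(h - patch, h), range(w - patch, w))]: # bottom-right
        corner_pixels += [img[r][c] for r in rows for c in cols]
    bg = sum(corner_pixels) / len(corner_pixels)
    return [[max(0, round(p - bg)) for p in row] for row in img]
```

In LabVIEW terms, the same result could be had by extracting the four corner regions, averaging them, and feeding that constant into an image-subtract operation — unlike a fixed constant, the offset then tracks the actual background level of each frame.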