Industrial MIPI camera for Raspberry Pi

Hello everyone,

We have started development of our own embedded MIPI camera module for the Raspberry Pi, based on an industrial image sensor. We are currently in the specification phase and are interested in your preferences, so that we can develop a product that fits your requirements. If you have a moment, please consider filling in this 4-question questionnaire.


Edited by GeT-Cameras
  • 1 month later...


Here are some suggestions:

-Unique serial number that can be read by software

-Allow the possibility of using more than one lens. (Being able to focus as close as 3 cm from the lens would be good for inspection.) A low-distortion lens would also be good.

-Adaptable to standard lenses on the market (stay low-cost if possible)

-A microbolometer (thermal) camera would be amazing.

-Multiple resolution choices, from 1 MP to 24 MP

-Distortion correction in software, matched to the lens

-If you are ever able to create a 3D version (implying a dual camera), that would be amazing (alternating frames, one per side)

-Some of the options in your survey should all be possible (software- or hardware-triggered).

-The option to buy the camera with or without a housing
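The software distortion-correction suggestion can be sketched with the Brown-Conrady radial model that common calibration tools (e.g. OpenCV) use. The intrinsics and distortion coefficients below are illustrative placeholders, not values for any real lens:

```python
# Brown-Conrady radial distortion model (k1, k2 terms only).
# All constants are illustrative placeholders from a hypothetical calibration.
K1, K2 = -0.25, 0.08               # radial distortion coefficients
FX, FY = 1000.0, 1000.0            # focal lengths in pixels
CX, CY = 640.0, 360.0              # principal point

def distort_point(x_px, y_px):
    """Map an ideal (undistorted) pixel to where the lens actually images it."""
    # Normalize pixel coordinates onto the camera plane
    xn = (x_px - CX) / FX
    yn = (y_px - CY) / FY
    r2 = xn * xn + yn * yn
    # Radial scaling: barrel distortion pulls points inward when K1 < 0
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return CX + FX * xn * scale, CY + FY * yn * scale

# The principal point is unaffected; off-axis points shift toward the center
print(distort_point(640.0, 360.0))   # (640.0, 360.0)
print(distort_point(1140.0, 360.0))  # x lands inside 1140 for K1 < 0
```

Correcting an image is the inverse problem: build a remap table from this forward model once per lens, then apply it to every frame.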

What will be the maximum length for the cable?


  • 2 weeks later...
On 12/3/2018 at 11:32 AM, Benoit said:

What will be the maximum length for the cable?


From my limited experience with MIPI, this is critical. The differential serial lanes of a MIPI link all need to be matched in length to within a few mm to guarantee proper signal transmission. That means that when you design a PCB you generally have to use meandered microstrips to make sure every connection has exactly the same length. In addition, the MIPI standard is designed as a chip-to-chip interface and is not meant to be routed through long cables.

The D-PHY specification limits the maximum lane flight time to 2 ns, so on an FR-4 PCB with matched microstrip lines you get at most 25 to 30 cm of trace length. A typical FPC flex cable used to connect a camera module to a board has similar electrical characteristics, so it is not much different. That budget includes the traces from the sensor to the FPC connector, the FPC cable itself, and the traces from the FPC connector to the framegrabber chip.
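As a sanity check on the numbers above, the 2 ns flight-time budget converts to a length once you assume a propagation velocity. The effective dielectric constant below is a typical FR-4 microstrip value, not a spec figure, and the result ignores connectors and design margin, which is why practical budgets land closer to 25 to 30 cm:

```python
# Rough conversion of the D-PHY 2 ns lane flight-time budget into trace length.
# ER_EFF is a typical effective dielectric constant for FR-4 microstrip,
# not a value from the D-PHY spec.

C = 299_792_458.0          # speed of light in vacuum, m/s
ER_EFF = 3.0               # effective dielectric constant, FR-4 microstrip (typical)
MAX_FLIGHT_TIME = 2e-9     # seconds, per the D-PHY lane budget

velocity = C / ER_EFF ** 0.5             # signal velocity along the trace, m/s
max_length = MAX_FLIGHT_TIME * velocity  # total allowed electrical length, m

print(f"propagation velocity: {velocity / 1e8:.2f}e8 m/s")
print(f"max electrical length: {max_length * 100:.1f} cm")
```

This yields roughly 35 cm before any margin, consistent with the 25 to 30 cm practical figure once connectors and matching structures eat into the budget.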

  • 3 weeks later...

Hi Benoit,

Currently you can already connect all our USB3 machine vision cameras to the Raspberry Pi; that gives you all the options you require.

Our MIPI camera design for the Raspberry Pi is currently in the proof-of-concept stage. Our target is to have a working concept in 2 months; then the hardware design will start. Initially we will start with a 5 MP and a 20 MP sensor. With these sensors you can use ROI (region of interest) to reduce the resolution, so we can cover all resolutions. We will keep you posted on our progress. Concerning cable length, we will probably specify a maximum of 10 cm. The camera will have an M12 mount, with CS- and C-mount options, so you can connect all lenses.
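The ROI approach mentioned above boils down to reading out a smaller window of the full sensor. The helper below is a hypothetical sketch: real sensors expose ROI through driver-specific registers or APIs, and usually require the offsets and sizes to be aligned to small steps.

```python
def centered_roi(sensor_w, sensor_h, target_w, target_h):
    """Return (x, y, w, h) of a window centered on the sensor.

    Hypothetical helper: actual ROI configuration depends on the sensor
    and driver, which typically impose alignment constraints as well.
    """
    if target_w > sensor_w or target_h > sensor_h:
        raise ValueError("target exceeds sensor resolution")
    # Center the window by splitting the leftover pixels evenly
    x = (sensor_w - target_w) // 2
    y = (sensor_h - target_h) // 2
    return x, y, target_w, target_h

# Carve a 1920x1080 window out of a 5 MP (2592x1944) sensor
print(centered_roi(2592, 1944, 1920, 1080))  # (336, 432, 1920, 1080)
```

Because only the cropped pixels are read out, a smaller ROI also raises the achievable frame rate on most sensors.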

  • 1 year later...

Hi, I would also use this. High-sensitivity sensors/cameras with the right pixel size (not too small) are in demand in the astro-imaging crowd: finder-scope cameras, polar-alignment cameras, all-sky cameras, deep-sky cameras, even cameras to monitor the mount remotely. Most use ASCOM software to run them.

Edited by BonniCase
