Reading Camera Data


dknguyen

Hello. I was just wondering if anyone could give me the basic idea of how digital data is read from a camera (one that outputs digitized data). What I have so far is that there is usually an I2C port to control the functions of the camera, and a parallel interface that outputs the camera data.

- An 8-bit bus or so for pixel data (I presume the 8 bits of data per pixel, assuming monochrome. I'm not sure how colour data would be output unless the R, G, and B values, or whatever three values describe the pixel, come out sequentially before moving on to the next pixel).
- A pixel clock, which I assume clocks the individual bits that make up a pixel (or maybe another clock does that and this pixel clock is actually a pixel sync pulse indicating that all the current information belongs to the same pixel).
- A frame sync pulse which, when high, indicates that all the pixel information belongs to the same frame.
- A line sync pulse, similar to the frame sync pulse except that it indicates all the pixel information coming out belongs to the same line.

Is this the basic idea of how it works? I've heard it's usually best to use a dedicated CPLD or FPGA to read this data straight into RAM, where it can be fetched by the processor, but I'm not sure where to start with CPLDs or FPGAs (I'm leaning towards CPLDs since they are faster, non-volatile, and reading data into a RAM doesn't seem like something so complicated that you need an FPGA). The only ones I've used are the Xilinx Spartan-3s at university, but those were somewhat of a black box: I just followed the instruction sheets and learned more about the design theory than the actual usage of the hardware.
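On the I2C side, the control interface is usually just register writes: you take the sensor's 7-bit address from its datasheet and poke configuration registers for exposure, clock dividers, output format and so on. As a rough sketch of that idea from Linux userspace (the slave address and register map below are made up for illustration; the real values come from the sensor datasheet):

```c
/* Sketch: configuring a CMOS sensor over I2C from Linux userspace.
 * Slave address 0x30 and all register addresses/values are hypothetical. */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int sensor_write_reg(int fd, uint8_t reg, uint8_t val)
{
    uint8_t buf[2] = { reg, val };        /* register address, then data byte */
    return (write(fd, buf, 2) == 2) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);  /* the I2C bus the sensor hangs off */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x30) < 0) { /* 0x30: hypothetical 7-bit address */
        perror("ioctl"); return 1;
    }

    sensor_write_reg(fd, 0x12, 0x80);     /* hypothetical: soft reset          */
    sensor_write_reg(fd, 0x11, 0x01);     /* hypothetical: pixel clock divider */
    sensor_write_reg(fd, 0x0C, 0x08);     /* hypothetical: enable test pattern */

    close(fd);
    return 0;
}
```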
 
What kind of camera? I think all the small cellphone/webcam-style colour CMOS cameras I've seen implement a Bayer colour filter: interleaved RGBG (or something like that) pixels, so there's only one colour per pixel... Converting into an RGB array is a bit computationally expensive.
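To make that concrete, here's a rough sketch of the cheapest possible conversion, assuming an RGGB mosaic and 8-bit samples (real sensors may start the pattern differently, and proper demosaicing interpolates instead of collapsing 2x2 cells, which is where the real cost comes from):

```c
/* Minimal nearest-neighbour demosaic sketch for an RGGB Bayer mosaic.
 * Each 2x2 cell  R G
 *                G B   becomes one RGB pixel, halving resolution in x and y. */
#include <stddef.h>
#include <stdint.h>

void bayer_rggb_to_rgb(const uint8_t *raw, size_t w, size_t h, uint8_t *rgb)
{
    for (size_t y = 0; y + 1 < h; y += 2) {
        for (size_t x = 0; x + 1 < w; x += 2) {
            uint8_t r  = raw[y * w + x];            /* top-left: red       */
            uint8_t g1 = raw[y * w + x + 1];        /* top-right: green    */
            uint8_t g2 = raw[(y + 1) * w + x];      /* bottom-left: green  */
            uint8_t b  = raw[(y + 1) * w + x + 1];  /* bottom-right: blue  */

            size_t o = ((y / 2) * (w / 2) + (x / 2)) * 3;
            rgb[o + 0] = r;
            rgb[o + 1] = (uint8_t)((g1 + g2) / 2);  /* average the two greens */
            rgb[o + 2] = b;
        }
    }
}
```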

One reason to use the FPGA is that you can do some processing on the FPGA to get the data rate down to something manageable by the main processor. Otherwise you'll end up with images in the buffer, but nothing that can keep up with the frame rate. Alternatively, if you just want to DMA a small number of frames into memory and process them in a non-real-time manner, then a CPLD would make more sense.
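Even something as simple as 2x2 binning already cuts the pixel rate by a factor of four. On a real FPGA that would be HDL with line buffers; this C model is just to show the arithmetic involved:

```c
/* Sketch of a simple data-rate reduction step: 2x2 averaging (binning)
 * on a monochrome frame, quartering the number of pixels per frame. */
#include <stddef.h>
#include <stdint.h>

void bin_2x2(const uint8_t *in, size_t w, size_t h, uint8_t *out)
{
    for (size_t y = 0; y + 1 < h; y += 2) {
        for (size_t x = 0; x + 1 < w; x += 2) {
            unsigned sum = in[y * w + x]       + in[y * w + x + 1]
                         + in[(y + 1) * w + x] + in[(y + 1) * w + x + 1];
            out[(y / 2) * (w / 2) + (x / 2)] = (uint8_t)(sum / 4);
        }
    }
}
```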
 
The question is so device-specific that it's difficult to get into a discussion of how it's done without specific and intimate knowledge of how the CMOS sensor and its support electronics work. If this data is coming from the CMOS sensor itself, though, hjames is right: it's likely raw pixel data, and RGB pixel data has to be processed out of it afterwards.
 
Hmm, the point that an FPGA can do some preprocessing is something that never occurred to me.

**broken link removed**

I'm mainly trying to understand how these CMOS camera datasheets are read in general. This one doesn't describe (or doesn't describe very well) the timing diagrams and the functions of the different pulses. I'm also not sure how the colour information is encoded (although if it's B/W, that seems simple enough).
 
I'd be surprised if there isn't a "real" data sheet somewhere, or at least a programming/user guide - there isn't anywhere near enough information in that to actually use the chip. At least not without a lot of swearing...
 
I think it follows the Channel Link specification.

I read some text like this (not in your PDF):

The Channel Link technology is integral to the transmission of video data.
Image data and image enables are transmitted on the Channel Link bus.
Four enable signals are defined as:
• FVAL—Frame Valid (FVAL) is defined HIGH for valid lines.
• LVAL—Line Valid (LVAL) is defined HIGH for valid pixels.
• DVAL—Data Valid (DVAL) is defined HIGH when data is valid.
• Spare— A spare has been defined for future use.
All four enables must be provided by the camera on each Channel Link
chip. All unused data bits must be tied to a known value by the camera.



Figures 4 and 5 of your PDF show the timing pretty well. The data bits (10-bit?) of course represent the pixel depth, with each new pixel coming on the rising clock edge.
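In other words, the capture rule boils down to "latch the bus on the rising pixel clock whenever FVAL, LVAL and DVAL are all high". A little software model of that rule, just for illustration (in practice it would be a small state machine in the CPLD/FPGA, and the signal struct here is invented):

```c
/* Software model of sampling a parallel camera bus using the enables
 * described above: keep a pixel only when FVAL, LVAL and DVAL are all high. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct clk_edge {            /* one rising pixel-clock edge, as sampled   */
    bool fval, lval, dval;   /* frame / line / data valid enables         */
    uint16_t data;           /* value on the parallel pixel-data bus      */
};

size_t capture_frame(const struct clk_edge *edges, size_t n,
                     uint16_t *buf, size_t buf_len)
{
    size_t count = 0;
    for (size_t i = 0; i < n && count < buf_len; i++) {
        if (edges[i].fval && edges[i].lval && edges[i].dval)
            buf[count++] = edges[i].data & 0x3FF;   /* keep 10 valid bits */
    }
    return count;            /* number of pixels written to RAM */
}
```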

You can run it in triggered mode (you tell it when to expose), or you can let it free-run (the camera exposes continuously without you sending a trigger signal).

Does this help?

cheers!
 
Yeah, that helps. I'm just trying to get a feel for the CMOS cameras right now.

It's a bit strange, though, how all these cameras have 10-bit ADCs but only 8 parallel data pins.
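As far as I understand, sensors usually handle that in one of two ways, and the datasheet says which: in 8-bit mode they simply drop the two LSBs, or they push the 10-bit value through a gamma/companding curve first so the low end keeps some of the extra precision. Roughly:

```c
/* Two common ways a 10-bit sample can end up on an 8-bit bus; which one a
 * given sensor uses (if either) depends entirely on its datasheet. */
#include <math.h>
#include <stdint.h>

uint8_t truncate_10_to_8(uint16_t raw10)
{
    return (uint8_t)(raw10 >> 2);                  /* keep the top 8 bits */
}

uint8_t gamma_10_to_8(uint16_t raw10)
{
    double norm = (raw10 & 0x3FF) / 1023.0;        /* normalise to 0.0..1.0 */
    return (uint8_t)(pow(norm, 1.0 / 2.2) * 255.0 + 0.5);  /* gamma ~2.2 */
}
```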

I thought that data sheet was really small. I found another one that is much much longer.
**broken link removed**
 