Slide 7 of 65
dvernet

Why are there fewer red-filtered pixels than blue-filtered pixels? According to the color theory slide, humans have significantly fewer S cones than L cones, so wouldn't it make more sense for a camera to exploit the fact that humans are better at perceiving red by capturing more red light than blue light (much as Y'CbCr exploits the fact that humans are much more sensitive to differences in luminance than to differences in color)?
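As an aside, the Y'CbCr trade-off mentioned here can be made concrete with the standard BT.601 conversion (the coefficients below come from that standard, not from this thread); note how heavily the luma channel Y' weights green:

```python
# BT.601 full-range RGB -> Y'CbCr conversion (standard coefficients).
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # luma: green dominates
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128  # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128  # red-difference chroma
    return y, cb, cr

# Mid-gray carries no color information: chroma sits at the neutral value 128.
print(rgb_to_ycbcr(128, 128, 128))
# Pure green yields a higher luma than pure red, which beats pure blue.
print(rgb_to_ycbcr(0, 255, 0)[0], rgb_to_ycbcr(255, 0, 0)[0], rgb_to_ycbcr(0, 0, 255)[0])
```

Because luma is where most of the perceptual detail lives, Cb and Cr can be subsampled aggressively (e.g. 4:2:0) with little visible loss.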

kmcrane

@dvernet: Every other row alternates between red and blue, so there are roughly the same number of red and blue sensors. The only reason it might look different on this slide is that we're near the boundary, and just looking at a small chunk of the CCD.
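The counting argument above can be checked with a small sketch. This assumes the common RGGB Bayer layout (an assumption; the slide may crop the mosaic differently): even rows alternate red/green, odd rows alternate green/blue.

```python
from collections import Counter

def bayer_color(row, col):
    """Filter color at (row, col) in an assumed RGGB Bayer mosaic:
    even rows alternate R, G; odd rows alternate G, B."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Tally filter colors over an even-sized patch of the sensor.
counts = Counter(bayer_color(r, c) for r in range(4) for c in range(6))
print(counts)  # R and B appear in equal numbers; G appears twice as often
```

Over any even-sized patch, red and blue counts match exactly; an odd-sized crop (like the chunk shown on the slide) is what makes them look unequal.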

dvernet

Derp, of course. Thanks

lucida

Why is the quantum efficiency for capturing wavelengths corresponding to red significantly worse than that for blue and green?

whdawn

Why are there so many green-filtered pixels? Is it because of Y'CbCr?

dvernet

@whdawn see my comment above and @kmcrane's response. Red and blue pixels occur in equal numbers; it only looks otherwise because the image shows a small chunk of the sensor. Green-filtered pixels really are twice as common in a Bayer mosaic, though: the eye is most sensitive to luminance around green wavelengths, so green is sampled more densely.

whdawn

@dvernet I see, thanks!