Wow, that's interesting. I didn't know that's how CCDs worked. If I understand correctly, 1/3 of the "pixels" captured the red, 1/3 green, 1/3 blue. Does that mean the sensor now has 3x the resolution it had before?
Often it is even 50% green, 25% red and 25% blue pixels. There are different patterns, though. The "megapixel" number quoted for cameras counts subpixels individually; that is, a camera labelled "10MP" does not have 10 million pixels of each color but 10 million subpixels in total.
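As a back-of-the-envelope sketch (the sensor dimensions below are made up for illustration), the breakdown for a standard RGGB Bayer mosaic looks like this:

```python
# Photosite breakdown for an RGGB Bayer mosaic, whose repeating 2x2
# tile is:  R G
#           G B
# so half the photosites are green, a quarter red, a quarter blue.

def bayer_counts(width, height):
    """Return (green, red, blue) photosite counts; assumes even dims."""
    total = width * height
    return total // 2, total // 4, total // 4

# A hypothetical "10MP" sensor, e.g. 3872 x 2592 photosites:
green, red, blue = bayer_counts(3872, 2592)
# green + red + blue == 10,036,224 subpixels in total -- that is the
# marketing "10MP", not 10 million full-colour pixels per channel.
```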
If you have a device that can output RAWs, you can look at a RAW image using the FOSS photo development program "Darktable". Choose "photosite color" as the "demosaic" filter to show the individual color channel values (and thereby the Bayer pattern of your camera).
But yes, after removing the filter, you have three times the number of pixels but you lost color information.
Demosaicing algorithms are very good at restoring the resolution "lost" to the BFA. They can introduce some artifacts (zipper effect, "labyrinth", fringe color, to name a few) but in general, sharpness isn't lost as much as people imagine.
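For intuition, here's the most naive interpolation step, with made-up values (plain bilinear averaging, no edge-awareness): at a red photosite of an RGGB mosaic, green isn't measured, so it's estimated from the four green neighbours. The zipper and fringe-color artifacts come precisely from this kind of blind averaging across edges, which is what the better algorithms work around.

```python
# Naive bilinear demosaicing step: estimate the missing green value at
# a red photosite as the mean of its four green 4-neighbours.
# Real algorithms add edge-directed logic on top to reduce artifacts.

def green_at_red(mosaic, y, x):
    """Interpolate green at interior red site (y, x) of an RGGB mosaic."""
    return (mosaic[y - 1][x] + mosaic[y + 1][x] +
            mosaic[y][x - 1] + mosaic[y][x + 1]) / 4

# Toy 3x3 raw patch centred on a red photosite (its 4-neighbours are green):
patch = [
    [10, 80, 10],
    [60, 200, 40],  # 200 is the raw *red* reading at the centre
    [10, 20, 10],
]
```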
Nowadays the classical algorithms are being replaced by convnets that are trained on BFA/image pairs and can get very good results -- at the cost of placing a convnet in the middle (so much higher computational cost, which can be offloaded to a GPU/AI accelerator if available).
If you want to see what a "pixel perfect" camera gives you, there are the Sigma cameras with Foveon sensors[0], or you can check the cameras that have a sensor-shift superresolution approach (some pro Olympus and Hasselblad models have this feature). Sensor-shift SR works best on static scenes, because it takes several images which are later combined into a single picture, and any movement between the images may introduce artifacts.
[0] which capture full color data for every pixel, as they use silicon depth to filter wavelength
Foveon sounds great in theory, but it doesn't deliver IMHO. It can achieve parity in terms of pixel level sharpness and color at the lower ISOs, but picture quality breaks down very quickly even at moderate ISOs.
> at the cost of placing a convnet in the middle (so much higher computational cost, which can be offloaded to a GPU/AI accelerator if available)
It also makes it harder to undo the effects of the demosaicking algorithm, which may be important if you're doing things like subpixel superresolution.
Now that's something that I haven't done, but you left me wondering... Wouldn't any demosaicing algorithm complicate things at that point? If you have a few links to share I'd like to read a bit more about it.
I always wondered if this was a good ratio. I get that green usually has the strongest signal and thus better low-light performance. For bright shots, I find that preserving higher resolution in blue results in higher perceptual resolution of the final image. You can simulate something like it by using an extreme 'night mode' more-red/no-blue display mode and watching a 4k video.
Green was chosen because it's what the human eye is most sensitive to. Look at Fuji's X-Trans[0], and there are also RGBW arrays[1] that prioritize dynamic range.
All in all, the BFA is "good enough" most of the time. For the use cases where it isn't, you're either:
* Budget constrained and can't really afford not using a BFA
* Able to (pay for and) use either a color wheel in front of your sensor, or a prism + triple sensor setup.
* Willing to bite the bullet and go with a "strange" color array. You'll probably need to work on the demosaicing software side to get proper support and fix any artifacts.
[0] even more green! 20/36 photosites are green, 8 red, 8 blue
[1] with W being white, meaning no color filter or "panchromatic cell". In theory this helps in dim light conditions.
The requirements for sensors are different from those for displays. I guess I was asking why the non-rectangular displays didn't have more blue than green or red, as that would improve perceived resolution more.
> I always wondered if this was a good ratio. I get that green usually has the strongest signal and thus better low-light performance.
More importantly, green is close to what your eyes perceive as luminance. This is important because you can perceive a lot more luma detail than chroma detail. This is why things like 4:2:2 sampling work.
If you read Bayer's original patent, he proposed using Y Cr Cb (luminance, colour part red, colour part blue) instead of GBR filters[0]. This would be optimal from a computer science perspective. Sadly it doesn't work physically: sensing negative-blue and negative-red can't really be done with a simple filter.
Since the "pixels" are either formed at every intersection of 4 photosites (overlapping each other) or by interpolating data for each color to include the "missing" photosites (which is effectively the same), the megapixel count should fairly accurately represent both the number of photosites and the number of pixels in the output image.
I'm not exactly sure how the edge pixels are treated, but the difference in number between pixels and photosites should be on the order of a few thousand at most.
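The arithmetic behind that (a sketch; the sensor dimensions are hypothetical):

```python
# If each output pixel sits at an interior intersection of a 2x2 quad
# of photosites, an H x W sensor yields (H-1) x (W-1) such pixels.

def interior_pixels(h, w):
    return (h - 1) * (w - 1)

h, w = 2592, 3872                 # a hypothetical ~10MP sensor
photosites = h * w                # 10,036,224
pixels = interior_pixels(h, w)    # 10,029,761
diff = photosites - pixels        # h + w - 1 = 6463: a few thousand
```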
Ah yes. I had forgotten about this. I believe there are also extra pixels at the edge of some sensors that are unexposed, and just used for calibration purposes.
Are you saying that one photosite might be included in more than one pixel and therefore the overall pixel count is roughly equal to the number of photosites?
I'm saying that each photosite definitely is included in more than one output pixel, and I'm also saying that the number of output pixels should be about the same as the number of photosites.
This is obviously capturing less information than if you had a completely separate set of photosites for each pixel, but the megapixel count of cameras is nevertheless accurate.
Modern cameras sometimes come with a "pixel shift" function, which uses the image stabilization system to take 4 images, each shifted one photosite from the others, to construct an image where each pixel contains the information of 4 independent photosites with no sharing between pixels.
The resolution of the final image is the same as a normal image, but the result is much clearer, and far less likely to suffer from blue/red moire.
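A sketch of why the 4-shot version works, assuming an RGGB mosaic (the tile layout and helper below are mine, for illustration): shifting the sensor right, down, and diagonally means every pixel location is sampled through each position of the 2x2 filter tile, so all of R, G, G, B are measured directly.

```python
# 4-shot pixel shift on an RGGB mosaic: each location sees every
# position of the repeating 2x2 colour-filter tile, so full RGB is
# captured per pixel with no interpolation.

TILE = [["R", "G"],
        ["G", "B"]]  # the repeating RGGB colour-filter tile

def colors_sampled(y, x):
    """Filter colours seen at location (y, x) across the 4 shifted shots."""
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    return sorted(TILE[(y + dy) % 2][(x + dx) % 2] for dy, dx in shifts)

# Every location gets one red, two green and one blue sample:
# colors_sampled(0, 0) == ["B", "G", "G", "R"], and the same at any (y, x).
```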
Bayer filters are 50% green, 25% red, 25% blue for consumer devices.
The reason is that green actually captures much more of the luminance information, and our eyes have a much better luminance resolution than color resolution.
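You can see that green dominance directly in the standard luma weights (using the Rec. 601 coefficients here; Rec. 709 shifts them a bit, but green still dominates):

```python
# Rec. 601 luma: green alone carries ~59% of perceived brightness,
# more than red and blue combined -- roughly what the Bayer pattern's
# 50% green share is exploiting.
WEIGHTS = {"R": 0.299, "G": 0.587, "B": 0.114}

def luma(r, g, b):
    return WEIGHTS["R"] * r + WEIGHTS["G"] * g + WEIGHTS["B"] * b
```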
Tangentially, it's why the so-called YUV 420 (chroma subsampling) is so effective, where it's effectively encoding Y (luminance) data for every pixel (in a block of 4), but U/V (chrominance) only for every pair (or quad, someone correct me) of pixels.
There are examples online of pictures [1] with their luminance resolution decreased: you can immediately see the pixelation, and of their chrominance resolution decreased: you can barely tell the difference.
Because their patents ran out before all the complementary technologies were economically feasible (and CMOS sensors were invented by someone else by that point anyway). It's actually a terrible example of the disruption dilemma.
Edit: And there was really no feasible transition path for them anyway. The business depended on skimming a little bit for every photo taken. The main selling point of digital cameras was that you could take unlimited photos at no extra cost.
Customers aren't stupid. Even with patents, if you make the camera more expensive to account for the lost revenue on film and processing chemicals, people aren't going to buy it.
YUV/YCbCr 420 means that there is one set of chroma samples (Cb+Cr) for each 2x2 block of luma samples (pixels).
Often, the chroma samples fall on pixels in even rows and on even lines. So pixels in odd rows or on odd lines, have to borrow (interpolate) their chroma values from neighbouring pixels.
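A minimal sketch of that 4:2:0 downsampling step (averaging each 2x2 block here; real encoders may use different chroma siting and filters, and the toy plane below is made up):

```python
# 4:2:0 subsampling sketch: keep one chroma value per 2x2 block of
# pixels, here by averaging the block. Assumes even dimensions.

def subsample_420(plane):
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (plane[y][x] + plane[y][x + 1] +
                     plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

cb = [[10, 20, 30, 40],
      [10, 20, 30, 40]]
half = subsample_420(cb)  # the 4x2 plane is stored as just 2 chroma samples
```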
It always had the same resolution; it's just that beforehand you had to process it down by 3x to get a colour image. What it has now is more spectral range, especially outside the visible spectrum.
Not specific to CCDs, CMOS sensors also have Bayer filters. Actually, fancy cameras with CCDs skip Bayer filters altogether by using prisms to split light: https://en.wikipedia.org/wiki/Three-CCD_camera
This is very exciting and I wonder if a similar process could be applied to consumer DSLR/MILC cameras. Would love to shoot some high-quality video in UV/IR.
It's certainly possible, there's already a company[1] that sells cameras with this modification performed. And a few cameras that come from the factory with no Bayer filter, like the Leica Monochrom, but all the ones I know of are very expensive.
This reminds me of how the Chinese figured out that a laser could burn through the glass and ablate the glue holding the backs on the newer iPhones, allowing them to be replaced far more easily.
While it's a neat technique, you could also just buy a monochrome camera. The astrophotography community in particular seems to like them, so that might be a good keyword to search for.
True, astrophotographers like monochrome cameras because you can prioritize gathering brightness signal over color signal, so you get a more detailed photo; you can also use narrowband filters and image under a full moon or in inner cities.
However, astrophotographers also complain about the price premium of monochrome cameras. Given the same sensor, the monochrome version is typically 20% - 30% more expensive than the color version, which is counterintuitive - you don't need to put a Bayer filter on! So if we can perfect the technique to debayer color sensors, the astrophotography community would be elated.
> the monochrome version is typically 20% - 30% more expensive than the color version, which is counterintuitive...
The market for monochrome sensors is very tiny compared to the rest of the commercial products. Every phone now has 2 or more cameras on it, and there are billions of those.
Any changes to the manufacturing steps means more setup and effort. Different test procedures, quality control, documentation, etc.. That is all overhead, to be absorbed by a relatively small production volume.
I'm surprised it is only a 30% premium, I'd have expected higher actually.
I'm an avid astrophotographer, and the prices for cooled mono and cooled color cameras are the same. If you compare a dedicated, cooled astro camera to a consumer DSLR then yes, they are more expensive. But apples to apples they are exactly the same price.
Actually, looking at live prices, the mono version is a bit cheaper:
> because you can prioritize gathering brightness signal over color signal, so you get a more detailed photo
I wonder how long until phone cameras are purely monochrome, and apply ML to add the "correct" color in post-processing.
Actually, wasn't there some phone a few years ago with one high-res black-and-white sensor and one low-res color sensor, which combined them through some trickery to produce a sharp color image?