
The bit about the green over-representation in camera color filters is partially correct. Human color sensitivity varies a lot from individual to individual (and not just amongst individuals with color blindness), but general statistics indicate we are most sensitive to red light.

The main reason is that green does indeed overwhelmingly contribute to perceptual luminance (over 70% once sRGB is linearized: https://www.w3.org/TR/WCAG20/#relativeluminancedef), and modern demosaicking algorithms rely on both derived luminance and chroma information to get a good result (and increasingly on spatial information, e.g. "is this region of the image a vertical edge").
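For reference, here is how the green weighting in that linked formula works out; a minimal sketch in Python (the function names are mine, the constants are straight from the WCAG definition):

    # WCAG 2.0 relative luminance for an sRGB pixel, components in 0..1.
    def relative_luminance(r, g, b):
        def linearize(c):
            # Undo the sRGB transfer curve before weighting.
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        rl, gl, bl = linearize(r), linearize(g), linearize(b)
        # Green alone carries ~71.5% of the luminance weight.
        return 0.2126 * rl + 0.7152 * gl + 0.0722 * bl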

Small neural networks are, I believe, the current state of the art (e.g. trained to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade-secret stuff.
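For a rough idea of what such a network looks like, a minimal sketch in PyTorch (the layer widths and the TinyDemosaicNet name are purely illustrative, not anything a vendor actually ships):

    import torch.nn as nn

    # Toy demosaicking network: maps a single-channel Bayer mosaic to full RGB.
    # A few small convolutional layers are enough to compete on PSNR benchmarks;
    # the exact sizes here are illustrative only.
    class TinyDemosaicNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, mosaic):      # mosaic: (N, 1, H, W) sensor values
            return self.net(mosaic)     # output: (N, 3, H, W) RGB estimate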



> Small neural networks are, I believe, the current state of the art (e.g. trained to reverse a 16x16 color filter pattern for the given camera). What is currently in use by modern digital cameras is all trade-secret stuff.

Considering you usually shoot RAW, and debayer and process in post, the camera hasn't done any of that.

It's only smartphones that might be doing internal AI debayering, but they're already hallucinating most of the image anyway.


Sure - if you don't want to do demosaicing on the camera, that's fine. That doesn't mean there isn't an algorithm there as an option.

If you care about getting an image that is as accurate as possible to the scene, then it is well within your interest to use a Convolutional Neural Network-based algorithm, since these are amongst the highest performing in terms of measured PSNR (which is what nearly all demosaicing algorithms in academia are measured on). Are you maybe thinking of generative AI?
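(For context, PSNR is just a log-scaled mean squared error against a ground-truth image; a minimal sketch, assuming float images scaled to 0..1:)

    import numpy as np

    def psnr(ground_truth, demosaiced, peak=1.0):
        # Peak signal-to-noise ratio in dB; higher means the reconstruction
        # is closer to the reference image.
        mse = np.mean((ground_truth - demosaiced) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)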


At least in broadcast/cinema, no one uses CNNs for debayering, because why would you?

In cinema, you just use a 6K sensor and conventional debayering for a perfect 4K image. Even the $2000 Sony FX-30 ships with that feature nowadays. Combined with a good optical low-pass filter, that'll also avoid any and all moiré noise.

In broadcast, if you worry about moiré noise or debayering quality, you just buy a Sony Z750 with a three-chip prism design, which avoids the problem entirely by having three separate full-resolution sensors.


Yes, people usually shoot RAW (anyone spending this much on a camera knows better) - but these cameras default to JPEG and often have dual-capture (RAW+JPEG) modes.


To be clear, they default to JPEG for the image preview on the monitor (LCD screen). Whenever viewing an image on a professional camera, you’re always seeing the resulting JPEG image.

The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).


> Whenever viewing an image on a professional camera

Viewing any preview image on any camera implies a debayered version: who says it's JPEG-encoded - why would it need to be? Every time I browse my SD card full of persisted RAWs, is the camera unnecessarily converting to JPEG just to convert it back to bitmap display data?

> The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).

Retaining only JPEG is the default configuration on all current-generation Sony and Canon mirrorless cameras: you have to go out of your way to persist RAW.


The cameras typically store a camera-display-sized preview JPEG in the raw files.
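If you want to see it, a LibRaw wrapper like rawpy can pull the embedded preview out; a rough sketch, assuming the file actually carries a JPEG thumbnail (the file name is just an example):

    import rawpy

    # Extract the preview the camera embedded in the raw file.
    with rawpy.imread("photo.ARW") as raw:
        thumb = raw.extract_thumb()

    if thumb.format == rawpy.ThumbFormat.JPEG:
        # The embedded preview is already JPEG-encoded; write it out as-is.
        with open("preview.jpg", "wb") as f:
            f.write(thumb.data)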

> we are most sensitive to red light

> green does indeed overwhelmingly contribute to perceptual luminance

so... if luminance contribution is different from "sensitivity" to you - what do you mean by sensitivity?


Upon further reading, I think I am wrong here. My confusion was that I had read that over 60% of the cones in one's eye are "red" cones (which is a bad generalization); there is more nuance to it.

Given equal power red, blue, or green light hitting our eyes, humans tend to rate green "brighter" in pairwise comparative surveys. That is why it is predominant in a perceptual luminance calculation converting from RGB.

Though there are many more L-cones (which react most strongly to "yellow" light, not "red"; also "many more" varies across individuals) than M-cones (which react most strongly to a "greenish cyan"), the combination of these two cone types (which make up ~95% of the cones in the eye) means that we are able to sense green light much more efficiently than other wavelengths. S-cones (which react most strongly to "purple") are very sparse.


This is way oversimplified, but I always understood it as: our eyes can see red with very little power needed, but our eyes can differentiate more detail with green.


Is it related to the fact that monkeys/humans evolved around dense green forests?


Well, plants and eyes long predate apes.

Water is most transparent in the middle of the "visible" spectrum (green); it absorbs red and scatters blue. The atmosphere has a lot of water, as does, of course, the ocean, which was the birthplace of plants and eyeballs.

It would be natural for both plants and eyes to evolve to exploit the fact that there is a green notch in the water transparency curve.

Edit: after scrolling, I find more discussion on this below.


Eyes aren't all equal. Our trichromacy is fairly rare in the world of animals.


I think any explanation along those lines would have a "just-so" aspect to it. How would we go about verifying such a thing? Perhaps if we compared and contrasted the eyes of savanna apes with those of forest apes and saw a difference, which to my knowledge we do not. Anyway, sunlight at ground level peaks around 555nm, so it's believed that we're optimizing for that by being more sensitive to green.



