
Dichroic Prisms and Filters Less Precise than Color Masks?


Peter Moretti


  • Premium Member

It seems that prism designs which split light into R, G and B wavelengths should have an advantage over Bayer mask designs. More light and co-sited colors seem hard to beat. However, the need for more sensors, very precise lenses and finely aligned prisms is noted as a technical hurdle (esp. w/ large sensors).

 

But is there another drawback to the beam splitter approach, namely how precisely the beam is split? For example, the three-prism designs that I've looked at contain a filter that reflects red and transmits green. But such a dichroic mirror should have a transmission and reflectance overlap around its crossover wavelength. I.e., around the border between red and green, some red is transmitted and some green is reflected.

 

Which has me wondering: are masks more precise than prisms and mirrors when it comes to separating colors? For example, if a Bayer mask designates that green ends at 570nm and red begins just above 570nm, can this separation be made so precisely that green transmission is ~100% right up to 570nm and ~0% above it, while red transmission is ~0% at 570nm and ~100% above it?
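
To put numbers on the kind of "brick wall" response I'm asking about, here's a minimal sketch; the 570nm cutoff and the 10nm edge width are just illustration numbers I've made up, not any real filter's specs:

```python
import numpy as np

wl = np.arange(400, 751)  # nm, visible range

# Idealized "brick wall": green passes 100% up to 570nm, red 100% above it.
ideal_green = (wl <= 570).astype(float)
ideal_red = 1.0 - ideal_green

# A more gradual edge: transmission rolls off around the crossover,
# so some red leaks into the green channel and vice versa.
def soft_edge(cutoff, width):
    return 1.0 / (1.0 + np.exp((wl - cutoff) / width))

real_green = soft_edge(570, 10.0)
real_red = 1.0 - real_green

for nm in (550, 570, 590):
    i = nm - 400
    print(f"{nm}nm  ideal G/R: {ideal_green[i]:.2f}/{ideal_red[i]:.2f}   "
          f"gradual G/R: {real_green[i]:.2f}/{real_red[i]:.2f}")
```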

 

And is such additional precision an advantage of the mask approach?

 

Thanks much!


  • Premium Member

No, actually, dichroics can be engineered far more precisely than chemical dye filters.

 

With dye filters, you can get a pretty steep slope on red, less so on green, and blue is really bad. The issue is something to do with how the electrons behave in chemical bonds, and it's not an easy nut to crack. This is the reason for the orange look of negative and IP/IN film. By making the layers yellow and orange where they don't have green and blue dye, those layers "borrow" some of the sharpness from the red end of the spectrum to help out the blue end. IIRC, this was a Kodak invention, soon after WWII.

 

This is also why you lose more light in filters that raise the color temperature than in ones that lower it. The blue you need to go from tungsten to daylight costs a couple of stops, while the orange to go the other way costs only 2/3 of a stop.
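
As a rough back-of-the-envelope check (the transmission fractions below are illustrative round numbers, not measured values for any particular filter), the loss in stops is just log base 2 of one over the transmission:

```python
import math

def stops_lost(transmission):
    # One stop is a halving of the light, so stops = log2(1 / transmission).
    return math.log2(1.0 / transmission)

# Illustrative round numbers: a full blue (tungsten-to-daylight) conversion
# passing ~25% of the light, a full orange (daylight-to-tungsten)
# conversion passing ~63%.
print(f"blue conversion:   {stops_lost(0.25):.2f} stops")  # 2.00 stops
print(f"orange conversion: {stops_lost(0.63):.2f} stops")  # ~0.67 stops
```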

 

-- J.S.


  • Premium Member

John,

 

Thanks very much. This might seem like a very basic question, but what do the masks or prisms do with the parts of the spectrum that are not red, green or blue? I understand that R, G and B are primary colors and can be combined to make a wide range of other colors, called a gamut.

 

But is that really the same as saying that if you take only the R, G and B wavelengths from an incoming beam of visible light and recombine them, the result is exactly the same as the original incoming beam?

 

This doesn't make sense to me because there are seven colors in the visible spectrum: violet, indigo, blue, green, yellow, orange, red.

 

Violet: 380–420nm

Indigo: 420–450nm

Blue: 450–495nm

Green: 495–570nm

Yellow: 570–590nm

Orange: 590–620nm

Red: 620–750nm

 

Now I can understand violet and indigo being grouped with blue. And maybe red and orange are grouped together. But where does yellow go? Is it part of green? Part of red and orange? Is it excluded?

 

I've been reading about color theory, but am still very confused, obviously, LOL!


  • Premium Member
But is that really the same as saying that if you take only the R, G and B wavelengths from an incoming beam of visible light and recombine them, the result is exactly the same as the original incoming beam?

 

No, it won't be exactly the same in the absolute physical sense. But it can be made to look the same to human observers.

 

If we shine an HMI light through a purple gel and measure how much light we get at each wavelength from 400 to 700 nm, we can make a graph that'll be a wavy, jagged sort of line, perhaps like part of one of those stock market graphs you see in newspapers. Being purple, we'd have high ends and a dip in the middle. Then we could take three LEDs, red, green, and blue, and adjust their currents until the combined light from them looked the same to us as the gelled HMI. If we graphed the light from the combined LEDs, the curve would be very different, with two high humps for red and blue, and a low one in the middle for green. But to our eyes, they'd look the same.
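
Here's a minimal numerical sketch of that matching process, assuming made-up Gaussian cone sensitivities and LED spectra (none of these curves are real measured data):

```python
import numpy as np

wl = np.arange(400, 701)  # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy stand-ins for the three cone sensitivity curves.
cones = np.stack([gaussian(565, 40),   # "red"
                  gaussian(535, 35),   # "green"
                  gaussian(445, 25)])  # "blue"

# Toy narrowband LED spectra: red, green, blue.
leds = np.stack([gaussian(630, 12), gaussian(525, 15), gaussian(465, 12)])

# Target: a "purple" spectrum, high at both ends with a dip in the middle.
target = gaussian(430, 30) + gaussian(650, 30)

def cone_response(spectrum):
    # Sum of spectrum x sensitivity at each wavelength, per cone type.
    return cones @ spectrum

# Solve the 3x3 system for LED drive levels that reproduce the target's
# cone responses. (A buildable match also needs all drives >= 0;
# out-of-gamut targets would come back negative.)
A = cones @ leds.T
drive = np.linalg.solve(A, cone_response(target))

mix = drive @ leds
print("target responses: ", np.round(cone_response(target), 2))
print("LED mix responses:", np.round(cone_response(mix), 2))
# Physically different spectra, identical cone responses: a metameric pair.
```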

 

Red, green, and blue are primary colors because of the way our eyes work. Pigeons, for instance, divide the visible spectrum into five primaries. Most colorblind people see the world via two primaries, like early two-color movies. There are some who are missing more than one primary, and see a monochrome world.

 

-- J.S.


This doesn't make sense to me because there are seven colors in the visible spectrum: violet, indigo, blue, green, yellow, orange, red.

There are only seven colours because Isaac Newton believed that seven was some kind of universal number: seven notes in the musical scale, seven planets (as known then), etc. He had no theories of colour mixing; that came later. So he looked at the spectrum that he formed with a prism and divided it up into seven areas. It was his way of pleasing both God and Science.

 

There is a load of stuff about the three primary colours, and an alternative theory with four primaries (red, green, blue, and yellow) that works better at explaining colour blindness. It brings in everything from physics, physiology and biochemistry to psychology and linguistics. But this isn't a book :blink:

 

In the human eye, we have three types of colour detector (rods). Each is sensitive to a different but overlapping range of wavelengths, but they peak at fairly precise wavelengths that we see as red, green and blue. When we look at those pure wavelengths, that is the colour we see. A single wavelength of 650nm will trigger a response just in the red detector, and we will see "red". 520nm would trigger just the green detector. But since the detectors' sensitivities do overlap, the in-between wavelengths will trigger more than one detector. So with, for example, 589nm (sodium yellow), we get a response in both the red and the green detectors, and interpret that combined signal as "yellow".

 

The important thing here is that we see any colour purely as a combination of signals from the three receptors. So if we mixed wavelengths of green and red light, we'd get responses in both the red and green detectors, which is exactly what we just interpreted as yellow. The wavelengths are different, but the results as we see them are indistinguishable.

 

Most natural surfaces reflect a very broad range of wavelengths, but some wavelengths more than others. However, once the light is in the human eye, it's all reduced to three signals from three types of colour detector.


  • Premium Member

IIRC, it's the cones that are for color (photopic) vision, and the rods are for monochrome low light level (scotopic) vision.

 

Human vision has three primaries, except for a few very rare cases of four-color (tetrachromatic) vision, and, of course, the colorblind. Making an imaging system with more than three primaries gets us a wider gamut than we can reproduce with three. The four-color tests I've seen can deliver some saturated teals that current production systems can't, but the skin tones are the same because they're within both gamuts.

 

The interesting thing about the photopigment genes is that red and green are both on the X chromosome, while blue is on chromosome 7. As a result, red-green colorblindness is more common in men (IIRC, about 7%) than in women (IIRC, about 0.5%). Everybody gets two chances of getting a good blue gene, but guys only get one chance at red and green.

 

Anyhow, what's relevant to the original question in all this is that, just like the output curve we could make for the purple-gelled HMI in the first example, we can derive curves for the sensitivities of the three types of cones, wavelength by wavelength. What happens when you look at the purple light is that its curve gets multiplied by each of the cone sensitivity curves, and the area under each resulting curve is the amount of red, green, or blue that gets reported to your brain. The sensitivity curves are what mathematicians call "weighting functions", and the areas are weighted integrals. So you can have different light source curves that produce the same weighted integrals.
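
In symbols (just restating the above; S(λ) is the source spectrum and the w_i are the cone sensitivity weighting functions):

$$R_i = \int_{400}^{700} S(\lambda)\, w_i(\lambda)\, d\lambda, \qquad i \in \{\text{red, green, blue}\}$$

Two physically different sources match in appearance whenever all three of these integrals agree; that's exactly the gelled-HMI-versus-LEDs situation above.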

 

-- J.S.


  • Premium Member

Thanks guys. I very much appreciate your help! I believe I understand the function of cones in the eye as they relate to what we call primary colors.

 

What I still don't understand is the wavelength delineations used for R, G and B in prism and mask designs. Most succinctly put, is there a wavelength gap between G and R?

 

Take for instance a Bayer mask. Is G 495nm to 570nm and R 620nm to 750nm? Or is G let's say 495nm to 600nm and R 600nm to 750nm?

 

In the first case, G and R correspond to what we generally recognize as green and red. But wouldn't there be a region between 570nm and 620nm in which the sensor is completely blind? So wouldn't a beam of yellow-only light (e.g. 600nm) be unable to pass through the mask, and hence never reach the sensor?

 

In the second case, G and R would not have a wavelength gap between them. But G and R would extend past what we call green and red; each would include a portion of the yellow spectrum.

 

Thanks again!


  • Premium Member
Most succinctly put, is there a wavelength gap between G and R?

 

Well, it's much worse than you imagine. There are both overlaps and gaps. There are basically no wavelengths at which any dye filter gives you 100%, and none at which they give you 0%. How much they filter out varies nanometer to nanometer, which is why we draw graphs. I looked around for some filter transmissivity graphs on the internet, and didn't find anything useful. There's a bunch of numerical data, though. The pictures for dye filters are usually sorta like these human eye sensitivity curves from Wikipedia, only worse:

 

http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg

 

These curves from Wikipedia are what mathematicians call "normalized", which means that all the values for a given color are multiplied by a constant chosen to make the peak equal 1.0. The constant is different for each of the colors. Filter curves (which I know I've got in the old Trimble book in storage someplace) are usually absolute rather than normalized; the peak for red gets quite high, and for blue depressingly low.
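
Something like this toy model shows both points; the Gaussian-ish transmission curves are made up for illustration, not any real filter's data:

```python
import numpy as np

wl = np.arange(400, 701)  # nm

def band(center, width, peak):
    # Toy dye-filter transmission: a hump that tops out at `peak`
    # (never 100%) and bottoms out at 2% (never 0%).
    return 0.02 + (peak - 0.02) * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Illustrative absolute curves: red peaks high, blue depressingly low.
red = band(610, 45, 0.85)
green = band(540, 40, 0.60)
blue = band(460, 35, 0.35)

# Overlap rather than a gap: at 575nm both red and green still transmit
# a meaningful fraction, so the sensor isn't blind between them.
i = 575 - 400
print(f"575nm  red: {red[i]:.2f}  green: {green[i]:.2f}")

# Normalizing rescales each curve so its own peak reads 1.0; the shapes
# stay the same, but the absolute sensitivity differences are hidden.
red_n, green_n, blue_n = (c / c.max() for c in (red, green, blue))
print(f"peaks before: {red.max():.2f} {green.max():.2f} {blue.max():.2f}")
print(f"peaks after:  {red_n.max():.2f} {green_n.max():.2f} {blue_n.max():.2f}")
```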

 

Yes, it's amazing that with this crazy looking s--t we can actually see. ;-)

 

-- J.S.


  • Premium Member

Hey John,

 

So if I follow you correctly, the poor blue performance of some video cameras is not caused by a lack of sensitivity to blue in the silicon itself. Rather, it's caused by dye filters doing a poor job of passing blue while blocking other wavelengths.

 

This seems to make sense to me, as I vaguely remember reading that silicon is actually quite sensitive to blue.


  • Premium Member
So if I follow you correctly, the poor blue performance of some video cameras is not caused by a lack of sensitivity to blue in the silicon itself. Rather, it's caused by dye filters doing a poor job of passing blue while blocking other wavelengths.

 

Yes, that and the camera designer's choice to go with a less saturated blue primary as an engineering tradeoff to get more sensitivity.
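A toy illustration of that tradeoff, with made-up Gaussian filter shapes rather than any real camera's curves: widening the blue filter's passband gathers more total light, at the cost of a less selective, less saturated blue primary.

```python
import numpy as np

wl = np.arange(400, 701)  # nm

def blue_filter(width):
    return np.exp(-0.5 * ((wl - 460) / width) ** 2)

narrow, wide = blue_filter(20), blue_filter(45)

# Relative light gathered under a flat (equal-energy) illuminant:
print(f"wide/narrow light ratio: {wide.sum() / narrow.sum():.2f}")  # ~2x

# But the wide filter also passes far more green (e.g. at 540nm),
# desaturating the blue primary:
i = 540 - 400
print(f"at 540nm  narrow: {narrow[i]:.3f}  wide: {wide[i]:.3f}")
```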

-- J.S.

