
Colorimetric Camera Proposed at Electronic Imaging 2023



I am one of the founders of a startup that has developed the LMS camera. Unlike standard RGB cameras, the LMS camera mimics the spectral sensitivities of the Long, Medium and Short (LMS) cones of the retina, allowing it to capture accurate colors in any lighting, including mixed lighting. Additionally, it has a little over 1 stop of dynamic-range advantage and a little under 1 stop of sensitivity/ISO advantage over standard RGB Bayer cameras.

Our team recently presented a paper on the LMS camera at the Electronic Imaging 2023 conference. You can find more information about our technology and the paper here.

As the LMS camera has several potential applications, we are keen to gauge the interest of the cinematography community to see if this is an application we should target first. Please take a moment to check out our technology and let us know your thoughts.


Here's the motivation for our tech, if you don't want to click through to our web page:

RGB filters were first designed decades ago and have only been fine-tuned since. Back then we had neither the powerful computers nor the image-processing understanding needed to mimic our eyes.

The main problem in directly sensing accurate reds is that the ideal red spectral response is negative over part of the spectrum. No physical filter can have a negative response; the only way to achieve one is via color correction.

Sensing LMS and then converting to RGB gives you the correct spectral response, but it would have resulted in high noise with the simple processing available decades back, when the RGB camera architecture was designed.

Modern computers and our algorithms make the LMS camera possible.
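A toy numerical sketch of the point above (all matrix values are invented for illustration; they are not our production numbers): any matrix that maps broad LMS-like responses to display RGB must contain negative entries, and applying it scales per-pixel noise by the root-sum-square of each row.

```python
import numpy as np

# Invented, illustration-only LMS -> linear RGB conversion matrix.
# The negative entries are unavoidable: the ideal red response is
# negative over part of the spectrum, and no physical filter can be.
M = np.array([
    [ 5.47, -4.64,  0.17],
    [-1.12,  2.29, -0.17],
    [ 0.03, -0.19,  1.16],
])

lms = np.array([0.6, 0.5, 0.3])   # broad, overlapping cone-like responses
rgb = M @ lms                     # color-corrected output

# Independent sensor noise of std-dev s on each channel comes out as
# s * sqrt(sum(row^2)) per output channel after the matrix is applied.
noise_gain = np.sqrt((M ** 2).sum(axis=1))
print(rgb)
print(noise_gain)   # values well above 1 mean the correction amplifies noise
```

With the simple pipelines of decades past, that noise gain was the deal-breaker; it is exactly what modern processing has to tame.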


  • Premium Member

I guess my fear would be that what you're calling an LMS camera is likely to use very unsaturated filters on its sensor, given that the spectral sensitivities of human cone cells are themselves fairly broad. This is already something of a problem in modern cinematography cameras. Many of them already use unsaturated filters in order to maximise spatial resolution and sensitivity, which can make it difficult for the electronics to figure out what colour things are. Blue LEDs, for instance, are very saturated at about 440 to 445nm, and many modern cameras misreport them as purple, especially when at or near overexposure.

This is supposed to be a triplet of red, green and blue LEDs. The red clips to white and it shouldn't, but the blue looks purple.

Light-emitting diode - Wikipedia

Can the new design help with this?

 


Hi Phil,

You have made a couple of astute observations. It is true that LMS filters have broad spectral sensitivities, similar to the retinal cones, but their effective sensitivities become narrow after color correction. RGB filters undergo a similar transformation, though to a lesser extent. I have attached a set of graphs that illustrate this transformation.
 

Your observation regarding blue-purple confusion is also correct. Standard RGB sensors often struggle to accurately sense blue, unlike the LMS sensor, which closely resembles our eyes.

The primary issue with existing cameras is their image processing pipelines, which cannot apply strong color correction without amplifying noise significantly. As a result, color filter designers face a tradeoff between color accuracy and SNR. However, our research has demonstrated that this is a false dichotomy.
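A small sketch of that tradeoff (both matrices below are made up purely for illustration): a white-preserving color correction matrix has rows summing to 1, and the stronger its off-diagonal terms, the more it amplifies independent per-pixel noise.

```python
import numpy as np

def noise_gain(ccm):
    """Per-channel std-dev multiplier for independent input noise."""
    return np.sqrt((np.asarray(ccm) ** 2).sum(axis=1))

# Mildly saturated filters need only gentle correction (invented numbers).
mild = np.array([[ 1.2, -0.1, -0.1],
                 [-0.1,  1.2, -0.1],
                 [-0.1, -0.1,  1.2]])

# Desaturated filters need strong correction to reach accurate color.
strong = np.array([[ 2.0, -0.7, -0.3],
                   [-0.5,  1.9, -0.4],
                   [-0.2, -0.8,  2.0]])

print(noise_gain(mild))    # a touch above 1: small noise penalty
print(noise_gain(strong))  # well above 1: visibly noisier output
```

Filter designers have traditionally had to pick a point on this curve; our argument is that better processing moves the curve itself.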

wimage.png


I read the white paper. This looks to be a dramatic leap in getting more accurate color out of standard sensors. Even if this technology had reduced sensitivity (although it appears to be the opposite) I, and I think many others, would prefer more accurate colors over just about anything else. Arri beat Red at the high end cinematography game basically due to better color over everything else. People (including myself) still use tungsten lights due to better color, despite the many inconveniences.

The genius of this technology is that it uses the same univariance that our brain uses to determine color. [For those that haven't heard about the principle of univariance, this video explains it very well: https://video.byui.edu/media/The+Principle+of+Univariance/1_11gv9jhz . ]
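Here's a toy numeric illustration of univariance (all numbers invented): a single receptor class collapses the whole spectrum to one number, so a dim light at its peak wavelength and a brighter light off-peak can be literally indistinguishable to it.

```python
import numpy as np

wl = np.linspace(400, 700, 301)            # wavelengths in nm, 1 nm steps
sens = np.exp(-((wl - 550) / 50) ** 2)     # Gaussian cone-like sensitivity

def resp(spectrum):
    """A receptor reports one number: sensitivity-weighted total light."""
    return float(np.sum(sens * spectrum))

a = np.exp(-((wl - 550) / 10) ** 2)        # dim light at the peak
b = np.exp(-((wl - 480) / 10) ** 2)        # light far from the peak...
b *= resp(a) / resp(b)                     # ...brightened until responses match

print(resp(a), resp(b))                    # equal: one cone cannot tell them apart
```

Color vision only works because the brain compares the outputs of three such receptors against each other.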

My criticism is calling this a camera. You don't currently appear to have a camera, you don't currently appear to have a sensor, and you don't currently appear to have the LMS color filter manufacturing figured out. This appears to be a breakthrough, but I think it's going to be a little while before we see it implemented. It looks like you are trying to market your technology in the hope that other companies will license it. (I hope it does get picked up right away, though.)

Also, in the white paper, why was a 20-year-old Nikon D100 sensor used as the Bayer-pattern quantum efficiency to compare against? There is no explanation of why that sensor was chosen.


Thank you for your kind words. I have a concurrent thread running in the Reddit cinematography community, where people are dismissing the importance of color accuracy. We indeed do not have a camera yet, and I am trying to figure out which markets to address.

We picked the old Nikon D100 simply because its simulation model was readily available for ISETCam, the standard research software for modeling cameras. We compared the D100's quantum-efficiency curves with a few modern CMOS sensors to ensure they were roughly similar.


Here is my unsolicited advice: find a company / sensor whose quantum efficiency best matches your easy-to-manufacture (Gaussian) LMS filters and go from there. This tech should dazzle right away in order to gain traction.

There are currently many hybrid cameras that can shoot photos and videos well, so there is overlap in the low to mid photo/video world. Because of that, it feels like more people are camera/mount agnostic. 

Back in 2010, the Panasonic GH2 was released and it was huge at the time, because it was one of the first lower cost cameras that could do really good 24 fps. Many people gravitated towards that camera and the m4/3 mount it came with because it had something that no other camera had. 

If this new CFA is truly revolutionary, then it may not matter what camera it is in, as people would likely gravitate towards that system.

The mid-level market might also be a good target, as the high end is likely using more finely tuned Bayer CFAs that may be hard to compete with in a first-generation CFA/sensor combo.

Just my 2 cents, though.

Thank you for the D100 explanation.


  • Premium Member

High-end cinema cameras are such a niche market that they would likely only generate losses and take far too much development time compared to other markets. And people tend to grade the image so heavily in post that I am very sceptical whether there would be any real benefit to a more colour-accurate sensor.

One market where colour accuracy MIGHT actually be usable is markets where images are meant to look as "natural" as possible, at least to a point, especially if the image would be easy to grade to save post time: for example, nature documentaries and other factual programming, higher-end video shoots, commercials and so on. Photographers could have lots of use for more accurate colours as well. So I would not bother with high-end cinema cameras but instead concentrate on the mid-level market, where there are many more practical applications and much more value for such a technology.

Sensitivity is very important, though, and no one would buy if dynamic range and sensitivity are not on par with or better than current Bayer technology, as most of these markets are all about limited lighting budgets or natural light.


  • Premium Member
2 minutes ago, aapo lettinen said:

 

One market where colour accuracy MIGHT actually be usable is markets where images are meant to look as "natural" as possible at least to a point […]

Sensitivity is very important though and no one would buy if dynamic range and sensitivity is not on par or better […]

Basically meaning that if Sony is not interested in it, then there is not much of a market for the technology, as they make almost all of the sensors suitable for such applications and quite a big share of the camera bodies too.


  • Premium Member

To sum this up, I think people are likely to be a bit cautious about this simply because we've heard a lot about alternatives to Bayer's mosaic in the past, and the results have been pretty much a wash. Filters described as emerald, yellow, cyan and colourless white have all been tried and in the end the engineering compromise almost always ends up as a balance between unsaturated filters (higher sensitivity, lower noise, poorer colour rendering) and more saturated filters (better colours, but less sensitive and noisier).

Small differences can be shown in theory but in practice it seems that a given area of silicon has a certain amount of performance, and within sensible limits the amount of picture quality which can be extracted from it is close to a zero-sum game, perhaps within a stop or so.

I think this needs to be demonstrated practically, and that could be done using an existing sensor design with new filters. That's still a big deal and I understand that it's probably difficult to make that happen, but that's probably what it would take.


  • Premium Member
1 hour ago, Phil Rhodes said:

I think this needs to be demonstrated practically, and that could be done using an existing sensor design with new filters. That's still a big deal and I understand that it's probably difficult to make that happen, but that's probably what it would take.

Taking a good-quality b/w sensor, using some pixel binning to get "larger virtual pixels" and making a larger add-on colour filter to sit on top of it should work as a demonstration without costing too much.


9 hours ago, aapo lettinen said:

Sensitivity is very important though and no one would buy if dynamic range and sensitivity is not on par or better than with current bayer technology as most of these markets are all about limited lighting budgets or natural light

LMS provides a little over 1 stop of improvement in dynamic range and a little under 1 stop in sensitivity. One stop is quite interesting for small-aperture phone and consumer cameras.

How important is 1 stop for cinematography?
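For calibration, the plain definitional arithmetic (the 1.75 below is just an example ratio, not a measured figure): a stop is a factor of two in light, so the claims above translate as follows.

```python
import math

def stops(ratio):
    """Express a light/SNR ratio in photographic stops."""
    return math.log2(ratio)

print(stops(2.0))    # 1 stop: double the light, or half the lighting power
print(stops(1.75))   # roughly what "a little under 1 stop" means as a ratio
```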

 


23 hours ago, Joshua Cadmium said:

Even if this technology had reduced sensitivity (although it appears to be the opposite) I, and I think many others, would prefer more accurate colors over just about anything else.

LMS does have better sensitivity and dynamic range, which is why our eyes evolved this way.

While camera vendors work hard to sense colors as our eyes do, I do not think evolution paid a high price to achieve our particular color perception. Had our eyes sensed colors the way today's cameras do, we would have been just as successful as a species. Cameras do not have more metamerism than our eyes, just different metamerism.

Evolution picked the LMS design for its higher SNR and dynamic range.
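A toy sketch of the photon-capture argument (Gaussian filters with invented widths): broad, cone-like filters simply pass more of the incoming light than narrow, saturated ones, and more photons means better shot-noise SNR before any processing happens.

```python
import numpy as np

wl = np.linspace(400, 700, 301)               # wavelengths in nm
light = np.ones_like(wl)                      # spectrally flat illumination

broad = np.exp(-((wl - 550) / 60) ** 2)       # broad, cone-like filter
narrow = np.exp(-((wl - 550) / 25) ** 2)      # narrow, saturated filter

def captured(filt):
    """Total light passed by a filter under the illumination above."""
    return float(np.sum(filt * light))

print(captured(broad) / captured(narrow))     # broad filter catches over 2x the photons
```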


  • Premium Member
6 hours ago, aapo lettinen said:

taking a good quality b/w sensor, using some pixel binning to get "larger virtual pixels" and making a larger add-on colour filter on top of it should work as a demonstration without costing too much

The problem with doing that is that the colour filters are put on modern camera sensors as part of the manufacturing process, usually when the sensors are still on the wafer they were built on. The same photolithographic techniques which are used to create the circuitry are used to deposit and etch away the colour filters, so once you've finished manufacturing the sensor, mounted and encapsulated it and turned it into a deployable component, there isn't really any way to go back and put a filter array on it.

You'd have to go to a sensor manufacturer and commission it to make one of its existing sensors with your filter array, and you'd have quite a bit of development to do before you had filter materials with the optical properties you wanted that were compatible with that process. Then you'd have to build that sensor into some sort of camera, even just a test rig that could fire up the sensor and take stills, with all the considerations of matching the sensor to an appropriate lens. And finally you'd have to write the demosaic software to give you a finished image, which is where the real magic would happen.

In short, it's not a trivial thing to do, and it would require some significant funding, but that's what it would take to test this out.

P


  • Premium Member
47 minutes ago, Tripurari Singh said:

LMS provides a little over 1 stop improvement in dynamic range and a little under 1 stop improvement in sensitivity. 1 stop is quite interesting for small aperture phone and consumer cameras.

How important is 1 stop for cinematography?

 

One stop is useful for low-light / no-light shooting if it comes with no drawbacks, but it does not make that big of a difference in the end. There are lots of modern cameras with far better low-light capability than most cinematographers even want to use regularly when any lighting at all is possible, so it only makes a small difference for low-budget / no-budget indie and documentary/factual content; occasionally a feature film shoots one or two shots at such a high ISO that one needs the extra stop if it's available. For cellphones and reality content it should be useful, though, as people want to shoot those in total darkness whenever there is something "interesting" they could see with their bare eyes.

I regularly shoot material for a "very low budget cinema release" type of end result at ISO 12800, sometimes using only a couple of LED tubes dimmed down to a few percent, and that is with a lower-end camera + recorder combo. Most people hesitate to go over ISO 2000 or 3200 with any video/cinematography camera when lights are available, so I think a small sensitivity boost would not make much difference in cinematography use: most modern cameras, even lower-end ones, are either sensitive enough or even too sensitive for normal cinematography applications, and people would not actually use the extra sensitivity in the end.


  • Premium Member
15 minutes ago, Phil Rhodes said:

The problem with doing that is that the colour filters are put on modern camera sensors as part of the manufacturing process, usually when the sensors are still on the wafer they were built on. […]

I meant a simulated filter array: take an existing b/w sensor and stack some filter materials in front of it to see how they would behave colour-wise. Just something to show the filtering technology before actually making a test sensor with it. Funding is always an issue, so any crude physical prototype at all would help raise money for making actual test sensors, and for building a test camera sometime later.

So, just a crude technology demo, so that people would believe it is something worth investigating and testing further.


Color filters are typically deposited during manufacturing, but it's usually done at a separate facility from the one that produces the silicon sensor. It's a simple and inexpensive process. Our partner companies have handled this for us in the past.

However, this time we're faced with the challenge of using new color filter materials. We've spoken with a leading manufacturer of color filters, and they don't anticipate any technical difficulties in producing the filters we need. Of course, they'll need a business case before proceeding, and that's what I'm currently working on.

Our specialty is image processing, including the critical demosaicking step. We have developed and tested this part through simulations. Because processing has been the bottleneck in the past, we believe we have de-risked the LMS design; the rest should mostly be straightforward engineering work.
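To give a feel for what a demosaicking step does, here is a deliberately naive sketch: generic bilinear-style interpolation on a standard RGGB mosaic, written from scratch for illustration. This is NOT our algorithm, just the textbook baseline showing the data layout a demosaicker starts from.

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Naive demosaic of an RGGB mosaic: estimate each channel at every
    pixel as the average of that channel's samples in the 3x3 window."""
    h, w = mosaic.shape
    planes = np.zeros((3, h, w))              # sparse per-channel samples
    masks = np.zeros((3, h, w))               # where each channel was sampled
    layout = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}  # RGGB
    for (dy, dx), c in layout.items():
        planes[c, dy::2, dx::2] = mosaic[dy::2, dx::2]
        masks[c, dy::2, dx::2] = 1.0

    out = np.zeros((h, w, 3))
    pad = lambda a: np.pad(a, 1, mode="edge")
    for c in range(3):
        p, m = pad(planes[c]), pad(masks[c])
        num = sum(p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(m[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        out[:, :, c] = num / np.maximum(den, 1.0)
    return out

# A flat gray scene should survive demosaicking unchanged.
gray = np.full((4, 4), 0.5)
print(bilinear_demosaic(gray)[0, 0])          # [0.5 0.5 0.5]
```

A production pipeline replaces this simple averaging with far more sophisticated estimation; the point here is only the structure of the problem: two of three channels are missing at every pixel.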
 

