
Gamut of Reversal Stock



Chaps,

 

Working a lot with Ektachrome 100D (7285) and Velvia 50D, I keep realizing what an incredible color space these modern daylight reversal stocks have. When scanning them, it often becomes obvious that their gamut is well beyond what the scanning CCD/CMOS can actually reproduce. Clipping in some channels appears all the time, and very easily.

Besides the challenge of reproducing frames of this stock with an appropriately wide gamut -- are there ICC profiles for Kodak/Fuji stocks available?

This practically begs for proper color management, but gamut mapping is not really possible without such profiles and a defined rendering intent.

 

I haven't found anything on the Kodak website. Am I missing something?

 

(And yes, projecting such wide-colorspace film is always the best choice, but the limited abilities of scanners are, or should be considered, an issue.)



 

Hi Friedemann,

 

this is an interesting question.

 

The technical sheets provided by Kodak allow one to reconstruct the colour signal propagation in terms of light frequencies - but of course, what also matters is the spectrum of the light source used during scanning. Without a profile for that, it's not possible to complete the entire pipeline.

 

But let's follow the diagrams in the tech sheets for 100D as far as they take us.

 

In the Kodak technical sheet ( http://motion.kodak.com/motion/uploadedFiles/TI2496.pdf ) there are two diagrams on page 4:

 

F002_1045AC (spectral sensitivity curves)

F002_1046AC (spectral dye density curves)

 

The first describes the response of each layer in the emulsion to the incoming light. So for example, the "yellow forming layer" has its peak at around 440nm. But 440nm is blue light? Well, of course, the film is reversal, meaning that it eventually undergoes reversal of this negative response. So where there is a peak in the "yellow forming layer" there will be a corresponding trough in the reversed result, i.e. for blue light (440nm) there will be next to no yellow dye in the result. As there should be.

 

The second describes the frequency of light that each dye absorbs. From the Kodak tech sheet:

 

The curves depict the spectral absorptions of the dyes formed when the film is processed. They are useful for adjusting or optimizing any device that scans or prints the film.

 

The graph to which this comment refers describes what frequency of light each dye absorbs at a particular density. For example, from the chart, the yellow dye at peak density can be read as absorbing a narrow band around 440nm. Which would be correct, because the yellow dye absorbs (blocks) blue light, thereby letting through yellow light.

 

So in principle, we could digitise these graphs and prepare a lookup table that maps incoming frequencies to outgoing frequencies. Since the curves are not exactly the same, there is a bias there, which a lookup table can capture. It would be good if Kodak published a list of numbers in addition to the graphs. Much easier to enter into the computer.
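As a minimal sketch of that digitisation step: hand-read a handful of (wavelength, response) points off a chart and densify them by linear interpolation into a 1 nm lookup table. The sample values below are invented for illustration, NOT taken from the Kodak sheet:

```python
# Hypothetical sample points hand-read off a chart: (wavelength in nm,
# relative response of the yellow-forming layer). Invented values for
# illustration only -- not from the Kodak tech sheet.
yellow_layer = [(400, 0.55), (420, 0.80), (440, 1.00), (460, 0.75), (480, 0.30)]

def interp(points, x):
    """Linearly interpolate through a sorted list of (x, y) sample points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Densify the hand-read points into a 1 nm-step lookup table.
lut = {nm: interp(yellow_layer, nm) for nm in range(400, 481)}
print(lut[430])  # halfway between the 420 and 440 samples
```

With denser hand-read points (or a spline, see below) the interpolation error shrinks toward the +/- half-line-width reading error of the chart itself.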

 

Anyway, what we don't have is a description of the light source used during digitisation (what an old-sounding word that is) - what frequencies it generates. Without that, we don't know how much each dye layer is being illuminated.

 

But we can measure the scanner's light source (without any film) in terms of the RGB values it gives when digitised. And using any tech specs about the camera, we can map those RGB values back to frequencies, back to dyes, back through the lookup table, back to the original frequencies ... and tweak accordingly (correct for the bias in the lookup table).

 

Can't forget the 80A filter.

 

Carl


You are right, the spectral density/sensitivity charts give a hint. And they do show that Kodak is actually measuring this :)

I just wonder why they do not provide ICC profiles. I know I have ICC profiles for some Fuji Photo Paper.

 

Regarding the "scanner light" -- it's not just the light but also the gamut of the CCD (or CMOS) that matters. I mean, that is why color management exists -- but you need a profile for each device/part/medium used in your workflow. I do have profiles for my scanner, my camera, my monitors. I do not have a profile for any film stock.

 

I just wonder why this is not common. Digitising analogue material happens all the time, so providing ICC profiles for the analogue media would definitely help. Well, usually the analogue medium's gamut is smaller than the digital one's, but definitely not in the case of slides and reversal film.



 

Yes - the Kodak spec sheets are an ICC profile of sorts, but insofar as they are embodied in a graphic, they are not very accessible profiles - and their accuracy is unknown. To what accuracy did the graphic designer translate the data? A good profile would provide a lookup table or an equation for generating the curves. Note that the dye curves are smooth, which I assume is because each dye is a constant - so they could be represented by a mathematical equation for generating the curve. But a lookup table would probably be the most likely way the sensitivity chart would need to be represented.

 

Perhaps this thread is the beginning of a project to write up the ICC profile for the pipeline.

 

I'll need this same info in due course, myself, for my own work. I purchased a roll of 100D a few weeks ago (and one of the last rolls of Plus-X, by the way).

 

The interesting thing here is that in the absence of accurate profiles (unless someone reveals otherwise), the only way to put one together is to do the measurements/experiments ourselves. But gosh - experimenting with Super8 around here can be interpreted by special people as if we were experimenting with nuclear energy.

 

:)

 

Carl


What we can do is make the assumption that the graphics in the Kodak pdf are exact. And then test that assumption.

 

The first task is to take the charts into a vector program and hand-trace the curves. We can assume that the centre of each physical line in the graphic is where a precise sample point should go, with an error margin of +/- half the thickness of the line (because one will be hand-placing the vector points).

 

We can then draw a spline through those points and tweak the spline until it matches the centre of the curve along its entire length.
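One way to sketch the spline step: a Catmull-Rom spline passes exactly through the placed points, which suits hand-traced data. The y-values below are invented control points, purely for illustration:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom spline segment at t in [0, 1].
    The curve passes exactly through p1 (at t=0) and p2 (at t=1);
    the neighbouring points p0 and p3 shape the tangents."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Hypothetical hand-traced y-values at equally spaced wavelengths.
points = [0.10, 0.45, 0.90, 0.60]

# Sample the middle segment (between the 2nd and 3rd point) densely.
dense = [catmull_rom(*points, t / 10) for t in range(11)]
```

Because the spline interpolates rather than approximates, tweaking a control point moves the curve locally, which matches the "nudge it until it sits on the printed line" workflow described above.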

 

On top of that can be put a GUI for looking up particular signals in the digital curves and testing those curves against known signals.

 

And then publish the results in terms of standards such as ICC profiling.

 

Carl


Well, I can help you with drafting the Plus-X color space ;)

 

I guarantee you that your EOS will not be able to capture all of E100D's gamut without multiple exposures and some merging work. My colorimeter ran into its limits, as did my 5D Mk II.

 

Your approach of tracing the curves to get to a lookup table sounds promising, but it's too mathematical for me. I'd be glad and curious, though, to see the ICC profiles and test what they would do in a color-managed workflow.


There is an incredible amount of irony in trying to set up an ICC profile for 8mm film that is not processed to spec. Even Kodak standards dictate a tolerance of plus or minus a sixth of a stop (0.05), and film that is processed in incorrect chemistry would need its own unique ICC profile, as would anything cross-processed, expired film, or even film that has suffered slight fading.

 

 

That being said, I think it would be a good profile.

 

I don't have any experience in this area, Friedemann, but I do have, somewhere on this machine, a profile for Kodak color photographic paper. If you take into account the differences between reflectance and transmittance densitometry, and compare the characteristic curve to the ICC profile, maybe you'd have something to go on when looking at the characteristic curves of reversal film...

 

I'll see if I can find this over the weekend. Good luck with this, and let me know if you ever come up with one for some of the negative stocks.


Your approach of tracing the curves to get to a lookup table sounds promising, but it's too mathematical for me.

No probs. I'll do that. We can then connect the film profile to your scanner profile (and mine) and see what happens.

 

C


Karl, I totally agree -- this film would certainly need to be developed as accurately and as close to E-6 compliance as possible. I have recently tested a lot of pro labs and was blown away by the differences in the results (all test cartridges had the exact same content and were from the exact same batch, exposed within the same hour).

 

BTW, I think the ICC profile will not differ between 8mm and 16mm as long as the master roll was the same. Actually, 16mm (or even 35mm) stock might be much easier to handle with some of the needed measurement devices. :)

 

ICC profiles for negative stock are an interesting idea that kind of turns my brain into a pretzel. I am currently not sure if the ICC profile should represent the colors achievable after printing (and thus inverting again) or before. Probably the latter, otherwise it would need a mix of two profiles. Actually, such "inversion" ICC profiles should be possible. This test JPEG with an odd ICC profile embedded (try it in different viewers) almost does this.


Note that Kodak is one of the founding members of the ICC.

 

We need to keep in mind that film can't be directly defined in terms of ICC profiles. An ICC profile preserves a rendering intent as a profiled signal is viewed on different output devices.

 

To put it another way - the transfer of light, to film, back to light is not under digital control. We can't use an ICC profile to manage this transformation. We can use a longer time in the stop bath, etc. But not an ICC profile.

 

So the task is not to describe film in terms of ICC profiles per se, but to define an ICC profile that says how particular light, filtered by a particular film, falling onto a CMOS sensor, and written to a digital file, should be rendered when viewed on a particular output device.

 

The answer to this question becomes the profile.

 

Carl


To clarify the situation with an example.

 

Let's suppose you offer a service whereby you process film according to a particular recipe in which the film comes out "too red". The question here is what determines that description of "too red". Is it the process? Well, no. What if it is meant to look like that? If so, then the image is not "too red". If the way it looks is the way it is meant to look, then the rendering intent has been satisfied and the profile is simply: render these RGB values the way you are already doing so.

 

But if the rendering intent is that it should not have as much red as it does (when viewed on a particular output device), then it needs a different profile. It needs a profile that says: reduce the red by a certain amount. Or flip the greens to blue. Or whatever.

 

The profile can't define the rendering intent. It is the rendering intent which defines the profile. As the digital file moves from one output device to another the profile ensures the rendering intent is preserved.

 

Carl


Carl, I am not a super-guru on color management, but isn't the basic idea of individual profiles for each step/device/medium that they do NOT interfere with or influence each other?

 

Two things:

1.) Of course the stock needs to be developed in a defined environment. As Karl outlined, it needs to be as close to reference E-6 as possible. Since processing is fairly reproducible, the development will not introduce massive deviation.

2.) Anything else (like the projection/scanning light spectrum, CMOS gamut, etc.) should not be part of that profile. Au contraire, everything else needs its own profile.

 

I have seen ColorSync profiles for monitors, scanners, cameras, printers and various kinds of paper. Just as those profiles for papers represent which colors the paper can actually show and which it cannot, we need profiles for the film stock. What is its Dmax, how red can its red be, etc.


1.) Of course the stock needs to be developed in a defined environment. As Karl outlined, it needs to be as close to reference E-6 as possible. Since processing is fairly reproducible, the development will not introduce massive deviation.

 

The ICC profile is designed to allow creative work produced in one context to look the same in another context.

 

For example, if I paint a picture on my computer and save it as a digital file, I could find it looks completely different when opened on another computer system. The data itself hasn't changed, but the way in which the data is rendered has changed.

 

But with ICC-managed data this can be handled in such a way that it doesn't happen. When working within an ICC-managed workflow, your rendering intent becomes known because the conditions in which you are creating the picture (looking at a computer screen and selecting icons) have been previously established in a profile. As the data moves around, the embedded profile can connect with other device profiles and ensure the data is rendered the same when viewed in another context.

 

2.) Anything else (like the projection/scanning light spectrum, CMOS gamut, etc.) should not be part of that profile. Au contraire, everything else needs its own profile.

 

The pre-digital pipeline of film/processing/projection doesn't exactly fit the ICC model of a managed pipeline. You can model the film/processing/projection pipeline, but the only way to manage variations in that pipeline is to do so manually. Or stick to a fixed pipeline (reference E-6, etc.). But the rendering intent remains undefined.

 

A model is needed when transferring information from the non-digital domain of film into an ICC managed pipeline. The model needs to take into account the purposes of ICC on the one hand and the information entering that pipeline on the other. Of particular importance is the rendering intent. It is not yet defined.

 

So we have to establish that model and rendering intent. One way of doing so is to (a) assume a reference E-6 process, and (b) assume the intent is how that referenced process looks when projected in a dark room. But we still don't have a profile until we define how that intent will be accomplished. Once defined in terms of light, CMOS, etc., we can encode it as a device profile that allows the transferred data to pass through viewing nodes and render as intended.

 

The simplest way of doing this is to just manually adjust the render of the data to fit the intended colour, using an appropriate ICC profiling tool.

 

 

 

Carl


The simplest way of doing this is to just manually adjust the render of the data to fit the intended colour, using an appropriate ICC profiling tool.

 

Note that this is just a one-off procedure to create the profile. It is not something you have to do every time. Once the profile is established, all subsequent film signals entering the digital domain via this profile will have the same default look (+/- any minor variations in processing).

 

Now the "rendering intent" - in the ICC sense of the term - is not about the above process as such, but about the rendering intent of a colorist working on the results of the above process.

 

The "rendering intent" used to establish a profile in the first place is simply to "bootstrap" the profile to a default intent.

 

After that, a colourist, working on the data, using ICC software (where the software uses the created profile), defines what ICC means by "rendering intent".

 

Now, the colorist may not do anything. Eventually the data will be rendered back to film, or to a photographic print, or to YouTube, etc. And if those rendering devices are ICC-enabled, the signal will reproduce as the colourist intended - and if the intention of the colorist is to do nothing to the data, then the data will render according to the default intent established when the profile was originally created.

 

Carl


Hi Friedemann,

 

There is a summary of creating ICC profiles here:

 

http://www.color.org/creatingprofiles.xalter

 

The last paragraph of which is:

 

For developers, there are a number of publicly-available source code libraries, such as SampleICC, which include tools to generate profiles. SampleICC includes iccCreateCLUTInputProfile, a utility that will generate an ICC profile from the data for a 3D look-up table.

 

The 3D look-up table mentioned above is discussed here:

 

http://en.wikipedia.org/wiki/3D_LUT

 

This LUT is basically what I've been discussing - defining a mapping between input data (e.g. raw image data from a particular capture system) and how it should render by default on a particular output device (e.g. a computer screen).

 

Once this LUT-to-ICC-profile step is done, all subsequent data through that same profiled input system can be viewed on any ICC-profiled output device and look the same.

 

So the task boils down to defining this LUT, which is to say: mapping the light > filter > film-processing > light-film-projection > CMOS > raw-data chain to a particular rendering intent (be it ICC or some other colour management system).
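For concreteness, here is a minimal sketch of how a 3D LUT of the kind mentioned on that Wikipedia page is applied: trilinear interpolation between the eight grid nodes surrounding an input RGB triple. The table below is an identity LUT, purely for demonstration; real profiles typically use 17 or 33 nodes per axis:

```python
N = 5  # grid nodes per axis (illustrative; real profiles often use 17 or 33)

# Identity LUT for demonstration: each grid node maps to its own coordinates.
lut3d = [[[(i / (N - 1), j / (N - 1), k / (N - 1))
           for k in range(N)] for j in range(N)] for i in range(N)]

def apply_lut(lut, rgb):
    """Look up an RGB triple (components in [0, 1]) in a 3D LUT,
    trilinearly interpolating between the 8 surrounding grid nodes."""
    n = len(lut) - 1
    def split(v):
        x = min(max(v, 0.0), 1.0) * n
        i = min(int(x), n - 1)
        return i, x - i
    (i, fr), (j, fg), (k, fb) = split(rgb[0]), split(rgb[1]), split(rgb[2])
    out = []
    for c in range(3):
        acc = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((fr if di else 1 - fr)
                         * (fg if dj else 1 - fg)
                         * (fb if dk else 1 - fb))
                    acc += w * lut[i + di][j + dj][k + dk][c]
        out.append(acc)
    return tuple(out)
```

A real film profile would fill the grid with measured output values instead of the identity, and a tool such as the SampleICC utility quoted above would wrap that table into an ICC profile.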

 

Carl


I have seen ColorSync Profiles for Monitors, Scanners, Cameras, Printers and various kind of papers. Just as those profiles for papers represent what colors the paper can actually show and what not, we need profiles for the film stock. What is its Dmax, how red can its red be etc.

 

All of these are digital devices. Paper might seem an odd thing to call a device, but it's just the rendering context for printers: the printer's screen.

 

But film is just data. In the same way that you can't observe data on its own, you can't observe film on its own. You can only observe how film is being rendered by a device such as a projector/screen, or sunlight/retina, or LED/CMOS. Digital image data can't be observed on its own. You can only see it in terms of how it is being rendered by a device such as Photoshop.

 

So you have to look at the way the film is being rendered by a particular device. In our case the device is one we have assembled ourselves. We are the manufacturer. And like manufacturers, we can elect to create a profile for our device that conforms to colour management systems such as ICC.

 

Now, as far as I can tell, the Diffuse Spectral Density chart is effectively the output 'profile' for the film in terms of light, and the Spectral Sensitivity curves are the input 'profile'. The Spectral Density chart shows the range of light frequencies that each dye, at a certain density, absorbs. Or put another way, it shows which light frequencies are transmitted.

 

(spectral dye density chart)

 

For example, let's suppose we have an area of the film in which the magenta dye density is 0.5 and the other dyes are near zero density. Then according to the graph (if I'm reading it correctly), that area of film would absorb light of a frequency between about 500 and 600 nm (a green colour) and therefore transmit the remaining frequencies: below 500nm (blue) and above 600nm (red), the sum of which is magenta (red + blue = magenta).

 

We need to characterise what range of light, transmitted by the film, is captured by the CMOS-to-RGB device, given a known illuminant and the response of the CMOS to that illuminant. We can marry all this information into a single profile because there is no need to do otherwise. The system is a fixed system. The profile will characterise the system. If the profile is ICC-compliant, then we don't have to worry about characterising any other ICC-compliant parts of the system (downstream from this component), as they will already have a profile.
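The dye-density reading above can be sketched numerically. A dye of density D transmits 10^(-D) of the incident light (the Beer-Lambert relation that the density scale encodes), so the light reaching the sensor in each wavelength band is the source power times that transmittance. All numbers below are invented for illustration, NOT read from the Kodak chart or any real lamp specification:

```python
# Illustrative spectral data at coarse 50 nm steps. The density and
# illuminant numbers are invented for the sketch -- not from the
# Kodak sheet or a real lamp specification.
wavelengths     = [450, 500, 550, 600, 650]          # nm
magenta_density = [0.05, 0.45, 0.50, 0.10, 0.02]     # peaks in the green
illuminant      = [0.90, 1.00, 1.00, 1.00, 0.95]     # relative power

# Density is a log scale: a dye of density D transmits 10^(-D).
transmittance = [10 ** -d for d in magenta_density]

# Light actually reaching the sensor in each band: source power times
# what the dye lets through. Green is suppressed; blue and red survive,
# which is exactly what "magenta" means.
reaching = [p * t for p, t in zip(illuminant, transmittance)]
```

Replacing the invented illuminant row with a measured spectrum of the actual scanning lamp, and summing `reaching` against each CMOS channel's sensitivity curve, gives the per-channel raw values that the profile has to characterise.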

 

Carl


Thank you so much for all this insight, Carl. I read it all carefully and learned and understood a lot. You are explaining really well!

 

One thing remains unclear to me: how can we assure that our capturing device won't clip the film's signal? Isn't it our retina that has the widest gamut? At least, our eyes are always the final receiving device (and highly subjective). I know we can't see IR but a CMOS usually does, so our retina's gamut is not always superior. But since reversal film is made for straight projection, it is designed for the human retina's gamut (unlike e.g. Kodak's VNF or Vision stocks). So ideally, our digital retina equivalent should differentiate at least as much as the human retina (and brain) can. This has to be assured, otherwise we would or could lose data.

You know that there are shades of color that cannot be represented by RGB but only by CMYK. Film uses the latter; all sensors I am aware of use a variation of RGB. How can this be solved? By multiple scans in subsets of the spectrum?


I was able to find an ICC profile for Kodak Supra Endura (a color paper recently replaced by a newer version, but the tech sheet is probably still on the Kodak website).

 

It's for OUTPUT, not input, but if you look at the characteristic curves and compare them to the ICC profile, maybe you can reverse-engineer it.

 

Give me your e-mail address (you can PM it to me if you wish) and I will send you the profile. It's quite small but I obviously can't upload it here...


One thing remains unclear to me: how can we assure that our capturing device won't clip the film's signal? Isn't it our retina that has the widest gamut? At least, our eyes are always the final receiving device (and highly subjective). I know we can't see IR but a CMOS usually does, so our retina's gamut is not always superior. But since reversal film is made for straight projection, it is designed for the human retina's gamut (unlike e.g. Kodak's VNF or Vision stocks). So ideally, our digital retina equivalent should differentiate at least as much as the human retina (and brain) can. This has to be assured, otherwise we would or could lose data.

You know that there are shades of color that cannot be represented by RGB but only by CMYK. Film uses the latter; all sensors I am aware of use a variation of RGB. How can this be solved? By multiple scans in subsets of the spectrum?

 

Our eyes/brain are the final recipient, but keep in mind that reversal is designed to be projected by a tungsten illuminant - so although our eyes can see a wide range of colours, there isn't that wide a range being projected. The available range of colours is defined by the illuminant (the projector bulb) - not the film. The film does nothing other than remove light from the illuminant. The remaining light is less than the full range issued by the illuminant.

 

So all we need to do is record the illuminant we're using and see how the CMOS/RGB responds to it. Regarding RGB: when I talk about RGB, I'm simply talking about the three data channels in which the CMOS response is recorded, rather than "colour" as such. It is just three numbers per sensor pixel. One could call them XYZ or ABC. For the moment we can just think of them as having no meaning other than representing what the CMOS/camera chip wrote to memory in response to the illuminant.

 

Now we can assign a meaning to those numbers by noting how they differ from each other with respect to our illuminant. Ideally the three numbers should be the same. If not, then it means the CMOS is more sensitive to one component of the light than another. And this is where clipping can occur.

 

Let's suppose the numbers are 8-bit (for argument's sake) and that R=255, G=200 and B=128 (to be dramatic).

 

Then it means that if we were to increase the exposure in order to bring out more blue, the red would clip because it has nowhere to go beyond 255.

 

One solution is to digitally increase G and B (rather than increase exposure) but then we're not exploiting the full range of the sensor for those channels.

 

The correct solution is the one you've suggested - multiple exposures. You don't necessarily need filters, because you can just copy the "good channel" from each exposure into a new file.
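A minimal sketch of that merge, assuming two scans of the same frame at a known exposure ratio (the pixel values below are invented, and 8-bit is used only to keep the numbers small):

```python
CLIP = 255  # 8-bit for illustration; the same idea applies at 14 bits

def merge(short_exp, long_exp, ratio):
    """Merge two exposures of the same pixel into unclipped linear values.
    long_exp received `ratio` times the exposure of short_exp. Use the
    long exposure where it is unclipped (better signal-to-noise), scaled
    back to the short-exposure scale; fall back to the short exposure
    where the long one has clipped."""
    merged = []
    for s, l in zip(short_exp, long_exp):
        if l < CLIP:
            merged.append(l / ratio)
        else:
            merged.append(float(s))
    return merged

# Hypothetical R, G, B readings of one pixel from two scans:
short = [60, 10, 32]    # short exposure: nothing clips
long_ = [255, 41, 128]  # 4x exposure: the red channel has clipped
print(merge(short, long_, 4))
```

The merged values no longer fit an 8-bit integer scale, which is why the result has to be written into a higher-bit-depth (or floating-point) file.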

 

The only thing left to do after that is perhaps tweak the curves on each channel.

 

Carl


The available range of colours is defined by the illuminant (the projector bulb) - not the film

The projector bulb alone has a wider spectrum than my eye, right? It does emit IR for example.

The film does nothing other than remove light from the illuminant. The remaining light is less than the full range issued by the illuminant.

..but the remaining light might still have a color that is not reproducible by an RGB model.

When I talk about RGB I'm simply talking about the three data channels in which the CMOS response is recorded rather than "colour" as such. It is just three numbers per sensor pixel. One can call them XYZ or ABC

This still implies that the CMOS's gamut can capture all colors that "Tungsten Bulb minus Film" can put out. I still doubt that.

Then it means if we were to increase the exposure in order to bring out more blue, the red will clip because it has nowhere to go beyond 255.

See this image:

(histogram screenshot of the raw scan, showing the red channel clipping)

This is an unprocessed raw file, with 14 bits per color channel, of a single E100D frame (exposed and developed perfectly). It is post-debayering, but it shows one thing for sure: the red is so red that it clips. At the same time, the shadows in all three channels clip already. If you underexpose further to show all red detail, you lose even more signal in the shadows. If you overexpose to rescue the shadows, your red will clip horribly.

 

Interestingly, this frame looks just perfect when projected. You can see a lot more detail in the shadows and full detail in every single blossom. It occurs to me that not only the gamut, but also the dynamic range of modern reversal film is a problem for even the best CMOS in the world.

 

As we agree, only multiple merged exposures can help here.

 

The only thing left to do after that is perhaps tweak the curves on each channel

 

What? Isn't tweaking curves here too early a modification of the signal? And shouldn't "tweaking curves" happen solely through the "conversion" to the gamut of the next piece in the chain?

 

BTW, a really interesting read:

http://www.gamutvision.com/docs/camera_scanner.html


The curve tweaking (apart from for a quick image) is done in the context of ICC profile creation. The data itself is not altered. The data stays the same as it propagates through the chain of renderers. But each renderer needs to know how to render the data - and that is what you establish when "tweaking the curves" in the context of ICC profile creation.

 

The illuminant does emit IR - that's true, but we can ignore that. The CMOS response to IR will be very low - otherwise we'd see it in the CMOS-generated digital image (infrared photography!).

 

The CMOS sees what it can see. I wasn't suggesting it could see the full range of the illuminant. The purpose of recording the illuminant is to see what it does see. If it can't see the full range of frequencies produced by the illuminant in a single exposure, then multiple exposures (at different exposure levels) are the solution.

 

There could still be frequencies it can't see no matter how many exposures you take. Not much we can do about that. :)

 

Now, looking at your captures, it becomes obvious that the intensity range of the illuminant (per channel) must exceed the range the CMOS can capture in a single exposure. The only solution there is multiple exposures and merging of the data into a higher-bit-depth data structure.

 

Carl


According to the author of Gamutvision:

 

It's not easy to calculate (or even define) total camera gamut, but we can make an important observation: digital cameras can detect all visible colors (roughly 380-720 nanometers), and since each monochromatic (pure spectral) color appears to have a unique (R:G:B) ratio, all visible colors can be distinguished, and are potentially inside the camera's gamut. But the actual camera gamut depends on details of the implementation that can't be extracted from the ICC profile.

 

So, assuming that's correct, it is only the dynamic range - rather than the frequency response - that need concern us. How many exposures are needed to capture the dynamic range? I'm not entirely convinced one needs more than one exposure. If you drop the exposure time, i.e. bring the red channel down a little from its maximum, are details in the blacks really going out of range, or is it just that you can't see image details on your computer monitor?

 

A single computer-monitor image is a poor representation of the actual data. You need to vary the curves (temporarily) to see if the lower-exposure version really is losing detail at the dark end.

 

Carl


You know, the more I think about it, the more I realise we're misreading the histogram.

 

The y-axis of the histogram is the number of pixels with the corresponding x-axis value.

 

Which means the histogram appears to be suggesting that 100% of pixels (y-axis) have a value where red is near or at maximum.

 

Which is impossible.

 

So it's simply that the histogram has been normalised.

 

We're conflating the meanings of the x and y axes.

 

Carl

