
Supervising colorspaces in different parts of the chain.

As I understand it,

Digital Cinema Projections are in P3 colorspace,

and TV/DVD/Web is rec.709 colorspace.

Then, as for the workflow.

  1. Images are captured in colorspace X (depending on the camera/stock)
  2. In the DI, images are converted to another colorspace, let's just say ACES
  3. In the master, the images are then converted into either P3 or Rec.709, depending on the format.

My questions are,

 

1. How do we know which colorspace we are exporting to? E.g., what colorspace do H.264, ProRes 422 (HQ) or ProRes 4444 have? Can you recommend any places to read up on this?

2. And how do we ensure colors stay within the intended colorspace, so that we don't lose out-of-gamut colors when exporting?

 

3. I often use a Kodak 2383 emulsion LUT when grading, which clips out-of-gamut colors and remaps them to the closest color of the same luminance. I suppose the gamut/colorspace is calculated from what can be printed to Kodak 2383 film. Do you know of any way I can see/test how wide this colorspace is compared to P3 and Rec.709?

 

4. I work with dailies & grades in Photoshop as references for the colorists. Can you recommend a website where I can read about color management in Photoshop, so as to stay within gamut?

 

5. Would you recommend any specific image format (DPX, TIFF, PNG, etc.) for colorspace handling and accurate color rendition?

Thanks guys,

Will

 


 

 

Digital Cinema Projections are in P3 colorspace,

and TV/DVD/Web is rec.709 colorspace.

Broadly, yes. Computer monitors are sRGB devices. The colour primaries are the same as Rec. 709, but the gamma handling is slightly different. Generally it will be watchable; Rec. 709 material shown on the web will look too bright in the shadows.
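
To put a number on that shadow difference, here's a minimal sketch (my own illustration, not anything from a standard) comparing the sRGB decoding curve with a plain 2.4 power law, which is roughly what a Rec. 709/BT.1886 grading monitor does:

```python
# Compare sRGB decoding with a plain 2.4 display gamma near black.
# Same code value, different light output: sRGB comes out brighter in the shadows.

def srgb_to_linear(v):
    """Piecewise sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def gamma24_to_linear(v):
    """Simple 2.4 power law, close to a BT.1886 reference display."""
    return v ** 2.4

for code in [0.05, 0.10, 0.20, 0.50]:
    s, g = srgb_to_linear(code), gamma24_to_linear(code)
    print(f"code {code:.2f}: sRGB {s:.4f}  gamma 2.4 {g:.4f}  ratio {s / g:.1f}x")
```

The ratio shrinks as you go up the curve, which is why it's mostly the shadows that look lifted.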

 

Images are captured in colorspace X (depending on the camera/stock)

In the DI, images are converted to another colorspace, let's just say ACES

In the master the images are then converted into either P3 or Rec.709 depending on the format.

Yes, that would be fairly normal. For TV, though, it's just as common to shoot 709, grade it as 709 and output it as 709. Often cameras shoot various log formats, and colour correctors handle them in various ways internally.

 

What colorspace does H.264, ProRes422 (HQ) or ProRes4444 have?

Anything you like. The XAVC-I codec used by cameras such as the Sony FS7 is fundamentally H.264 and that camera can record SLog3/SGamut3, or 709, or via a user LUT, or whatever you want. Codecs just store pixel values; what those pixel values represent is up to you. Theoretically this should be encoded into the file, but it often isn't, and if you're using some user LUT to shoot with, the ball is entirely in your court. Write it on the slate.
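
If you want to see what a given file claims about itself, ffprobe will report the colour tags when they exist. A rough sketch, assuming you have FFmpeg's ffprobe on your path; "clip.mov" is just a placeholder, and as above, plenty of files simply carry no tags at all:

```python
# Ask ffprobe what colour metadata the first video stream claims, if any.
# Untagged or partially tagged files report "unknown" for the missing fields.
import subprocess

def report_colour_tags(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_primaries,color_transfer,color_space",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip() or "no colour metadata reported")

report_colour_tags("clip.mov")  # placeholder file name
```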

 

And how do we ensure colors stay within the intended colorspace, so that we don't lose out-of-gamut colors when exporting?

Competent postproduction personnel!

But in all seriousness, if you're shooting raw, or log, or some other wide-luma-range format, part of the grading process is to work all that information down into the distribution format, so that's your job if you're grading it. Considering that your final export will almost certainly be for web or broadcast, you will always lose colours if you've shot log or raw, since almost all log or raw formats have a wider gamut than 709 or sRGB. The trick is how to do that without the loss being objectionable, which is a matter of skill and taste.
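
To make the "working it down" concrete, here's a rough numpy sketch of what happens to a fully saturated colour when it's squeezed from a wider gamut into Rec. 709. It derives the RGB-to-XYZ matrices from the published P3 and 709 primaries with a D65 white (standard colorimetry, nothing tool-specific; theatrical P3 actually uses a different white point), converts a pure P3 red, and then does the crudest possible thing with the out-of-range result -- a hard clip, which is exactly what a careful grade tries to avoid:

```python
import numpy as np

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    """Build a 3x3 RGB->XYZ matrix from primary and white chromaticities."""
    def xy_to_xyz(x, y):
        return np.array([x / y, 1.0, (1 - x - y) / y])
    P = np.column_stack([xy_to_xyz(x, y) for x, y in primaries_xy])
    scale = np.linalg.solve(P, xy_to_xyz(*white_xy))   # make R=G=B=1 hit the white point
    return P * scale

D65 = (0.3127, 0.3290)
M_p3  = rgb_to_xyz_matrix([(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)], D65)
M_709 = rgb_to_xyz_matrix([(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)], D65)

p3_to_709 = np.linalg.inv(M_709) @ M_p3

red_p3  = np.array([1.0, 0.0, 0.0])        # fully saturated P3 red, linear light
red_709 = p3_to_709 @ red_p3
print("as Rec.709:", red_709)               # components fall outside 0-1: out of gamut
print("hard clip: ", np.clip(red_709, 0, 1))
```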

 

I often use a Kodak2383 emulsion LUT when grading

I suspect from the name that this is a look-emulation LUT for prettiness, not a technical one, but it should still have been supplied with information about what input data types it expects and what it will output given that.

 

A LUT does not imply any particular colourspace, although it should usually have been designed for one. It should have been supplied with information about what it expects to receive, and what comes out of it.
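
On the original question of testing what a LUT like the 2383 emulation does: one crude but practical check is to read the LUT file itself and look at where its output values land. A minimal sketch, assuming a Resolve/Adobe-style 3D .cube file (plain text: keyword lines like LUT_3D_SIZE followed by R G B triples); the file name is just an example. It won't draw you a gamut plot, but it shows immediately how much of the cube the LUT is slamming into the 0/1 walls:

```python
# Minimal .cube reader: report the LUT's output range and how many entries clip at 0 or 1.

def read_cube(path):
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.split()[0][0] not in "-.0123456789":
                continue                    # keyword lines: TITLE, LUT_3D_SIZE, DOMAIN_MIN, ...
            entries.append([float(p) for p in line.split()[:3]])
    return entries

lut = read_cube("Kodak2383_emulation.cube")  # hypothetical file name
flat = [v for rgb in lut for v in rgb]
clipped = sum(1 for rgb in lut if min(rgb) <= 0.0 or max(rgb) >= 1.0)
print(f"{len(lut)} entries, outputs range {min(flat):.3f} to {max(flat):.3f}")
print(f"{clipped} entries ({100 * clipped / len(lut):.1f}%) touch the 0/1 boundary")
```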

5. Would you recommend any specific image format (DPX, TIFF, PNG, etc.) for colorspace handling and accurate color rendition?

Like moving-image codecs, they don't particularly care what you put in them. DPX probably has the most facilities for film and TV work, with regard to indicating in the file what luma encoding and gamut should be assumed when interpreting the data. It's also the least universally readable. PNG actually supports up to 16 bits per channel, though 8 bit may be fine for distributing quick offline grade demos. And it's readable over the internet by everyone, which may be more significant in your application.

P


Digital Cinema Projections are in P3 colorspace,

 

The color space in the DCI Specification (version 1.2, March 2008) contains all colors, organized in the very same XYZ space as the CIE defined in 1931. Therefore a DCP, which encodes X'Y'Z' (which become XYZ by applying gamma 2.6), can include all colors. If you're thinking that ACES was needed for that, look back at DCI.
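
For anyone who wants to see what that encoding looks like in actual numbers: the DCDM stores 12-bit X'Y'Z' code values, and the gamma 2.6 works roughly as sketched below. The 4095 code range and the 52.37 normalising constant are the figures from the DCI/SMPTE documents; treat this as an illustration, not mastering code.

```python
# DCDM-style 12-bit X'Y'Z' encoding: gamma 2.6 against a 52.37 cd/m2 normalising constant.

def encode_dcdm(tristimulus_cd_m2):
    """Absolute X, Y or Z value (cd/m2) -> 12-bit code value."""
    return round(4095 * (tristimulus_cd_m2 / 52.37) ** (1 / 2.6))

def decode_dcdm(code):
    """12-bit code value -> absolute X, Y or Z value (cd/m2)."""
    return 52.37 * (code / 4095) ** 2.6

reference_white_Y = 48.0                    # the 48 cd/m2 peak white luminance
cv = encode_dcdm(reference_white_Y)
print(cv, decode_dcdm(cv))                  # 3960, then back to ~48 cd/m2
```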

 

But, there are no projectors capable of displaying all colors. So DCI specified a minimum gamut for projection.

 

Minimum Color Gamut enclosed by white point, black point and

Red: 0.680 x, 0.320 y, 10.1 Y

Green: 0.265 x, 0.690 y, 34.6 Y

Blue: 0.150 x, 0.060 y, 3.31 Y

 

Therein you see the P3 primaries (the x, y values).

Eventually DCI ceded this specification to SMPTE. In SMPTE's recommended "D-Cinema Quality" RP 431-2:2011, the same P3 primaries appear:

 

7.8 Color Gamut

The cube in XYZ space defined by the black point and the following points expressed in Y,x,y

Reproduction of colors with a Y value above 48 cd/m2 shall not be required.

Red (11.90597, 0.6800, 0.3200)

Green (34.63657, 0.2650, 0.6900)

Blue (3.805772, 0.1500, 0.0600)

Virtual White (50.34832, 0.3190, 0.3338)

 

There are interesting differences between the two quotes.

First, the word "Minimum" has disappeared in the SMPTE version. This might imply that a projector that displays more than the color gamut, i.e., more of what can be encoded in the DCP, is disobeying the SMPTE recommendation. If so it removes an uncomfortable open-endedness in the DCI Specification, but also reduces the incentive for improved projectors.

 

Second, SMPTE calls the gamut a cube, but how can a cube be defined by just five points? SMPTE knows they are describing the output of a projector with three separately modulated primaries. The sixth point is the sum of the Red and Green primaries; seventh point the sum of the Red and Blue; eighth point the sum of the Green and Blue. (SMPTE didn't have to describe the white point because it is the sum of the Red and Green and Blue.) Eight points can describe a cube. But it's not a cube in XYZ space because the lines from the black point to the Red, Green, Blue points in XYZ space are not equal in length nor are they perpendicular. The gamut is some parallelepiped in XYZ space. SMPTE confused the clean and complete XYZ space with their expedient P3 space.

 

Third, SMPTE dumped DCI's (and also its own RP 431-2 (2006)) rather outré white point (x=0.3140, y=0.3510, which can be roughly calculated by summing the given primaries in the DCI quote above) for a saner white point, x=0.3190, y=0.3338. Yet the idea of a correct white point for a DCP is dubious. The "natural" white point for XYZ space is x=0.3333, y=0.3333. Many other white points may be chosen.
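
Both of those observations -- the parallelepiped shape of the gamut solid and the summed white point -- can be checked directly from the SMPTE numbers quoted above. A small numpy sketch, nothing more than the arithmetic:

```python
import numpy as np
from itertools import product

def Yxy_to_XYZ(Y, x, y):
    """Convert a Y, x, y specification to XYZ tristimulus values."""
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

R = Yxy_to_XYZ(11.90597, 0.6800, 0.3200)
G = Yxy_to_XYZ(34.63657, 0.2650, 0.6900)
B = Yxy_to_XYZ(3.805772, 0.1500, 0.0600)

# The eight corners of the projector's gamut solid: every on/off combination of the primaries.
# The three edges from black to R, G, B have different lengths and are not perpendicular,
# so the solid is a parallelepiped in XYZ, not a cube.
for r, g, b in product([0, 1], repeat=3):
    print(f"R{r} G{g} B{b}: XYZ = {np.round(r * R + g * G + b * B, 2)}")

# White is the sum of all three primaries; its chromaticity matches SMPTE's "Virtual White".
W = R + G + B
print(f"white: Y = {W[1]:.3f}, x = {W[0] / W.sum():.4f}, y = {W[1] / W.sum():.4f}")
```

Run it and the last line comes out at Y = 50.348, x = 0.3190, y = 0.3338 -- the Virtual White in the quote.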

 

The purpose of this long post was just to emphasize the difference between the XYZ color space of the DCP and the P3 space of DCP projectors. A video can be encoded in P3 primaries, but why bother? To save some bits versus XYZ encoding? The P3 will have to be reencoded as XYZ in the DCP.

Edited by Dennis Couzin

Phil, you mention that image containers don't care much about what you put in them. Do you then know why colors shift when exporting? I've had as much as 20° hue shifts, as well as luminance and saturation shifts, when exporting to JPEG, PNG, etc. from Photoshop. It feels like some gamut remapping is going on, but I can't say. ProRes 422 (HQ) usually delivers very satisfactory and precise colors.

Dennis, I suppose it would be a good idea to aim for a P3 gamut before re-encoding to XYZ. Working with the full XYZ colorspace and exploiting the wider gamut would be counterproductive if projectors cannot project those colors. Wouldn't you agree?


Keep in mind that digital cameras don't necessarily have an infinitely large color gamut anyway.

 

But generally for features you'd color-correct for the slightly larger P3 color space and then make a Rec.709 version from that.

 

I think ACES basically stores whatever color range the camera can deliver in the biggest bucket it can hold, but I'm not sure, and the idea is that you can then output a P3 or Rec.709 version from that.

 

In practice, you have to pick a color space to color-correct for, but the idea behind ACES is that you'd have a master where nothing was thrown away in terms of information that the camera could capture, in case in the future you had to color-correct again for a new standard like Rec.2020:

 

http://en.wikipedia.org/wiki/Rec._2020


I just wrote an article all about ACES which I hope will be published soon, but briefly:

 

It defines a colourspace: a triangle on a CIE diagram (a triangle because there are three RGB components) that covers the entire visual range. Because the visual area is not a triangle, it is not possible to define a triangle that covers all of it and also has real colours at its corners. In fact, the ACES blue actually has a negative CIE coordinate (it's not just blue, it's actively unyellow!). Because of this it is intrinsically impossible to make an ACES display, and ACES material will always have to be remapped into another colourspace for display. This is not the first time this has been done; the scRGB colourspace (a Microsoft/HP collaboration) and Kodak's ProPhoto RGB system also have non-realisable primaries. Other wide-gamut systems, such as Rec. 2020, have chosen to keep using real RGB primaries so that the image can - at least in theory - be directly displayed. In practice almost all (reasonable) devices do active colour management anyway.

 

The idea behind ACES is to be able to encode absolutely everything that could possibly exist, and then various acquisition and display devices will sit somewhere inside it. Because of this, the values need to be encoded with more precision and range than conventional 8 or 10-bit systems, and 16-bit half floats are used. To drift briefly into a bit of computer science, a 16 bit half float is a binary encoding of a number which trades precision for range as the value represented gets bigger, and as such is actually quite well suited to representing linear-light imaging values while only being twice the size of 8 bit data.
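
A quick illustration of that precision-for-range trade, using numpy's float16 (the same IEEE half-float layout):

```python
import numpy as np

# Half floats keep roughly three significant figures everywhere: tiny steps near zero,
# steps of 2 by the time values reach 2048, steps of 32 up around 65000.
for v in [0.0001, 0.5, 1.0005, 100.06, 2049.0, 65000.0]:
    h = float(np.float16(v))
    print(f"{v:>10} -> stored as {h:<12g} (error {abs(h - v):g})")
```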

 

 

 

Phil, you mention that image containers don't care much about what you put in them. Do you then know why colors shift when exporting?

 

In short, because the software doing the encoding or decoding is shifting the colours, or is adding or misinterpreting markers that indicate what the data in the file represents.

 

The container format is one thing, the codec is another (it's possible to create ProRes AVI files, and Quicktime will play them!). The data that gets given to the codec is out of its control.

 

 

 

I've had as much as 20° hue shifts

 

That's fairly unusual, I'd be interested in seeing how that happened.

 

 

 

luminance shifts

 

...which on the other hand is fairly common, alas. The most common circumstance is that one piece of software misinterprets the output of another as being either full range (0-255 in an 8-bit file) or studio range (16-235), or vice versa. Canon DSLRs were notorious for provoking this problem, although their output was actually correct and it was the NLEs that were making the mistake.
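
The arithmetic of that mistake is simple, which is why it happens so silently. A toy sketch of the conversion, and of what happens when already full-range material gets "expanded" a second time:

```python
import numpy as np

def studio_to_full(v):
    """Expand 8-bit studio range (16-235) to full range (0-255)."""
    return np.clip(np.round((v - 16) * 255 / 219), 0, 255)

codes = np.array([16, 64, 128, 200, 235])

# Correct: studio-range material expanded once for a full-range (computer) display.
print("expanded once :", studio_to_full(codes))

# The classic mistake: a second piece of software assumes the already-expanded file is
# still studio range and expands it again -- shadows crush towards 0, highlights clip at 255.
print("expanded twice:", studio_to_full(studio_to_full(codes)))
```

The opposite mistake -- leaving studio-range material unexpanded on a full-range display -- gives the washed-out, grey-blacks version of the same problem.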

 

 

 

saturation shifts

 

May be a side-effect of contrast changes due to full/studio range problems.

 

P


That's a good point. I guess the best we can do is to learn the limitations and faults of the software we use and find the best option to work with. Thanks for the DPX recommendation! I will run some tests.

On the 20° shift --
It is very unusual. I've only had it happen once -- well, actually it was 19°. In this image, the color in the lower left corner came out as hue 27° when I did the Photoshop test. After exporting and reimporting it read as hue 8°.

[Attached image: the test frame showing the warm-hue gradient and the shifted corner]


Dennis, I suppose it would be a good idea to aim for a P3 gamut before re-encoding to XYZ. Working with the full XYZ colorspace and exploiting the wider gamut would be counterproductive if projectors cannot project those colors. Wouldn't you agree?

 

 

It depends on what DCP projector makers do. Four and five primary projectors are inevitable. How can SMPTE make its P3 recommendation stick when home theatres are projecting larger gamuts? This diagram, based on CIE 1976 u'v', gives a better impression of what's missing from P3 than the usual CIE 1931 xy diagram.

 

 

Keep in mind that digital cameras don't necessarily have an infinitely large color gamut anyway.

 

The idea that digital cameras "have" a color gamut, and especially that some fine cameras have extra-large gamuts, is a confusion.

 

Digital cameras have the raw RGB data of their sensors, based on the RGB spectral sensitivities and the dynamic range. The camera makers, or whoever can access the raw data, can milk it to include reproduction of very high saturation colors. It's not like consumer digital cameras don't "see" such colors. Almost every camera's sensitivities include almost the whole visual spectrum, and, except toward violet (like below 460 nm) and toward far red (like above 640 nm), they can distinguish all individually produced wavelengths (by their differing R:G:B ratios). Also they can sometimes distinguish a spectral color, like a 500 nm blue-green, which is way outside everybody's gamut, from a 95% saturated version of that hue, also way outside everybody's gamut.

The key to understanding limited camera gamut is in that word "sometimes". Actual colors (except from lasers) consist of wide ranges of wavelengths, and unless the RGB spectral sensitivities are very smart, and the milking very appropriate, the camera will reproduce many actual colors poorly. Indeed the camera will make some colors that should look the same different and some colors that should look different the same. Otherwise a 3D LUT could straighten out the camera's color reproduction perfectly. So with every claim of color gamut a camera-maker owes a vouch of color accuracy. "Accuracy" is a tricky notion. A camera may purposely warp the visual color space, since a picture is seen differently from a scene, but warping isn't scrambling.

 

The relation between color gamut, high bit depth and low noise is very interesting. The spectral sensitivities of the human eye are fairly simple one-peak curves that are not difficult for a sensor maker to copy. But two of the human eye's three sensitivities are so near together as to create engineering problems for cameras. IBM dared to do it with their Pro/S3000 camera from around 2000. How camera makers choose their spectral sensitivities is a dark secret -- but the sensitivities themselves are easily measured and can't be kept secret. In all, the color gamut claimed is as large as the manufacturers dare to milk from their RGB data without embarrassing levels of color error.

Edited by Dennis Couzin

Phil, I'm afraid I don't have the PSD file anymore, but I can tell you what I did. I chose an array of warm hues from greenish yellow to magenta-ish red, as you see in the numbers above. Then I measured the color hue in degrees. I then darkened the color in a gradient with Levels using the midtone marker -- I expect it would be the same as using the gamma wheel in color grading software -- and used the color picker to measure the resulting color. When I exported I tried different image containers, JPEG, PNG, etc., and this one came out looking the most like what I had worked on in Photoshop. I don't remember which export settings. But when I took it back in, the hue had changed 19°.
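
One way to take Photoshop's own colour management out of the loop when chasing a shift like that is to read the exported file back with something deliberately dumb and compute the hue yourself. A small sketch using Pillow and the standard-library colorsys module -- the file names and pixel position are placeholders, and note it ignores any embedded ICC profile, which is rather the point:

```python
import colorsys
from PIL import Image

def hue_at(path, xy):
    """Return the HSV hue, in degrees, of one pixel read straight from the file."""
    r, g, b = Image.open(path).convert("RGB").getpixel(xy)
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

print("reference:", hue_at("reference.tif", (10, 500)))  # hypothetical master export
print("export   :", hue_at("export.jpg", (10, 500)))     # hypothetical delivery file
```

If the raw pixel values agree but Photoshop shows you something different, the shift is happening in colour management/display; if they disagree, it happened in the export itself.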

Dennis, those are some good points. I've never dug that deep into the science behind it. I guess I tend to focus on the more practical side. It's an interesting read.


I just wrote an article all about ACES which I hope will be published soon, but briefly:

It defines a colourspace, that is, a triangle on a CIE diagram that covers the entire visual range....

 

XYZ color space also defines a triangle on the CIE diagram that covers the entire visual range. No simpler virtual primaries can do this than those with chromaticities: x=1,y=0; x=0,y=1; x=0,y=0; corresponding to XYZ. I regard ACES as an engineering monstrosity conceived in jealousy over DCI's adoption of XYZ color space.

 

[Image: CIE xy chromaticity diagram with the XYZ triangle in red and the ACES triangle in blue]

 

The achievement of ACES was to reduce the red triangle of XYZ space to the blue triangle of ACES space. What is that worth? Bandwidth is scarce in the camera and in the distribution media, but not in the intermediate video processing where ACES is aimed. Is the difference between the red triangle and the blue triangle worth all the transformations back and forth between CIE XYZ space and ACES space that lard SMPTE 2065-1-2012?
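
To put rough numbers on the two triangles: the xy footprint of XYZ encoding is the right triangle with corners (1,0), (0,1), (0,0), area 0.5. Taking the published ACES AP0 primaries -- red (0.7347, 0.2653), green (0.0000, 1.0000), blue (0.0001, -0.0770); only the blue figure is quoted later in this thread, the other two are the standard published values -- the shoelace formula gives about 0.40:

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle given three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

xyz_triangle  = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
aces_triangle = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]  # AP0 primaries

a_xyz, a_aces = triangle_area(*xyz_triangle), triangle_area(*aces_triangle)
print(a_xyz, a_aces, f"ACES triangle is {a_aces / a_xyz:.0%} of the XYZ triangle")
```

So, on this area measure, the saving is roughly a fifth of the triangle.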

Six pages of the 23-page SMPTE 2065-1-2012 are filled with six columns of seven-decimal-place numbers, all trivially derivable from the CIE tristimulus functions. The document says that the ACES sensitivities r-bar, g-bar, b-bar are linear combinations of the CIE x-bar, y-bar, z-bar, but omits to state the transformation. Here it is:

 

             x-bar         y-bar         z-bar
  r-bar    0.0127366     0            -0.0000012
  g-bar   -0.0033790     0.0093583     0.0006692
  b-bar    0             0             0.0086872

 

This little matrix makes those six pages unnecessary, since the CIE functions are widely available. The bottom row of the matrix shows that ACES b-bar is exactly the same as CIE z-bar, while the top row shows that ACES r-bar is very slightly different from CIE x-bar. It's ACES g-bar that's quite new. CIE y-bar has the beautiful advantage of representing luminance. What does ACES' messy and idiosyncratic "color space" offer?

 

CIE XYZ space has been around since 1931. The x-bar, y-bar, z-bar functions for the 2° observer are that old. There are rumblings within the CIE to finally replace them. Were this to happen, two of the three ACES primaries, and all the matrices in SMPTE 2065-1-2012, would become obsolete. But workers in CIE XYZ space would hardly notice the change (just as colorimetrists now switch effortlessly between the CIE 2° and 10° observers).

Edited by Dennis Couzin

  • 2 weeks later...

In the 24 January post I called ACES an engineering monstrosity. Yesterday in a different strand I called it idiotic. It gets worse and worse the more you look at it. Now it turns out that the one ACES accomplishment, finding a small triangle to surround the chromaticities, was done wrong.

 

FIRST:

The primaries pictured below form a triangle 4% smaller than the one formed by the ACES primaries. They're ugly, but no primary is uglier than ACES Blue at x=0.00010, y=-0.07700. ACES must have made that 0.00010 just to make their triangle a smidge smaller. Fools, for missing the substantially smaller triangle.

 

[Image: chromaticity diagram showing the proposed alternative primaries]

 

SECOND:

The very idea of measuring areas in the chromaticity diagram to appraise wasted color encoding is mathematically wrong. The chromaticity diagram is an x,y diagram. If the color encoding were x,y,Y then the areas would indicate the relative multitudes of code values. But the DCI coding is XYZ instead. Consequently the measure of how many code values are used for which parts of color space can't be done directly on the chromaticity diagram. Do this. Based on three perpendicular axes X,Y,Z, form a unit cube. This cube represents all XYZ codings. Lay the x,y chromaticity diagram on the XY plane where the cube rests. Construct a second plane, the plane where X+Y+Z=1. Project the chromaticity diagram upwards and find its image on that new plane. Now, from the black point (X=Y=Z=0), project that image into the cube. The volume of the cube within that projection measures the part of the encoding that makes colors. Measured this way, XYZ encoding is much more efficient -- less of it failing to make colors -- than the invalid area comparisons on the chromaticity diagram suggest.
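
If the area-versus-volume distinction seems abstract, here is a small Monte Carlo sketch of it, my own illustration using the P3 primaries quoted earlier in the thread: the share of the xy diagram that a triangle covers is quite different from the share of XYZ code values whose chromaticities land inside that triangle.

```python
import random

def inside_triangle(p, a, b, c):
    """Sign test: is point p inside triangle abc in the xy plane?"""
    def cross(p1, p2, p3):
        return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
    d1, d2, d3 = cross(p, a, b), cross(p, b, c), cross(p, c, a)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def triangle_area(a, b, c):
    return abs(a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1])) / 2

P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

# Share of the chromaticity diagram's right triangle (x>=0, y>=0, x+y<=1) covered by P3.
area_share = triangle_area(*P3) / 0.5

# Share of uniformly sampled XYZ code values whose chromaticity falls inside P3.
random.seed(1)
hits, n = 0, 200_000
for _ in range(n):
    X, Y, Z = random.random(), random.random(), random.random()
    s = X + Y + Z
    if s > 0 and inside_triangle((X / s, Y / s), *P3):
        hits += 1

print(f"area share of the xy diagram : {area_share:.1%}")
print(f"share of XYZ code values     : {hits / n:.1%}")
```

The second number comes out well above the first: flat area on the chromaticity diagram understates how large a share of the XYZ code values actually lands on projectable colors.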

 

To do this volume job to evaluate ACES encoding is even messier. Make ACES RGB the three perpendicular axes. Then transform the CIE color volume described above into those ACES coordinates.

 

Too few people understand the geometrical relation of the CIE chromaticity diagram to the XYZ color space. The ACES authors even lack understanding of the chromaticity diagram. It was crazy to rest the ACES color space on the purple line of the chromaticity diagram, which is inherently fugitive.

 

THIRD:

The volume method reveals what fraction of the color encoding is completely wasted on non-colors, but color science suggests that it is just as wasteful if there are regions of color space where the color encoding is too fine, i.e., much finer than human color discrimination, and of course it's bad if there are regions where the color encoding is too coarse. So the real measure of the efficiency of a color encoding is its accord with human color discrimination. XYZ and RGB of one or another kind are so hugely non-uniform in that perceptual sense as to defeat trivial geometrical measures. It is possible that a YUV type color encoding could do better.

Edited by Dennis Couzin
