
Is it possible to get wide gamut color out of Scanstation scans (color primaries wider than sRGB/REC709 locations)??


Larry Baum

Recommended Posts

  • Premium Member
1 hour ago, David Mullen ASC said:

I'm not sure many real-world colors fall far outside of P3/Rec.709 unless you are shooting fireworks and neon signs, etc. Wasn't P3 based on film print color gamut anyway?

That's what I'm confused about too.

I guess his question is: if digital cinema imagers can deliver a wider gamut, why can't film scanners?


  • Site Sponsor
1 hour ago, Tyler Purcell said:

That's what I'm confused about too.

I guess his question is: if digital cinema imagers can deliver a wider gamut, why can't film scanners?

Yeah, I am a bit confused too.

As far as I understand it, film scanners generally do deliver wide gamut, and I don't see how the gamut would be intentionally limited in the imager/lamp, the lin-log encode, or the demosaic math.

This is particularly true with a "True RGB" scanner that uses a monochrome sensor and multi-flash R, G, B, IR illumination or a 3-line array.

I know the LEDs used in the lamp on all of the newer scanners have specific qualities and a center-point wavelength for each color, and they are then modulated for intensity per R, G, B channel. The basic idea is to set each channel's intensity to just below clipping on a clear part of the base, as that sets the basic color balance and fits most of the dynamic range to the particular film stock.

Both CMOS and CCD sensors are linear devices, usually with 12-bit A-to-D conversion (some are 14-bit or even 16-bit), so when scanning negative the linear response of the sensor has to be encoded into log as part of the process; positive films are scanned linear. Multi-flash gets you 2 bits more precision and can help overcome a sensor's noise floor.
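As a rough sketch of that lin-to-log step (assuming a classic Cineon-style 10-bit curve in Python; the constants are the standard printing-density ones, not necessarily what any particular scanner uses):

import numpy as np

def lin_to_cineon(lin):
    # Linear value (1.0 = reference white) -> 10-bit Cineon-style code.
    # Classic constants: 0.002 density per code value and a 0.6 negative
    # gamma, i.e. 0.6 / 0.002 = 300 code values per decade; white at 685.
    lin = np.clip(lin, 1e-6, None)                 # avoid log10(0)
    cv = 685.0 + 300.0 * np.log10(lin)
    return np.clip(np.round(cv), 0, 1023).astype(int)

print(lin_to_cineon(np.array([1.0, 0.18, 0.01])))  # -> [685 462  85]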

It would seem to me that any gamut limitation would be in the spectral response of the LEDs used in the lamp and the color dyes used in the CFA Bayer mask on the sensor. I doubt any scanner manufacturer is choosing LEDs with limited spectral response. The color cross-talk between channels on a Bayer sensor can be considerable, and a matrix or profiled 3D LUT is probably in the pipeline to deal with the characteristics of the color sensor used; there may be some limitations there in comparison to a True RGB scanner.
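To illustrate the matrix idea, a minimal numpy sketch with made-up cross-talk numbers (not any real scanner's calibration):

import numpy as np

# Hypothetical cross-talk: each row says how much of the scene's true
# R, G, B leaks into that sensor channel.
crosstalk = np.array([
    [0.80, 0.15, 0.05],   # sensor "R" sees mostly red, plus some green
    [0.10, 0.75, 0.15],   # sensor "G"
    [0.05, 0.20, 0.75],   # sensor "B"
])

correction = np.linalg.inv(crosstalk)   # the 3x3 a scanner would bake in

measured = crosstalk @ np.array([1.0, 0.0, 0.0])   # pure red, as captured
print(measured)                # desaturated by the CFA: [0.8  0.1  0.05]
print(correction @ measured)   # recovered: [1. 0. 0.]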

 

  • Like 1

On 9/14/2022 at 4:03 AM, Jon O'Brien said:

I've noticed that some in the stills world can concentrate on technical aspects of the image that in the cine world could be considered overly fine and rarefied details of image science that few have the luxury to contemplate or study in-depth. After all, the individual pictures flick by at the rate of 24 or more per second. There's much to be done and considered and only so much time to give to the niceties of the single image. I'd say in motion pictures we tend to go with what 'just looks good', and if that means a simple colour science and a simple means of acquiring the end result, then that's the way it just is. If I'm way out of line that's fine, just tell me. But I was struck with this thought while reading this thread.

Yeah, that's not the sort of stuff I am talking about, though. Color gamut IS the sort of thing you could easily notice at 24fps. In the extreme case, if you slide each color primary over to the white point you get B&W, and that surely looks different from something shot in color with REC709 primaries or whatever.

Lots of other stuff has been mentioned, but none of it is what I was getting at.

 


On 9/14/2022 at 10:53 AM, David Mullen ASC said:

I'm not sure many real-world colors fall far outside of P3/Rec.709 unless you are shooting fireworks and neon signs, etc. Wasn't P3 based on film print color gamut anyway?

P3 would be plenty decent. It does still clip some real-world stuff (some flowers clip it noticeably), but few displays other than a handful of really high-end professional ones can show much beyond P3 yet anyway, so anything wider only really matters in a long-term preservation sense for future mainstream tech, not practically right now. And for typical print film it is certainly good enough: as you say, P3 was based roughly on the typical motion picture print film gamut, so if you have P3 or something very close, you have more or less everything your print film could contain, no matter what was shot on it and how.

If I thought it was giving something along the lines of P3, that would be perfectly fine, since that would deliver everything I expect from the film scans.

The problem is that it seems to be REC709, NOT P3 or anything wider. Certainly for plenty of material even that won't matter much, but once you get into fall foliage, tropical water, emeralds, flowers, 80s clothes or anything with rich saturation, fireballs, some birds, brightly colored sports cars, sunrises, sunsets, etc., it can easily get noticeably clipped by REC709.

Heck, even some animated material, when they want to go wild (it depends a lot; some does, some doesn't), has colors that look easily beyond REC709: wild reds, oranges, purples, cyans, yellows, greens, etc.

Just on an orange/red bit in some of the frames, I see the gamut plot pegged all along the edge of the gamut, hard clipped along that line, so those frames obviously had more than REC709 in them. On other frames everything is easily within REC709. It all depends on what is in the scene.


On 9/14/2022 at 12:54 PM, Tyler Purcell said:

That's what I'm confused about too.

I guess his question is: if digital cinema imagers can deliver a wider gamut, why can't film scanners?

 

Not really. I know that film can deliver a wider gamut than REC709; I know that even my own personal film stills scanner can, and that some far cheaper, totally average-Joe consumer stills film scanners can too. So I'd naturally be surprised if it turned out there is no way to set something like a Scanstation 6.5K to get more than a REC709 gamut (hopefully there is, or else there is something about the files I am missing or not understanding).

 

Edited by Larry Baum

On 9/14/2022 at 3:01 PM, Robert Houllahan said:

Yeah, I am a bit confused too.

As far as I understand it, film scanners generally do deliver wide gamut, and I don't see how the gamut would be intentionally limited in the imager/lamp, the lin-log encode, or the demosaic math.

This is particularly true with a "True RGB" scanner that uses a monochrome sensor and multi-flash R, G, B, IR illumination or a 3-line array.

It wouldn't be. At that stage it would surely be in some very wide-gamut-referenced form.

On 9/14/2022 at 3:01 PM, Robert Houllahan said:

It would seem to me that any gamut limitation would be in the spectral response of the LEDs used in the lamp and the color dyes used in the CFA Bayer mask on the sensor. I doubt any scanner manufacturer is choosing LEDs with limited spectral response. The color cross-talk between channels on a Bayer sensor can be considerable, and a matrix or profiled 3D LUT is probably in the pipeline to deal with the characteristics of the color sensor used; there may be some limitations there in comparison to a True RGB scanner.

 

Yeah, it would be very surprising if they used such limited filters, though, and yeah, I doubt they'd use some weird narrow-spectrum LEDs in it.

The only thing I can think of is that, for some reason, they decided to take the native scan data referenced to the primary locations of the scanner (which are surely very wide gamut) and convert it down to sRGB/REC709 primary locations, maybe solely to target the old broadcast standard, or to make it automatic that any program will load the files normally without having to have color management. But it would seem crazy restrictive not to have a toggle option somewhere to just leave it full wide gamut. Of course, maybe there is one and it is just well hidden or strangely labelled, and the scan operator simply didn't have it set to allow wide gamut even though the machine can deliver it.

Either that, or there is something very different about these files that I am somehow missing, being used to handling digitally shot RAW and stills-film-scanner RAW. Or the print is way more saturated than I recall, it's actually NOT loading into programs correctly, and they are just defaulting to sRGB/REC709 - since the DPX16 have no colorspace metadata and the DNG, for whatever reason, were given sRGB colorspace data - so the sRGB/REC709 rendering that looks fairly normal is actually both somewhat undersaturated and with the colors somewhat twisted and tinted.


Edited by Larry Baum

  • Premium Member
1 hour ago, Larry Baum said:

Not really. I know that film can deliver a wider gamut than REC709; I know that even my own personal film stills scanner can, and that some far cheaper, totally average-Joe consumer stills film scanners can too. So I'd naturally be surprised if it turned out there is no way to set something like a Scanstation 6.5K to get more than a REC709 gamut (hopefully there is, or else there is something about the files I am missing or not understanding).

Of course film can, but where are you seeing that wide gamut? 

Honestly, have you seen it? Can you quantify it outside of reading some numbers in a file?

It's nice to know something can be better, but does anyone care? 


Question,

I'm not familiar with CinemaDNG file types. This 'forward' matrix, if it's baked into the metadata... are you sure this isn't a standard XYZ-to-709 linear transform? If it's in the metadata and applied on ingest in a 32-bit floating-point engine, a linear transform from 709 to AP0/1 or XYZ would be lossless - I think.

The reason I say this is that there is no single standard 709 transform. A colour correction matrix from RGB primaries is, of course, partially dictated by the RGB primaries and partially dictated by the XYZ/709 RGB values of the inherent illuminant (I assume D50 in your use case? - not sure why it wouldn't be D65?).

Could you post the forward matrix? The idea of reviewing a matrix and assuming it's compressing X to Y is convoluted - at least in my head, it is.

I'm completely unfamiliar with scanning film. But let's, for a minute, set aside the primaries of film and assume that the scanner is using a standard D65 illuminant with an adequate SPD.

Let's also assume that the scanner is using a Bayer filter that doesn't abide by the Luther-Ives condition (personally I would've thought a scanner would, as the downsides of abiding by the CIE 1931 CMF colour-target filtration scheme are noise and latitude - I would've thought that scanning a negative itself would partially negate that, and then you just throw a shit ton of light at it, but ah well, maybe they couldn't be bothered designing a new sensor? - #FilmIsDead, I'm joking... maybe).

So with this, we have the RGB primaries of the Bayer CFA in the scanner, our standard illuminant (D65) and our output Primaries XYZ/709. 

This matrix shouldn't match any other matrix. It shouldn't look like any matrix you've seen (unless you're a colour scientist used to calibrating cameras - for which I apologise), as it's primarily dictated by the RGB primaries of the scanner. I've spent some time calibrating phone cameras to XYZ, and even though the final spectral locus/primaries are defined, due to the variance in input RGB primaries the values are never the same - usually drastically different!

Also, if the debayer has taken place prior to ingest, I believe (taking reference from SMPTE RDD 31:2014) that the colour correction matrix from the tristimulus RGB primaries of the Bayer CFA would already have been executed.
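For what it's worth, here is the textbook way an RGB-to-XYZ matrix falls out of a set of primaries plus a white point - a minimal numpy sketch, just to show why the coefficients swing so much with the input primaries:

import numpy as np

def rgb_to_xyz_matrix(prims_xy, white_xy):
    # Columns are the XYZ of each primary, scaled so that R=G=B=1 lands
    # on the white point. Standard derivation from chromaticities.
    xy_to_xyz = lambda x, y: np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xy_to_xyz(x, y) for x, y in prims_xy])
    S = np.linalg.solve(P, xy_to_xyz(*white_xy))   # per-primary scales
    return P * S

rec709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
print(rgb_to_xyz_matrix(rec709, (0.3127, 0.3290)).round(4))
# Swap in different input primaries and the coefficients change drastically.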


Edited by Gabriel Devereux

21 hours ago, Gabriel Devereux said:

Question,

I'm not familiar with CinemaDNG file types. This 'forward' matrix, if it's baked into the metadata... are you sure this isn't a standard XYZ-to-709 linear transform?

 

I thought it was for taking the raw scanner/camera data and converting it to the CIE XYZ D50 intermediate space where most color engines seem to process everything.

If it were just taking the raw scanner/camera data and converting it straight to sRGB/REC709, that would be another matter - the RAW data could still be wide gamut, and that certainly would be nice! I hope I am just being dumb and interpreting that matrix from the wrong end.

One weird thing is that the regular ColorMatrix values in the metadata are just the identity, which doesn't seem correct at all. And the programs that default to using those matrices first, reverse-engineering a forward matrix from them, display the image with beyond-wild, crazy colors - insane saturation and tint - so I don't think those can be correct.

I thought I was reading the RAW primary locations out of it properly, though, and they came out quite close to, but just a trace off from, sRGB/REC709 - unless it was meant to be crunching the data down to that. But the ForwardMatrix someone showed for a DSLR was very different from the one in this film scan, and when I did the same thing to it, it popped out very wide gamut primary locations.
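Here is roughly the check I mean, as a numpy sketch, using the ForwardMatrix from the file (posted further down). The columns are the XYZ(D50) of each primary, so the xy values come out D50-adapted, which would explain them sitting just a trace off the canonical D65 Rec.709 numbers:

import numpy as np

# Forward Matrix 1 from the DNG metadata (row-major): camera RGB -> XYZ(D50)
FM = np.array([
    [0.4360747041, 0.3850649001, 0.1430803985],
    [0.2225044967, 0.7168785933, 0.0606168993],
    [0.0139322002, 0.0971044973, 0.7141733173],
])

# Each column is the XYZ of one primary; project down to xy chromaticity.
for name, col in zip("RGB", FM.T):
    print(name, (col[:2] / col.sum()).round(4))
# R [0.6484 0.3309]  vs the Rec.709 red   at (0.64, 0.33)
# G [0.3211 0.5979]  vs the Rec.709 green at (0.30, 0.60)
# B [0.1559 0.066 ]  vs the Rec.709 blue  at (0.15, 0.06)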

Also, the fact that when I simply told Photoshop to assign sRGB to the DPX 16-bit linear scan it looked reasonably as expected also made me think the data was stored in reference to sRGB/REC709 primaries. If I tell Photoshop to assign sRGB to a DSLR JPG that I saved in, say, AdobeRGB, or even more so ProPhotoRGB, it looks very off when it loads. Same if I load one of these DNGs into RawTherapee: first it uses the ColorMatrix and gets crazy results, but if I tell it to assume the data is stored in reference to sRGB/REC709 primaries, it looks kinda normal (unless the film is more saturated than I think and this is just some undersaturated, somewhat-off look).

Another problem: if the files really did have wide-gamut-primary-referenced data in them, and the DPX16 files have zero metadata about the color gamut, then how in the world would any program know what to do with them? For DSLRs that actually is the case, but each raw converter keeps a custom database: it looks up the camera name in the metadata and applies whatever primary locations its makers found by testing to be correct. But the DPX are not even really raw files, and Photoshop has no idea what to do with them. And in ACR for the DNG, if I load the images with the working space set to ProPhotoRGB - wide enough to handle anything the data could reasonably contain - and then soft-proof down to sRGB/REC709 primaries, it doesn't show any clipping warnings, which does happen when I do that with a known wide-gamut image. (Granted, many frames won't have wide-gamut info, but a few of these were pegged all along the edge of sRGB/REC709, which means the film should have gone beyond it - and so should the file, if the information really were in there, I'd think.)


21 hours ago, Gabriel Devereux said:

 

If it's in the metadata and applied on ingest in a 32-bit floating-point engine, a linear transform from 709 to AP0/1 or XYZ would be lossless - I think.

The reason I say this is that there is no single standard 709 transform. A colour correction matrix from RGB primaries is, of course, partially dictated by the RGB primaries and partially dictated by the XYZ/709 RGB values of the inherent illuminant (I assume D50 in your use case? - not sure why it wouldn't be D65?).

 

Yeah, it is slightly off from a pure sRGB/REC709 D65 matrix, which I was guessing is perhaps because it is D50-based, although I didn't do the math to see whether that accounts for the little bits of difference or not.

 

21 hours ago, Gabriel Devereux said:

Could you post the forward matrix? The idea of reviewing a matrix and assuming it's compressing X to Y is convoluted - at least in my head, it is.

CFA Pattern 2 : 0 1 1 2
Linearization Table : (Binary data 23879 bytes, use -b option to extract)
Black Level : 0
White Level : 65535
Color Matrix 1 : 1 0 0 0 1 0 0 0 1
Analog Balance : 1.03499997 0.9990000131 1.210325574
As Shot White XY : 0.3456999958 0.3585000039
Calibration Illuminant 1 : D50
Colorimetric Reference : 1
Forward Matrix 1 : 0.4360747041 0.3850649001 0.1430803985 0.2225044967 0.7168785933 0.06061689931 0.01393220016 0.0971044973 0.7141733173
CFA Pattern : [Red,Green][Green,Blue]
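For what it's worth, that Forward Matrix 1 looks numerically identical to the widely published Bradford-adapted sRGB-to-XYZ(D50) matrix (the values listed by Lindbloom and in the ICC spec). A quick numpy sanity check, deriving it from standard constants:

import numpy as np

# sRGB/Rec.709 primaries -> XYZ, D65-referenced (standard published matrix)
M_sRGB_D65 = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

# Bradford chromatic adaptation from D65 to D50 (the ICC profile white)
B = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])
d65 = np.array([0.95047, 1.0, 1.08883])
d50 = np.array([0.96422, 1.0, 0.82521])
adapt = np.linalg.inv(B) @ np.diag((B @ d50) / (B @ d65)) @ B

print((adapt @ M_sRGB_D65).round(7))
# Reproduces 0.4360747 0.3850649 0.1430804 / 0.2225045 ... / 0.0139322 ...
# i.e. the Forward Matrix above: the DNG is declaring sRGB/709 primaries.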


21 hours ago, Gabriel Devereux said:

I'm completely unfamiliar with scanning film. But let's, for a minute, set aside the primaries of film and assume that the scanner is using a standard D65 illuminant with an adequate SPD.

Let's also assume that the scanner is using a Bayer filter that doesn't abide by the Luther-Ives condition (personally I would've thought a scanner would, as the downsides of abiding by the CIE 1931 CMF colour-target filtration scheme are noise and latitude - I would've thought that scanning a negative itself would partially negate that, and then you just throw a shit ton of light at it, but ah well, maybe they couldn't be bothered designing a new sensor? - #FilmIsDead, I'm joking... maybe).

So with this, we have the RGB primaries of the Bayer CFA in the scanner, our standard illuminant (D65) and our output Primaries XYZ/709. 

This matrix shouldn't match any other matrix. It shouldn't look like any matrix you've seen (unless you're a colour scientist used to calibrating cameras - for which I apologise), as it's primarily dictated by the RGB primaries of the scanner. I've spent some time calibrating phone cameras to XYZ, and even though the final spectral locus/primaries are defined, due to the variance in input RGB primaries the values are never the same - usually drastically different!

It looks pretty similar to what I think is a matrix taking data stored in reference to (maybe D50) sRGB/REC709 primaries and putting it in XYZ D50 space.

It looks very different from the few DSLR matrices I've seen.

21 hours ago, Gabriel Devereux said:

Also, if the debayer has taken place prior to ingest, I believe (taking reference from SMPTE RDD 31:2014) that the colour correction matrix from the tristimulus RGB primaries of the Bayer CFA would already have been executed.



On 9/11/2022 at 12:00 PM, Larry Baum said:

So for all these reasons I feel like the scanner is most likely only giving sRGB/REC709 standard-gamut-locked, color-primary-referenced output and clipping what the film is capable of (in some frames you even see the gamut plot line up straight against the edge of the sRGB/REC709 gamut, which means the original film surely extended past that gamut).

Does anyone have any idea what could be going on?

I'm not 100% sure what's going on, but I would like to know, as a friend of mine is investigating his own one. Every output appears to limit the gamut in a slightly different way (ProRes, DPX, DNG, etc.), and you would need to write a custom debayering algorithm/app to properly test the DNG. You can test yourself, but it sounds like you already have (have a few frames scanned to ProRes XQ and then to DPX and DNG, and compare). Also scan some calibration film.

Make sure your operator is using the latest software version for capture. Lasergraphics have fixed some colour issues in software (or claimed that they did, anyway). Also make sure filtering is set to 0 - that's the setting that artificially sharpens the scan; the default is 0.3, I think, and it makes grading a scan more difficult. Unfortunately the scanners can have bugs, and one of the ways some operators work around bugs is by using old software versions. I know it sounds weird, but sometimes a feature breaks with the latest version of the software and it's specific to the machine (i.e. it doesn't affect all other Scanstations).

  • Like 1

  • Site Sponsor
On 9/17/2022 at 3:36 AM, Larry Baum said:

Another problem: if the files really did have wide-gamut-primary-referenced data in them, and the DPX16 files have zero metadata about the color gamut, then how in the world would any program know what to do with them?

I question your workflow from this.

Motion picture scans generally do not have any color gamut assigned; you take a DPX or TIFF scan, put it into Resolve or Baselight etc., and then work in the color space you want to work in - the scanner does not assign a color space. So you can take the 16-bit TIFF sequence, run it in ACES or BT.2020 etc., and off to the races you go...

DNG is not really a format that any motion picture post uses to work in, so I have a few questions.

1. Are the cDNG files directly from the Scan Station?

2. Are the cDNG files from a Negative scan or Print positive?

As far as I understand, the Scan Station can make cDNG files, but they are just the unprocessed data from the Sony Pregius IMX342 sensor, so a negative scan will not be encoded into log, nor will the cDNG file load into a piece of software as a positive - you will have to do that transform.

The LED lamphouse is set to the just-below-clipping balance of the film stock per color channel; that is your primary light source for scanning.

The Sony Pregius IMX342 sensor has a CFA (Color Filter Array) built by Sony, and the color dyes are not something a scanner manufacturer can choose; these are off-the-shelf machine vision cameras used by LaserGraphics / Xena / Kinetta / VarioScan etc. So the scanner manufacturer has to do some math in the debayer and encoding, and if you scan from a color-sensor system to cDNG you are likely missing any color science the scanner manufacturer applies when writing to DPX or TIFF or ProRes; the cDNG also precludes using a 2-flash HDR process etc.

So if I were trying to figure this out, I would drop all the stills-processing apps, work in Resolve or some other system for motion picture work, and go from DPX or some other motion picture file format.

  • Like 1

12 hours ago, Robert Houllahan said:

I question your workflow from this.

Motion picture scans generally do not have any color gamut assigned; you take a DPX or TIFF scan, put it into Resolve or Baselight etc., and then work in the color space you want to work in - the scanner does not assign a color space. So you can take the 16-bit TIFF sequence, run it in ACES or BT.2020 etc., and off to the races you go...

But whatever loads them needs to know what the R, G, B values stand for. Where on the CIE plot is the red primary the data is referenced to? To make it easy, assume 8 bits per channel (yes, I know it's more): what would 255,0,0 represent? It totally depends on what reference primary red it's based upon. To work properly in a display-referred or standard colorspace, the program needs to know how to translate the data into that space.

A RAW file from a DSLR is stored with reference to whatever the sensor + color filter array make it be. If you try to load it into a program that doesn't recognize the camera it was shot on, it won't load properly. The program needs to know the exact camera model and then go to a lookup table (since, for strange and unfortunate reasons, the DSLR makers don't just list the info publicly and don't store it in the metadata) that the software maker had to build by studying the properties of the sensor and, more importantly, its color filter array; it then treats the data with reference to the primary locations for that particular camera model and transforms it into the CIE standard color-management base, XYZ D50.

From there you can choose whatever working space you want - for stills, more often something like sRGB or ProPhotoRGB; for video, more often REC709, P3, REC2020, or an ACES workflow - and the data is transformed into that space and everything is done there. (It may need a further transform to the display's exact space if that differs from the working space. In video, people often set the display to the same space as the working space; in stills that is less often the case, and the working space is often larger than the display's maximum space.)

But the thing is, a program needs to know how to interpret the data. If not, it will almost certainly end up showing the image with weird tints and either under- or oversaturated.
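In matrix terms, the chain is just this (numpy sketch; the camera matrix here is hypothetical, the sRGB one is the standard published D50 matrix):

import numpy as np

# Hypothetical camera-native RGB -> XYZ(D50) matrix: the kind of thing a
# raw converter's lookup table supplies once it recognizes the camera model.
M_cam_to_xyz = np.array([
    [0.7347,  0.1278,  0.1020],
    [0.2786,  0.8223, -0.1009],
    [0.0000, -0.0145,  0.8396],
])

# Standard sRGB -> XYZ(D50); inverted, it takes the hub to the working space.
M_srgb_to_xyz = np.array([
    [0.4360747, 0.3850649, 0.1430804],
    [0.2225045, 0.7168786, 0.0606169],
    [0.0139322, 0.0971045, 0.7141733],
])

cam_rgb = np.array([0.8, 0.3, 0.1])             # a demosaiced camera value
xyz = M_cam_to_xyz @ cam_rgb                    # camera -> CIE XYZ(D50) hub
working = np.linalg.inv(M_srgb_to_xyz) @ xyz    # hub -> working space
print(working)   # tone curve / display transform would come after this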


12 hours ago, Robert Houllahan said:

DNG is not really a format that any motion picture post uses to work in, so I have a few questions.

1. Are the cDNG files directly from the Scan Station?

I believe so. That was the impression I got, but I was not there.

12 hours ago, Robert Houllahan said:

2. Are the cDNG files from a Negative scan or Print positive?

Finished, balanced, print positive.

12 hours ago, Robert Houllahan said:

As far as I understand, the Scan Station can make cDNG files, but they are just the unprocessed data from the Sony Pregius IMX342 sensor, so a negative scan will not be encoded into log, nor will the cDNG file load into a piece of software as a positive - you will have to do that transform.

Yeah, negative scans have to be inverted.

There did seem to be a PrintLOG option as well; it definitely used a different sort of tone curve.

But the actual goal in my case was just the regular Print mode.

12 hours ago, Robert Houllahan said:

The LED lamphouse is set to the just-below-clipping balance of the film stock per color channel; that is your primary light source for scanning.

The AnalogBalance 3-entry matrix in the metadata, I'm guessing, instructs how to handle this and get the data back to the original channel balance, taking advantage of the extra DR that storing it this way allows, I believe.

12 hours ago, Robert Houllahan said:

The Sony Pregius IMX342 sensor has a CFA (Color Filter Array) built by Sony, and the color dyes are not something a scanner manufacturer can choose; these are off-the-shelf machine vision cameras used by LaserGraphics / Xena / Kinetta / VarioScan etc. So the scanner manufacturer has to do some math in the debayer and encoding, and if you scan from a color-sensor system to cDNG you are likely missing any color science the scanner manufacturer applies when writing to DPX or TIFF or ProRes; the cDNG also precludes using a 2-flash HDR process etc.

I would have thought so, but surprisingly, it appears to not be the case.

The cDNG doesn't seem to preclude 2-flash HDR. It even sets the metadata to claim that the DNG is stored in display-referred form rather than scene-referred form (the latter being what DSLRs and digital video cameras use).

 

12 hours ago, Robert Houllahan said:

So if I were trying to figure this out, I would drop all the stills-processing apps, work in Resolve or some other system for motion picture work, and go from DPX or some other motion picture file format.

It could be that some software recognizes the IMX342 and does have the native primary locations and uses that.

But the thing again is: if I simply force a more stills-oriented program to treat the color gamut as if it were sRGB/REC709, it loads with the same color tint/saturation as the DPX16 samples show in Resolve or Premiere Pro. If Resolve had a special table for handling the Scanstation and IMX342, and the data were referenced to that camera's native wide-gamut capability, then telling Photoshop or whatnot to act as if the data were stored in reference to sRGB/REC709 primaries should make it load looking pretty undersaturated, or at least more undersaturated, and very likely with a noticeably different tint as well. Instead it loads with almost identical saturation and only a very minor tint shift - it ends up looking extremely close, almost identical, to how the DPX16 automatically loads into Resolve. If I do the same thing with a wide-gamut image from a DSLR, it loads way differently when a program assigns the wrong color space reference.



  • Site Sponsor
18 hours ago, Larry Baum said:

But whatever loads them needs to know what the R, G, B values stand for. Where on the CIE plot is the red primary the data is referenced to? To make it easy, assume 8 bits per channel (yes, I know it's more): what would 255,0,0 represent?

I think this assumption is incorrect. The color space for a film deliverable is chosen at the grade, and the film scan is just RGB values (linear or log), with 1024 code values per color in 10-bit. As long as the scan is not clipped in the shadows or the highlights, the color balance can be set to the desired look by the colorist. Same with Arriraw and other such formats - they are color space agnostic.

  • Upvote 2

On 9/20/2022 at 10:02 PM, Robert Houllahan said:

I think this assumption is incorrect. The color space for a film deliverable is chosen at the grade, and the film scan is just RGB values (linear or log), with 1024 code values per color in 10-bit. As long as the scan is not clipped in the shadows or the highlights, the color balance can be set to the desired look by the colorist. Same with Arriraw and other such formats - they are color space agnostic.

The data has to be stored relative to something, though, doesn't it? I'm still not understanding how it couldn't be. Either it's simply whatever the filters on the scanner's camera give, or something else if they decide to remap it. At the end of the day, it's basically just a DSLR inside the scanner, in a sense, taking a digital picture of each frame. Any DSLR RAW photo stores its data relative to the raw performance of the sensor + color filters. Any program like ACR in Photoshop that loads it properly goes to a custom table that the software maker created to look up what they found the proper primary locations to be (it would be nice if the camera makers just gave it in the metadata, but they seem to want to be all secretive).

It would seem strange and annoying if the motion picture scan machines scanned relative to the scanner's primary locations and then didn't put that in the metadata, so that you have to hope some software happens to have the proper internal table to handle it. (Which would even be fine, although it doesn't seem like the software has that in these cases. Perhaps Resolve does for the Cintel, since that is Blackmagic's own scanner; after tons of searching, a little bit of info makes it seem like maybe Resolve does understand that scanner and loads custom wide-gamut primaries for it - maybe; I haven't tested it yet, and info is hard to find.) Failing that, one just applies color gamuts at random and sees which one looks semi-reasonable as a starting point. And most grading tools are not really meant to correct for wrong assumptions about where the color primaries are located, I don't believe, and I think it might be tricky to get an exact match if you have to pick something that isn't right to begin with. Maybe they figured that since motion picture scanning is so obscure compared to DSLR work, digital motion picture work, or stills scanning, a lot of software won't have the proper lookup info, so they just internally scale it all to REC709 gamut so anything can load it by default with no mistakes. But it would seem quite a shame to lose the colors that print film can hold beyond REC709.

Or maybe they expect you to have a test sample of colors for every stock you use (not easy with old stock in particular), or to use a spectro to measure various colors on a projected print, compare those to the values in the scan, build your own color table, and apply your own custom input LUT to the scan data. But that's a helluva undertaking, and not even so easy to do well, when it seems like it should be handled far more trivially. I think I've heard of studios doing stuff like that back in the early days, but it seems like we should be way past that.

Maybe for a negative, where no final look has been chosen, not having the primaries exactly right isn't quite so bad (though it would still help! and could still introduce a lot of tricky issues), but for a finished print, with all the colors already locked in, it seems really strange.

Also strange: in Resolve, if you set Custom in the color management panel, you can select the input color space - which I think is you manually telling Resolve what color gamut + curve the data is stored in (you can select the two separately if you wish) - and it lets you do this when loading the DPX16 format. If you select something really wide like ARRI Wide Gamut, it just looks insane: wayyyy over-saturated and twisted in color. If you select a milder wide gamut like P3, which should be somewhat close to a print gamut, I have to say it still looks oversaturated to me. I don't have the print on hand to compare, but it seems hard to believe it really looks quite THAT intense. OTOH, if I tell it to treat the data in the DPX 16-bit linear print file as REC709, it looks about like what I believe the saturation on the print actually is. If the data were stored wide gamut, I'd have thought REC709 would make it look quite bland and maybe noticeably twisted in color, and that P3 or something larger would look more ballpark. So the data just keeps seeming awfully close to REC709 gamut. I'll check into it more, in case I made a mistake and it somehow turns out to be closer to P3 - which would be good - but it seems otherwise.

 


On 9/14/2022 at 4:03 AM, Jon O'Brien said:

I've noticed that some in the stills world can concentrate on technical aspects of the image that in the cine world could be considered overly fine and rarefied details of image science that few have the luxury to contemplate or study in-depth. After all, the individual pictures flick by at the rate of 24 or more per second. There's much to be done and considered and only so much time to give to the niceties of the single image. I'd say in motion pictures we tend to go with what 'just looks good', and if that means a simple colour science and a simple means of acquiring the end result, then that's the way it just is. If I'm way out of line that's fine, just tell me. But I was struck with this thought while reading this thread.

Well, cinematographers are usually an anal bunch, just like large format still photographers. And they have to be - the more exacting they are, the better the product usually is. Unless time is of the essence and they fail to produce in the time allotted, such as newsreel or war cinematographers who work on the fly and can't afford to be very anal. And when I say anal... I mean analysis. But sometimes you can get stuck in analysis paralysis. So it's good to be balanced.

I'm still trying to catch up with the forum. I haven't read the rest of the replies. I hope to eventually see some images here illustrating some of the text. All this text means nothing to me without illustrations.


  • Site Sponsor
19 hours ago, Larry Baum said:

The data has to be stored relative to something, though, doesn't it? I'm still not understanding how it couldn't be. Either it's simply whatever the filters on the scanner's camera give, or something else if they decide to remap it. At the end of the day, it's basically just a DSLR inside the scanner, in a sense, taking a digital picture of each frame. Any DSLR RAW photo stores its data relative to the raw performance of the sensor + color filters. Any program like ACR in Photoshop that loads it properly goes to a custom table that the software maker created to look up what they found the proper primary locations to be (it would be nice if the camera makers just gave it in the metadata, but they seem to want to be all secretive).

Well, actually not really - especially on "real" RGB scanners, which shoot each color with a monochrome sensor and RGB+IR LED lamp pulses; these are mapped into a Cineon log curve to make 10-bit RGB DPX frames. So there is no "raw" file to be demosaiced, as each channel is a full scan record. With a 16-bit DPX or TIFF, the linear data from the sensor is mapped into the 16 bits; the Arriscan, for example, uses a 14-bit ALEV monochrome sensor, and in 2-flash HDR it delivers a full 16 bits of data per channel for each RGB color. So it pulses the lamp R+ R- G+ G- B+ B- (and IR if a dirt map is to be made) in 2-flash HDR 16-bit mode, which runs at about 3 fps scanner speed. The important thing is not to clip the file, i.e. to get the entire density range on the film into a digital container without losing any detail in the shadows or highlights.
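A toy sketch of the 2-flash idea (the numbers and the switch-over rule are made up, not Arri's actual merge): one short flash so the thin parts of the film don't clip, one brighter flash to lift the dense parts above the noise floor, combined into a single higher-precision linear value.

import numpy as np

def merge_two_flash(short_exp, long_exp, gain=4.0, clip=0.98):
    # 'long_exp' was exposed `gain` times brighter. Use it (scaled back
    # down) wherever it isn't clipped; fall back to the short flash there.
    return np.where(long_exp < clip, long_exp / gain, short_exp)

short = np.array([0.010, 0.200, 0.900])   # dense shadows are noisy here
long_ = np.clip(short * 4.0, 0.0, 1.0)    # same frame, two stops more light
print(merge_two_flash(short, long_))      # [0.01 0.2  0.9 ], less noise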

Scanners with CFA cameras (Scan Station, Kinetta, etc.) basically mimic, to the best degree possible, the operation of a "true" RGB scanner by setting each RGB LED lamp pulse to just below clipping on the clear part of the film base; they then apply either a matrix or a 3D LUT in the scanner to fix the color-channel cross-talk from the sensor's CFA dyes, and that is generally mapped into an RGB file like DPX or ProRes 4444.

I can ask some high-end colorists who work on Marvel films etc., but in general there is no setting a color space from a film scan; the color space is really set by the display device and the space you are grading in, not by the scanner.

  • Upvote 1

13 hours ago, Robert Houllahan said:

Well, actually not really - especially on "real" RGB scanners, which shoot each color with a monochrome sensor and RGB+IR LED lamp pulses; these are mapped into a Cineon log curve to make 10-bit RGB DPX frames. So there is no "raw" file to be demosaiced, as each channel is a full scan record. With a 16-bit DPX or TIFF, the linear data from the sensor is mapped into the 16 bits; the Arriscan, for example, uses a 14-bit ALEV monochrome sensor, and in 2-flash HDR it delivers a full 16 bits of data per channel for each RGB color. So it pulses the lamp R+ R- G+ G- B+ B- (and IR if a dirt map is to be made) in 2-flash HDR 16-bit mode, which runs at about 3 fps scanner speed. The important thing is not to clip the file, i.e. to get the entire density range on the film into a digital container without losing any detail in the shadows or highlights.

Scanners with CFA cameras (Scan Station, Kinetta, etc.) basically mimic, to the best degree possible, the operation of a "true" RGB scanner by setting each RGB LED lamp pulse to just below clipping on the clear part of the film base; they then apply either a matrix or a 3D LUT in the scanner to fix the color-channel cross-talk from the sensor's CFA dyes, and that is generally mapped into an RGB file like DPX or ProRes 4444.

I can ask some high-end colorists who work on Marvel films etc., but in general there is no setting a color space from a film scan; the color space is really set by the display device and the space you are grading in, not by the scanner.

Robert, when you say 2-flash, do you have the base exposure plus 2 other exposures? Are you ending up with 3 exposures of +1, 0 and -1 in the HDR mix?

If so, do you have control over the under- and overexposure, so you can dial in +1.5 and -1.5 instead of +1 and -1 if need be, or is the 2-flash fixed in exposure?

Edited by Daniel D. Teoli Jr.

  • Site Sponsor

The Arriscan does 2 exposures in HDR mode; the exposure time is set by the base calibration, and you cannot change the times on that machine. On the Xena Monochrome you can set the exposures for both the first and second flash, and the LaserGraphics Director (now a 13.5K machine) can do 3 exposures.


13 hours ago, Daniel D. Teoli Jr. said:

Robert, when you say 2-flash, do you have the base exposure plus 2 other exposures? Are you ending up with 3 exposures of +1, 0 and -1 in the HDR mix?

It's 2-flash per emulsion layer. So for monochrome film it's 2 physical flashes; for colour film it's 6 physical flashes scanning at 2K/3K. If you're scanning at 4K/6K, that uses microscanning, where the sensor in the camera is shifted a tiny amount so that 4 captures make a native 6K capture. So with microscanning, colour film, and 2-flash HDR, it's 24 separate captures, and the Arri does it at some absurdly fast speed like 3 frames per second or something, which is bonkers when you think about how many captures that involves.

  • Like 2

On 9/28/2022 at 8:32 PM, Robert Houllahan said:

Well actually not really, especially on "real" RGB scanners which shoot each color with a monochrome sensor and RGB+IR LED lamp pulses, these are mapped into a Cineon Log curve to make 10bit RGB DPX frames. So there is no "Raw" file to be demosaiced as each channel is a full scan record. With a 16bit DPX or Tiff the linear data is mapped into the 16 bits from the sensor, the Arriscan for example uses a 14 bit ALEV monochrome sensor and in 2-Flash HDR it is a full 16 bits of data per channel for each RGB color.

That has nothing to do with what color primaries things are referenced to, though. Some of the Fuji DSLRs were true RGB, and their RAW files still had to be opened by something that knew what color primaries to reference. And my Nikon 9000 ED stills scanner is true RGB per pixel, and all of its output still appears to need to be referenced to some set of primaries.

I mean, think in 1D, and think of money. Say you give me $100,000 USD and I 'scan'/'convert' it to 100,000 Digital Dollars, where Digital Dollars are not USD, not pounds, not lira, not euros, etc. You can't just take the 100,000 Digital Dollars and put them in whatever 'format' you want directly. I mean, you can, but if I then called it lira and handed you 100,000 lira, I think you might be a bit upset. You need to know what the Digital Dollars were in reference to - they were actually in reference to USD in this case - and if you don't account for that before applying a conversion rate, it doesn't work out so well for one party.

 

On 9/28/2022 at 8:32 PM, Robert Houllahan said:

 

So it pulses the lamp R+ R- G+ G- B+ B- (and IR if a dirt map is to be made) in 2-flash HDR 16-bit mode, which runs at about 3 fps scanner speed. The important thing is not to clip the file, i.e. to get the entire density range on the film into a digital container without losing any detail in the shadows or highlights.

Scanners with CFA cameras (Scan Station, Kinetta, etc.) basically mimic, to the best degree possible, the operation of a "true" RGB scanner by setting each RGB LED lamp pulse to just below clipping on the clear part of the film base; they then apply either a matrix or a 3D LUT in the scanner to fix the color-channel cross-talk from the sensor's CFA dyes, and that is generally mapped into an RGB file like DPX or ProRes 4444.

 

On 9/28/2022 at 8:32 PM, Robert Houllahan said:

I can ask some high-end colorists who work on Marvel films etc., but in general there is no setting a color space from a film scan; the color space is really set by the display device and the space you are grading in, not by the scanner.

But suppose the scanner camera + filters had primaries really close in to the center of the CIE xy chart, and you then viewed the data on a P3 or REC2020 monitor directly. It would look insanely over-saturated, and the color balance would likely have a very strange twist to it.

Or suppose the scanner camera + filters had primaries pretty far out - beyond P3, around REC2020 or something - and you just viewed the data on a monitor set to REC709 gamut. It would look pale and washed out, likely with a strange twist to the color balance.

That is what happens if you just take RAW data from a DSLR and don't apply any knowledge of what the camera's color primaries were - which is why that is never done. The software companies find out what the primaries are, treat the RAW data in reference to those primaries, convert it to a standard intermediate color-management CIE XYZ D50 space, translate from there into whatever working space you want to work in, and translate once again to whatever your display is set to.
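You can simulate that mismatch directly. A numpy sketch (Rec.2020 stands in for a hypothetical wide-gamut scanner reference; both matrices are derived from the standard published primaries):

import numpy as np

def rgb_to_xyz(prims, white):
    # Standard RGB->XYZ matrix from xy primaries and a white point.
    f = lambda x, y: np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([f(x, y) for x, y in prims])
    return P * np.linalg.solve(P, f(*white))

d65 = (0.3127, 0.3290)
M709 = rgb_to_xyz([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], d65)
M2020 = rgb_to_xyz([(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)], d65)

sat_green = np.array([0.1, 0.9, 0.1])               # 2020-referenced value
proper = np.linalg.inv(M709) @ (M2020 @ sat_green)  # correct 2020 -> 709
print(proper)     # ~[-0.37 1.01 0.02]: out of range, outside REC709
print(sat_green)  # the "just call it 709" reading: much tamer color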

 


On 9/28/2022 at 8:32 PM, Robert Houllahan said:

I can ask some high-end colorists who work on Marvel films etc., but in general there is no setting a color space from a film scan; the color space is really set by the display device and the space you are grading in, not by the scanner.

Thanks!

(The thing is, you need to know what the scan data is stored in reference to in order to translate it properly into whatever working space you choose. You could choose P3 and set a display to P3, but if the data is NOT in P3, the software needs to know what it IS in so it can translate it to P3. What if one (or all) of the primaries are in very different locations than the ones in P3? If you just treat it as P3 data, the color balance will be twisted in weird ways and the saturation of each channel will be way off. I just don't yet see how a motion picture scan can somehow magically not need the initial translation step that all other visual data seems to: data from DSLRs needs it (and a DSLR is sort of what is inside a modern motion picture scanner), data from digital motion picture cameras needs it, and data from stills film scanners seems to. So far it almost seems like Lasergraphics pre-does this step and puts everything in reference to REC709 color primaries (although not the REC709 curve), at least for the Scanstation, perhaps not for the Director - or maybe it is actually closer to P3. But what exactly is it then, and why don't they put it in any metadata, and why does Resolve seem to default to loading the DPX16 and DNG as if they were REC709 and not some custom scene (scanner) referenced color gamut?)

 


  • 2 weeks later...
  • Premium Member

I hadn't had time to peruse this thread until today but it's quite an interesting question. I hesitate to say too much having not seen the files and not knowing much about how the scanner was set up. Even so, much as I think he was misunderstood early in the thread, I see what Larry means by his observation that the colours in the scanned files seem to clip against the edge of the sRGB/709 gamut. That's a reasonable concern and my immediate reaction would be to assume it's clipping the colour gamut.

One issue which springs to mind here is that this is, we're told, a scanner using RGB illumination and a monochrome sensor. Much as that's great with respect to getting no-compromise cosited RGB information, using LEDs to do this does raise a couple of concerns. There's a decent red emitter available, but most (not all) LED blue is more royal blue than the deep indigo we might prefer. Green-emitting diodes, in particular, are notoriously not a very deep green. I have never looked into this formally, but I would speculate fairly confidently that there are deeper greens in film negative than the green of the sort of LED that is likely to be used in this sort of application.

This would intrinsically clip the colourspace of the scanner in a way that you can't really do much about and that would be a shame.

Alternatives would be to use a more full-spectrum light source and dichroic filters, which would end up looking rather like the part of a contact or optical printer that responds to colour timing numbers. I can fully understand that the designers of these things might prefer to avoid the cost, bulk and maintenance issues of a complex piece of mechatronics like that, and so an easier approach might be to use phosphor-converted white-light LEDs and filter their output green. This would be very inefficient, optically, but for the sake of a scanner that might be a worthwhile tradeoff.

As I say, I'm speculating wildly here and much of what I've said might be utterly wrong, but feeble LED greens affect colour mixing lighting devices too, where they make it difficult to generate deep cyan and yellow-orange-red colours, which is exactly the concern Larry reports. The ability to render the deep, powerful turquoise of a tropical sea over white sand is often given as a motivation for the development of the Rec. 2020 colour gamut, which can handle those colours in a way that 709/sRGB just can't. 

For what it's worth, the filters on a Bayer or other CFA sensor also tend to be rather feeble, and that's one reason Bayer cameras struggled for years to produce workably accurate colour, until people got smart enough to apply a lot of matrixing. That probably wouldn't work so well with RGB LED scans, because the problem isn't that the green is unsaturated - it's pretty saturated - it's just too yellow. There is not much ability to fix this sort of thing in post.

Does any of that make sense to anyone?


  • Site Sponsor
4 hours ago, Dan Baxter said:

Robert, interesting though your vid is, it has nothing to do with the question. It's a Bayer-scanner question, not RGB.

Well, the conversation strayed a lot, and I think people might like to see how a machine like this does sequential RGB HDR color pin-registered scans.

so there is the Arri 

