Profile Information
Posts: 1457
Occupation: Digital Image Technician
Location: Melbourne, Australia
-
Alexa 35 Really DOES Have 17 Stops of Dynamic Range
Carl Looper replied to Tim Tyler's topic in ARRI
So many stops it almost exceeds the dynamic range of the test system. Any more stops and you would be using the Alexa 35 to evaluate the test system. Ha ha.
There is no suggestion in the posted description of Cineon encoding that positives (as distinct from negatives) would be encoded in linear rather than log. Be it the density of negatives or the density of positives, density is defined as the log of reciprocal transmittance. Why log encoding? Because, per bit, log encoding is more accurate than linear encoding. There is a claim that positives are encoded as linear values. This may very well be true, but there is nothing in the posted description of Cineon to suggest it. So why post it? DPX (based on Cineon) supports both linear and log encoding (among others), so it's not out of the question that DPX encoding of positives might be done in linear format. Either way it would be good to know what the actual practice is. By way of analogy, it would be good to know whether a list of numbers representing distances was provided in millimetres or inches.
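Just to make the definition concrete (a minimal sketch of the textbook formula, not any particular scanner's code):

```python
import math

def density(transmittance: float) -> float:
    """Density as defined above: log10 of reciprocal transmittance."""
    return math.log10(1.0 / transmittance)

# A patch passing 10% of the light has density 1.0;
# a patch passing 1% has density 2.0.
print(density(0.10))  # 1.0
print(density(0.01))  # 2.0
```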
-
The various non-linear ways in which film might be encoded are about how to maintain accuracy when otherwise required to reduce the file size. For example, if the target file size is to be 10 bits (for whatever reason, such as cost), the worst approach would be to truncate the linear sensor bits. As previously mentioned, more efficient (in terms of accuracy) is to compute the log of the linear sensor data and truncate that. Of course, if cost is not a concern, then saving the linear sensor data will be the most accurate. But if cost is a concern (which, for most mere mortals, it will be), then 10 bit gamma encoding will be more accurate than 10 bit log encoding, and 10 bit log encoding will be more accurate than 10 bit linear data.
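The shadow-accuracy argument can be demonstrated numerically. A toy sketch (my own illustration, with an arbitrary floor of 0.001 for the log range, not any real encoder): quantise a deep-shadow value to 10 bits directly, versus quantising its log-mapped value and mapping back, then compare relative errors.

```python
import math

BITS = 10
LEVELS = (1 << BITS) - 1  # 1023 steps across [0, 1]

def quantise(x: float) -> float:
    """Round x in [0, 1] to the nearest of 1024 code levels."""
    return round(x * LEVELS) / LEVELS

def rel_err_linear(v: float) -> float:
    """Relative error when the linear value is quantised directly."""
    return abs(quantise(v) - v) / v

def rel_err_log(v: float, lo: float = 0.001) -> float:
    """Relative error when log(v) over [log lo, 0] is quantised instead."""
    t = math.log(v / lo) / math.log(1 / lo)          # map to [0, 1]
    back = lo * math.exp(quantise(t) * math.log(1 / lo))  # inverse map
    return abs(back - v) / v

# Deep shadow value: the log route preserves relative accuracy far better.
v = 0.002
print(rel_err_linear(v))  # ~2.2% error
print(rel_err_log(v))     # ~0.2% error
```

The same comparison extended with a gamma curve would show gamma sitting between the two, per the ordering in the paragraph above.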
-
My understanding of Cineon was that it stored densities, which is log data (the log of inverse transmittance), not linear data, and that this was how prints were encoded as much as negatives (though not requiring as many bits). However, it's a moot point. What we're interested in is how the ScanStation's ProRes encoder stores data, as this will be (I assume) more efficient than DPX (especially if the DPX is truncated).
-
Unless the transforms used by a scanner (along with a spectral plot of the sensor's sensitivity) are made available, a developer writing software to process the scans has no choice but to characterise the scanner's encoder, e.g. with a LAD test or some other understood test strip. How else would they know how to read the data and render it in the context of a given display? Certainly there will be existing solutions that do this, but if they are any good they will have done a characterisation of the scanner's particular encoder.
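As a sketch of what "characterising the encoder" could look like in the simplest case: suppose (hypothetically) the unknown encoder were a pure power law. A few test-strip patches with known linear transmittances then pin down the exponent. The patch numbers below are made up for illustration; a real characterisation would use measured LAD (or equivalent) patch values and a richer model.

```python
import math

# (known linear patch value, code value read from the scan), both 0..1.
# Hypothetical numbers, fabricated for this sketch.
patches = [(0.05, 0.2562), (0.18, 0.4587), (0.50, 0.7297), (0.90, 0.9532)]

# For a pure power law, code = linear ** g, so each patch gives
# g = log(code) / log(linear); average the per-patch estimates.
g = sum(math.log(c) / math.log(v) for v, c in patches) / len(patches)
print(round(g, 3))  # ~0.455, i.e. roughly a 1/2.2 gamma for these patches
```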
-
I would have assumed the ProRes encoder would use gamma encoding, rather than storing linear or log data. Or is that not correct? Otherwise, dropping linear data down from 12/14/16 bits to 10 bits (dropping the least significant bits) is the worst thing one could do: a poor man's compression. Better is to first transform the linear sensor data to log before digitising the result as 10 bit data. Even better is to perform gamma encoding prior to 10 bit digitisation, but only if the gamma function is known; otherwise the relationship between the original linear data and the gamma-encoded values is severed. In other words, if the ProRes encoder is performing gamma encoding (as would be sensible), it would be good to know which gamma transform it is using. Given the gamma transform, the original linear signal can be reconstructed (to a good approximation) using the inverse transform. And given a reconstruction of the linear data, the original light can be reconstructed from a plot of the sensor's spectral curves. If the ProRes encoder otherwise remains opaque, a log file would be the next best thing. Does the ScanStation DPX encoder support log encoding? Or just linear?
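To illustrate the "given the gamma transform, the linear signal can be reconstructed" point: here is the standard Rec.709 OETF and its inverse (the published BT.709 curve, not a claim about what the ProRes encoder actually does). Knowing the forward curve makes the round trip essentially lossless.

```python
def oetf_709(l: float) -> float:
    """Rec.709 opto-electrical transfer function (BT.2020 uses the same curve)."""
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def eotf_709(v: float) -> float:
    """Inverse transform: recover linear light from the encoded value."""
    return v / 4.5 if v < 4.5 * 0.018 else ((v + 0.099) / 1.099) ** (1 / 0.45)

# With the transform known, encode/decode round-trips to the original linear value.
for l in (0.005, 0.18, 0.9):
    assert abs(eotf_709(oetf_709(l)) - l) < 1e-9
```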
-
Future of Cinematography! What’s next?
Carl Looper replied to Saikat Chattopadhyay's topic in General Discussion
Photography (by which I also mean cinematography) is an art in its own right, quite separate from arts such as painting or drawing, from which animation and visual effects emerge. The first known photochemical image was a reproduction of an engraving, but it's not considered the first photograph. It was when the camera was pointed out of a window that the first photograph is deemed to have occurred. Photography, as an art in its own right, is born in this moment. Photography in a virtual production studio means that a good proportion of the image is no longer in the hands of the photographer. It's in the hands of artists in other fields, such as those who work in gaming. So I envision a rebellion, where photographers just return to the world outside the studio, and express the power of that world.
32 replies
-
Tagged with: virtual production, technology (and 1 more)
-
Linear encoding is probably the safest option for a scan (involving the fewest assumptions in downstream processing), but it is also the most inefficient in terms of file sizes. Any "corner cutting" (e.g. dropping bits) is going to compromise the data faster than if the scan were encoded in any of the non-linear ways it might be encoded. The most conservative non-linear encoding is linear-to-log. Dropping bits in a log stream will have less effect on integrity than dropping bits from linear data. Log encoding is also the first (and simplest) attempt to transform linear data according to a human perceptual understanding of light. Following on from log encoding (in terms of efficiency) is gamma encoding. A standard that has remained strong over the years is the one standardised in Rec.709. Indeed, newer standards, such as Rec.2020, maintain the same gamma encoding as Rec.709. The main change between 709 and 2020 is in the primaries (in particular green) - a change required to keep the resulting expanded colour space within the bounds of visible light (which avoids bit waste).

Kodak LADs are good for calibrating printer lights but, as far as I know, they are not available for 16mm. I wish they were! Nevertheless, the basic idea behind a LAD is a good one. A LAD can be spliced to the head of a print, providing the means by which to characterise whatever transfer function is being used by a scan. Once a scanner's transfer function has been characterised, software can be written to emulate, on a computer screen, what the film would look like when projected on an analog projector. Any deviation from an ideal result can then be translated into printer point corrections to be made on the printer. This workflow does not imply some sort of machine is making colour decisions. It just provides the means by which a colourist, using their eyeballs and experience, can make decisions translatable back into printer configurations (such as printer point adjustments).
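On translating deviations into printer point corrections: a common lab convention (conventions and sign directions vary by lab, so treat this as an assumption) is that one printer point corresponds to 0.025 logE, i.e. 12 points per stop. Under that convention the translation is just a division:

```python
LOG_E_PER_POINT = 0.025  # assumed convention: 1 printer point = 0.025 logE (12 points/stop)

def points_for_density_error(delta_d: float) -> int:
    """Translate a measured density deviation on a print into a printer-point correction.

    Sign/direction of the correction depends on the lab's convention;
    this returns the magnitude-and-sign of delta_d / 0.025, rounded.
    """
    return round(delta_d / LOG_E_PER_POINT)

# A patch printing 0.10 too dense in one channel suggests roughly a 4-point change.
print(points_for_density_error(0.10))  # 4
```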
-
Just to clarify, in case my previous response remains confusing: in my case, the question of how a scan describes the light transmitted by the film is with respect to the question of how a new print, with new printer light settings, can be made from the original neg, i.e. for projection on an analog film projector. It has nothing to do with how a scan might be graded, or otherwise escape such grading.
-
A projection print is improved by adjusting printer points on the printer. The traditional devices used for assessing an answer print are colour analyser cameras and densitometers. One could also use a scan of the print for this - but not if the transforms used to encode the scan are opaque. This is not about how a scan "looks" but about what the resulting numbers on file mean, i.e. in terms of the physical light transmitted by the film. If the files are encoded in terms of known standards, such as Rec.709, Rec.2020, or any other known standard, it is then simple enough to grade the print and compute the required compensation for the printer lights. I only mention Rec.2020 because its gamut triangle is a reasonable match to print stock gamut, without bit wastage. One could, of course, use an even larger colour space, at the expense of larger file sizes.
-
The lack of technical documents regarding how the ScanStation scans a film when particular settings are applied, and the fact that operators of ScanStations appear to have no idea what you are talking about when you talk about colour spaces, means that one is left to hypothesise what the result of a scan might be. In practice, of course, scans can be eyeball-adjusted in software such as Resolve, and most colours will be reconstructable through that process. It's when otherwise using a scan of a film print to assess that film print (e.g. ahead of making a better print) that the specifics of the encoding become important. One might very well take a scan through Resolve and make a beautiful digital result from it, but how to use that grading information to make a new projection print is entirely lost if a scan house (or Lasergraphics) can't or won't describe the scan in terms of the transforms used to encode it. That said, a combination of speculation (hypothesising) and tests of such hypotheses would eventually solve for the lack of information.
-
In terms of capturing (to file) the full chromaticity range of the film, an encoding in terms of Rec.2020 would be better than Rec.709. This requires at least 12 bits per colour component. The opto-electric transfer function (or "gamma") of Rec.2020 is exactly the same as Rec.709. If Rec.2020 otherwise achieves a wider gamut encoding, it's only because it uses more bits. To put it another way (ignoring the change of hue for the primaries), a 10 bit version of Rec.709 encoding will have a larger triangle in chromaticity space than an 8 bit encoding. The difference in primaries between Rec.709 and Rec.2020 just means the shape of the gamut triangle remains within the visible chromaticity range, i.e. that the bits required to encode the full chromaticity range of the film are not wasted on invisible chromaticities. In principle (apart from said waste), a high bit encoding using Rec.709 primaries can be transcoded to Rec.2020 (in downstream processes) without any chromaticity loss. In practice, the waste factor is important, as it affects file sizes. It is more optimal to alter the primaries (in particular green) to keep file sizes in check. Which is what Rec.2020 does.
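The "transcoded without chromaticity loss" step is standard colorimetry: build each colour space's RGB-to-XYZ matrix from its published xy primaries and white point, then chain them. A sketch of that computation (standard BT.709/BT.2020 primaries, nothing ScanStation-specific):

```python
import numpy as np

def rgb_to_xyz(prims, white):
    """Build an RGB->XYZ matrix from xy primaries and an xy white point."""
    def xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1 - x - y) / y])
    P = np.stack([xyz(p) for p in prims], axis=1)
    s = np.linalg.solve(P, xyz(white))  # scale columns so RGB white hits the white point
    return P * s

REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
D65     = (0.3127, 0.3290)

# Linear-light Rec.709 RGB -> linear-light Rec.2020 RGB.
M = np.linalg.inv(rgb_to_xyz(REC2020, D65)) @ rgb_to_xyz(REC709, D65)
# Same white point on both sides, so each row sums to ~1 (white maps to white).
print(np.round(M, 4))
```

Note the matrix applies to linear-light values, so the (shared) gamma curve has to be decoded before transcoding and re-encoded after.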
-
I would conclude from the discussion that the ScanStation software doesn't provide for selecting an encoding colour space larger than the Rec.709 triangle. I imagine the sensor would be capable of doing so were it programmed to do so, but given the apparent absence of such an option in the ScanStation software (based on commentary by those who run ScanStations), it's a reasonable assumption that it only encodes in terms of the Rec.709 colour space.