Carl Looper

Basic Member
  • Posts

    1,462
  • Joined

  • Last visited

Profile Information

  • Occupation
    Digital Image Technician
  • Location
    Melbourne, Australia


  1. The difference between 144 deg and 135 deg is only 0.09 stops, slightly less than 1/10th of a stop. In other words: insignificant. The more significant difference is between 135 deg and what a light meter assumes (180 deg). This is a difference of 0.4 stops. Prism loss is another 0.3 stops. Together they come to 0.7 stops (or 0.6 stops if you want to believe a 144 deg shutter). Rounded to the nearest 1/3rd of a stop (which is common practice), both 135 deg and 144 deg arrive at the same compensation: 2/3rds of a stop. As Tuohy pointed out, the simplest solution is to rate the film 2/3rds of a stop slower. So rate 100 ISO film as 64 ISO, and then use the light meter as is (as if the camera had a 180 deg shutter without a prism):
     500 > 320
     250 > 160
     100 > 64
     50 > 32
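A minimal sketch of the arithmetic above (the 180 deg meter assumption and the 0.3 stop prism loss are the figures quoted in the post; everything else is my own illustration):

```python
import math

def shutter_stop_difference(actual_deg, assumed_deg=180.0):
    # Exposure scales with shutter angle, so the difference in stops is
    # the base-2 log of the ratio between the assumed and actual angles.
    return math.log2(assumed_deg / actual_deg)

PRISM_LOSS_STOPS = 0.3  # figure quoted above

for angle in (135.0, 144.0):
    total = shutter_stop_difference(angle) + PRISM_LOSS_STOPS
    print(f"{angle:.0f} deg: {total:.2f} stops total compensation")

# 135 deg: 0.72 stops, 144 deg: 0.62 stops -- both round to 2/3 of a stop,
# hence rating 100 ISO film at 64 ISO.
```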
  2. The amount of penumbra, as we travel along the edge of a blade, remains the same area (for a given diaphragm and lens). This would mean that if both edges of a blade met at the centre of rotation, image locations closer to the centre of rotation would spend a longer time in the penumbra region than locations further away. I suspect the different angles of the blade are there to compensate for this, ensuring a more even exposure from one side of the frame to the other.
  3. In relation to the bigger question regarding whether wide gamut colours are stored, the answer is mostly yes and a tiny no. As far as a downstream decoder is concerned (such as a wide gamut display), you could feed that display *any* three numbers, and the display would emit a wide gamut light corresponding to those numbers. However, the display renderer would only do that if it understands the provided numbers correspond to a wide gamut colour. If a renderer does not know what the numbers represent, it is standard for it to assume the numbers represent an sRGB colour, and to emit light of the corresponding colour. Or, when otherwise effectively told (by the forward matrix) that the numbers represent an sRGB colour, it will do the same. If the forward matrix is instead updated to represent the correct sensor to XYZ D50 transform (without in any way altering the stored pixel numbers), then a wide gamut display will correctly emit the wide gamut colour of the light that illuminated the sensor. C
  4. I should add that if you already know the forward matrix is just an sRGB to XYZ D50 matrix, then the experiment is unnecessary, as you will already know that a subsequent transform, using an XYZ_D50-to-sRGB matrix, will just reconstruct the original values. Which is precisely what happened. But at the time of writing up the experiment, I didn't know the forward matrix was an sRGB to XYZ D50 matrix. I just used the forward transform as the required first step in a camera to colour space transform. I just happened to use sRGB as my target colour space, so I could compare the filmstrip against the colours as rendered on an sRGB screen. It was only after doing the experiment that I discovered the numbers used for rendering were the exact same numbers as the original data, i.e. that the forward matrix must be an sRGB to XYZ D50 matrix. The eyeball test proved the forward matrix was wrong. The identical numbers just proved the matrix was not a camera to XYZ D50 matrix (unless there is such a thing as an sRGB sensor (ha ha), or the Sony Pregius is such a sensor (ha ha)).
  5. So I scanned some print film (a test strip) to DNG on the ScanStation. The forward matrix in the metadata of the DNG file turned out to be an sRGB to XYZ D50 matrix:
     0.4361 0.3851 0.1431
     0.2225 0.7169 0.0606
     0.0139 0.0971 0.7142
     See http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html (bottom of page) for an independent derivation of an sRGB to XYZ D50 matrix. You can see that the above forward matrix (from the DNG file) is exactly the same as Lindbloom's matrix (if at slightly less precision). The forward matrix in a DNG file should tell a downstream DNG renderer how to map raw camera sensor data (in the DNG file) to XYZ D50 values (ahead of conversion to whatever colour space the renderer is targeting). To have an sRGB to XYZ D50 matrix as the forward matrix would only make sense if the DNG file contained sRGB values (instead of camera sensor values). Either the camera sensor data has actually passed through a conversion to sRGB prior to DNG encoding (possible I guess), or the forward matrix in the DNG file is just wrong (perhaps copied and pasted from some default tag set). As an experiment I wrote a low level renderer that took the raw image data from the DNG file and applied the provided forward matrix to the raw data. I then transformed the supposed XYZ D50 results to sRGB colour space. Eyeballing the results (on an sRGB screen) against the original filmstrip (held in front of the same screen) convinced me the forward matrix must be incorrect. The cyan range in the results was not too bad, but the yellow was way out. A way to determine a more reliable forward matrix is elaborated in this article: https://www.strollswithmydog.com/determining-forward-color-matrix/
     Carl
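A minimal sketch of that round trip, in Python with numpy (linear values only, gamma encoding ignored; the sample pixel is hypothetical). If the forward matrix really is an sRGB to XYZ D50 matrix, then following it with an XYZ D50 to sRGB matrix (its inverse) just hands back the numbers you started with:

```python
import numpy as np

# Forward matrix as found in the DNG metadata
FORWARD = np.array([
    [0.4361, 0.3851, 0.1431],
    [0.2225, 0.7169, 0.0606],
    [0.0139, 0.0971, 0.7142],
])

# XYZ D50 -> sRGB is the inverse of an sRGB -> XYZ D50 matrix
XYZ_D50_TO_SRGB = np.linalg.inv(FORWARD)

def render_linear_srgb(raw_rgb):
    """Apply the DNG forward matrix, then convert the XYZ D50 result to linear sRGB."""
    xyz_d50 = FORWARD @ raw_rgb
    return XYZ_D50_TO_SRGB @ xyz_d50

sample = np.array([0.25, 0.50, 0.75])   # hypothetical linear pixel values
print(render_linear_srgb(sample))        # [0.25 0.5 0.75] -- the original numbers
```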
  6. So many stops it almost exceeds the dynamic range of the test system. Any more stops and you would be using the Alexa 35 to evaluate the test system. Ha ha.
  7. There is no suggestion in the posted description of Cineon encoding that positives (as distinct from negatives) would be encoded in linear rather than log. Be it the density of negatives or the density of positives, density is defined as the log of reciprocal transmittance. Why log encoding? Because, per bit, log encoding is more accurate than linear encoding. There is a claim that positives are encoded in terms of linear values. This may very well be true, but there is nothing in the posted description of Cineon to suggest this is the case. So why post it? DPX (based on Cineon) supports both linear and log encoding (among others), so it's not out of the question that DPX encoding of positives might be done in linear format. Either way it would be good to know what the actual practice is. By way of analogy, it would be good to know whether a list of numbers representing distances was provided in millimetres or inches.
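For reference, a worked example of the density definition mentioned above (my own illustration, not from the Cineon spec):

```python
import math

def density(transmittance):
    # Density is the log (base 10) of reciprocal transmittance.
    return math.log10(1.0 / transmittance)

print(density(0.10))   # 1.0 -- a patch passing 10% of the light has density 1.0
print(density(0.01))   # 2.0 -- a patch passing 1% of the light has density 2.0
```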
  8. The various non-linear ways in which film might be encoded are about how to maintain accuracy when otherwise required to reduce the file size. For example, if the target file size is to be 10 bits (for whatever reason, such as cost), the worst approach would be to truncate the linear sensor bits. As previously mentioned, more efficient (in terms of accuracy) is to compute the log of the linear sensor data and truncate that. Of course, if cost is not a concern then saving the linear sensor data will be the most accurate. But if cost is a concern (which, for most mere mortals, it will be) then 10 bit gamma encoding will be more accurate than 10 bit log encoding, and 10 bit log encoding will be more accurate than 10 bit linear data. C
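A small sketch of why truncating linear data is the worst option, comparing a straight 10 bit quantisation of a shadow value against a 10 bit log encoding (the 4-decade log range is my own assumption for illustration):

```python
import math

levels = 2 ** 10 - 1
x = 0.002                                    # a deep shadow value, linear light

# straight 10 bit linear quantisation
linear_rt = round(x * levels) / levels

# 10 bit quantisation of a log encoding spanning 4 decades (0.0001 .. 1.0)
code = round((math.log10(x) / 4 + 1) * levels) / levels
log_rt = 10 ** (code * 4 - 4)

print(f"linear round trip error: {abs(linear_rt - x) / x:.1%}")   # ~2%
print(f"log round trip error:    {abs(log_rt - x) / x:.1%}")      # ~0.2%
```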
  9. My understanding of Cineon was that it stored densities, which is log data (the log of inverse transmittance), not linear data. And that this was how prints were encoded as much as negatives (but not requiring as many bits). However it's a moot point. What we're interested in is how the ScanStation ProRes encoder stores data, as this will be (I assume) more efficient than DPX (especially if the DPX is truncated DPX).
  10. Just because there is a claim that cineon encoders stored linear data for positives doesn't mean a Prores encoder does. Or am I missing something?
  11. Unless the transforms used by a scanner (along with a spectral plot of the sensor sensitivity) are made available, a developer writing software to process the scans has no choice but to characterise the scanner's encoder, e.g. with a LAD test or some other understood test strip. How else would they otherwise know how to read the data and render it in the context of a given display? Certainly there will be existing solutions that do this, but if they are any good they will have done a characterisation of the scanner's particular encoder.
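A sketch of what such a characterisation might look like, assuming a simple power-law model and a hypothetical set of patch measurements (a real encoder may need a richer model, with an offset, toe or log segment):

```python
import numpy as np

# Hypothetical patch measurements: known transmittance of each step on a
# test strip, and the code value (0..1) the scanner reported for it.
known_transmittance = np.array([0.01, 0.05, 0.10, 0.25, 0.50, 0.90])
scanned_code_value  = np.array([0.14, 0.28, 0.37, 0.55, 0.74, 0.95])

# Assume code = transmittance ** (1/gamma) and solve for gamma by a
# least squares fit in log-log space.
slope, _ = np.polyfit(np.log(known_transmittance), np.log(scanned_code_value), 1)
gamma = 1.0 / slope
print(f"estimated gamma: {gamma:.2f}")

# With the transfer function characterised, code values can be mapped back
# to transmittance (and hence density) before rendering for a given display.
recovered_transmittance = scanned_code_value ** gamma
```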
  12. I would have assumed the ProRes encoder would use gamma encoding, rather than storing linear or log data. Or is that not correct? Otherwise dropping linear data down from 12/14/16 to 10 bits (dropping the least significant bits) is the worst thing one could do. Poor man's compression. Better is to first transform the linear sensor data to log, before digitising the result as 10 bit data. Even better is to perform gamma encoding prior to 10 bit digitisation. But only if the gamma function is known, otherwise the relationship between the original linear data and the gamma encoded values is severed. In other words, if the ProRes encoder is performing gamma encoding (as would be sensible) it would be good to know which gamma transform it is using. Given the gamma transform, the original linear signal can be reconstructed (to a good approximation) using the inverse transform. And given a reconstruction of the linear data, the original light can be reconstructed from a plot of the sensor's spectral curves. If the ProRes encoder otherwise remains opaque, a log file would be the next best thing. Does the ScanStation DPX encoder support log encoding? Or just linear?
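A sketch of the encode/reconstruct round trip described above, using Rec.709's transfer function as the example (whether the ScanStation's ProRes encoder actually uses this curve is exactly the open question):

```python
def rec709_oetf(L):
    """Rec.709 opto-electronic transfer function: linear light -> code value."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def rec709_oetf_inverse(V):
    """Reconstruct linear light from a Rec.709 encoded value."""
    return V / 4.5 if V < 0.081 else ((V + 0.099) / 1.099) ** (1 / 0.45)

linear = 0.25
encoded = rec709_oetf(linear)
print(encoded, rec709_oetf_inverse(encoded))   # round trips back to 0.25
```

The same pattern applies to any known gamma transform: as long as the curve is published, the inverse gets you back (approximately) to the linear sensor data.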
  13. Photography (by which I also mean cinematography) is an art in its own right, quite separate from arts such as painting or drawing, from which animation and visual effects emerge. The first known photochemical image was a reproduction of an engraving, but it's not considered the first photograph. It was when the camera was pointed out of a window that the first photograph is deemed to have occurred. Photography, as an art in its own right, is born in this moment. Photography in a virtual production studio means that a good proportion of the image is no longer in the hands of the photographer. It's in the hands of artists in other fields, such as those who work in gaming. So I envision a rebellion, where photographers just return to the world outside the studio, and express the power of that world.
  14. Linear encoding is probably the safest option for a scan (involving the least number of assumptions in downstream processing) but it is also the most inefficient in terms of file sizes. Any "corner cutting" on such (e.g. dropping bits) is going to compromise the data faster than if the scan was encoded in any of the non-linear ways it might be encoded. The most conservative non-linear encoding will be linear-to-log encoding. Dropping bits in a log stream will have less effect on integrity than dropping bits on linear data. Log encoding is also the first (and simplest) attempt to transform linear data according to a human perceptual understanding of light. Following on from log encoding (in terms of efficiency) is gamma encoding. A standard that has remained strong over the years is the one standardised in Rec.709. Indeed newer standards, such as Rec.2020, maintain the same gamma encoding as Rec.709. The main change between 709 and 2020 is the green primary - a change required to keep the resulting expanded colour space within the bounds of visible light (which avoids bit waste). Kodak LADs are good for calibrating printer lights, but as far as I know they are not available for 16mm. I wish they were! Nevertheless the basic idea behind a LAD is a good one. A LAD can be spliced to the head of a print, providing the means by which to characterise whatever transfer function is being used by a scan. Once a scanner's transfer function has been characterised, software can be written to emulate, on a computer screen, what the film would look like when projected on an analog projector. Any deviation from an ideal result can then be translated into printer point corrections to be made on the printer. This workflow does not imply some sort of machine is making colour decisions. It just provides the means by which a colourist, using their eyeballs and experience, can make decisions translatable back into printer configurations (such as printer point adjustments).
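A sketch of the last step (turning a measured deviation into printer point corrections). The commonly quoted figure of 0.025 log exposure per printer point is assumed here; the sign convention and the exact step size should be confirmed against the actual printer:

```python
LOG_E_PER_POINT = 0.025   # assumed printer point step, in log exposure

def printer_point_correction(measured_density, aim_density):
    # A patch printing 0.10 below its aim density needs roughly 4 more points
    # (on this sign convention; flip it if correcting in the other direction).
    return (aim_density - measured_density) / LOG_E_PER_POINT

print(round(printer_point_correction(measured_density=1.00, aim_density=1.10)))   # 4
```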
  15. So just to clarify, in case my previous response remains confusing: in my case, the question of how a scan describes the light transmitted by the film is with respect to how a new print, with new printer light settings, can be made from the original neg, i.e. for projection on an analog film projector. Nothing to do with how a scan might be graded, or otherwise escape such grading.