Carl Looper

Everything posted by Carl Looper

  1. The difference between 144 deg and 135 deg is a difference of only 0.09 stops. Slightly less than 1/10th of a stop. In other words: insignificant. The more significant difference is between 135 deg and that which a light meter assumes (180 deg). This is a difference of 0.4 stops. Prism loss is another 0.3 stops. Together they come to 0.7 stops (or 0.6 stops if you want to believe a 144 deg shutter). Rounded to the nearest 1/3rd of a stop (which is common practice) both 135 deg and 144 deg arrive at the same compensation: 2/3rds of a stop. As Tuohy pointed out, the simplest solution is to rate the film 2/3rds of a stop slower. So rate 100 ISO film as 64 ISO. And then use the light meter as is (as if the camera had a 180 deg shutter without a prism):
     500 > 320
     250 > 160
     100 > 64
     50 > 32
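     A quick check of the arithmetic, as a small Python sketch (the 0.3 stop prism loss figure is taken as given above):

        import math

        meter_assumed = 180.0   # a light meter assumes a 180 deg shutter
        shutter = 135.0         # actual shutter angle
        prism_loss = 0.3        # viewfinder prism loss, in stops (figure quoted above)

        shutter_loss = math.log2(meter_assumed / shutter)   # ~0.415 stops
        total = shutter_loss + prism_loss                    # ~0.715 stops, i.e. ~2/3 stop

        for iso in (500, 250, 100, 50):
            rated = iso / 2 ** (2 / 3)       # rate the film 2/3 of a stop slower
            print(iso, "->", round(rated))   # ~315, 157, 63, 31; nearest standard ISOs: 320, 160, 64, 32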
  2. The amount of penumbra, as we travel along the edge of a blade, remains the same (for a given diaphragm and lens). This would mean that if both edges of a blade met at the centre of rotation, image locations closer to the centre of rotation would experience a longer time in the penumbra region than locations further away. I suspect the different angles of the blade are there to compensate for this, ensuring a more even exposure from one side of the frame to the other.
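     A rough sketch of that reasoning, assuming the penumbra band has a constant width w and that a blade edge sweeps past a point at radius r with linear speed omega * r (all figures below are made-up illustration values):

        import math

        omega = 2 * math.pi * 24       # blade angular speed in rad/s (24 rev/s, illustrative)
        w = 0.5e-3                     # assumed penumbra width in metres (illustrative)

        for r in (3e-3, 6e-3, 12e-3):  # image point radii from the centre of rotation
            t = w / (omega * r)        # time the point spends inside the penumbra
            print(f"r = {r * 1e3:4.1f} mm -> t = {t * 1e6:7.1f} microseconds")

        # points closer to the centre sit in the penumbra for longer, hence the need
        # to shape the blade so exposure stays even across the frame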
  3. In relation to the bigger question of whether wide gamut colours are stored, the answer is mostly yes and a tiny no. As far as a downstream decoder is concerned (such as a wide gamut display) you could feed that display *any* three numbers, and the display would emit a wide gamut light corresponding to those numbers. However the display renderer will only do that if it understands that the provided numbers correspond to a wide gamut colour. If a renderer does not know what the numbers represent, it is standard for the renderer to assume the numbers represent an sRGB colour, and to emit light of the corresponding colour. When otherwise effectively told (by the forward matrix) that the numbers represent an sRGB colour, it will do the same. If the forward matrix is instead updated to represent the correct sensor to XYZ D50 transform (without in any way altering the stored pixel numbers) then a wide gamut display will correctly emit the wide gamut colour of the light that illuminated the sensor. C
  4. I should add that if you already know the forward matrix is just an sRGB to XYZ D50 matrix, then the experiment is unnecessary, as you will already know that a subsequent transform, using an XYZ D50 to sRGB matrix, will just reconstruct the original values. Which is precisely what happened. But at the time of writing up the experiment, I didn't know the forward matrix was an sRGB to XYZ D50 matrix. I just used the forward transform as the required first step in a camera to colour space transform. I just happened to use sRGB as my target colour space, so I could compare the filmstrip against the colours as rendered on an sRGB screen. It was only after doing the experiment that I discovered the numbers used for rendering were exactly the same numbers as the original data, i.e. that the forward matrix must be an sRGB to XYZ D50 matrix. The eyeball test proved the forward matrix was wrong. The identical numbers just proved that the matrix was not a camera to XYZ D50 matrix (unless there is such a thing as an sRGB sensor (ha ha), or the Sony Pregius is such a sensor (ha ha)).
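     A minimal sketch of why the numbers come back unchanged, using the forward matrix quoted in the next post:

        import numpy as np

        # forward matrix as found in the DNG metadata (an sRGB to XYZ D50 matrix)
        M = np.array([[0.4361, 0.3851, 0.1431],
                      [0.2225, 0.7169, 0.0606],
                      [0.0139, 0.0971, 0.7142]])

        rgb = np.array([0.2, 0.5, 0.8])        # any pixel value
        print(np.linalg.inv(M) @ (M @ rgb))    # applying M then its inverse (an XYZ D50 to
                                               # sRGB matrix) returns [0.2 0.5 0.8] unchanged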
  5. So I scanned some print film (a test strip) to DNG on the ScanStation. The forward matrix in the metadata of the DNG file turned out to be an sRGB to XYZ D50 matrix:
     0.4361 0.3851 0.1431
     0.2225 0.7169 0.0606
     0.0139 0.0971 0.7142
     See http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html (bottom of page) for an independent derivation of an sRGB to XYZ D50 matrix. You can see that the above forward matrix (from the DNG file) is exactly the same as Lindbloom's matrix (if at slightly less precision). The forward matrix in a DNG file should tell a downstream DNG renderer how to map raw camera sensor data (in the DNG file) to XYZ D50 values (ahead of conversion to whatever colour space the renderer is targeting). To have an sRGB to XYZ D50 matrix as the forward matrix would only make sense if the DNG file contained sRGB values (instead of camera sensor values). Either the camera sensor data has actually passed through a conversion to sRGB prior to DNG encoding (possible I guess), or the forward matrix in the DNG file is just wrong (perhaps copied and pasted from some default tag set).
     As an experiment I wrote a low level renderer that took the raw image data from the DNG file and applied the provided forward matrix to the raw data. I then transformed the supposed XYZ D50 results to sRGB colour space. Eyeballing the results (on an sRGB screen) against the original filmstrip (held in front of the same screen) convinced me the forward matrix must be incorrect. The cyan range in the results was not too bad, but the yellow was way out. A way to determine a more reliable forward matrix is elaborated in this article: https://www.strollswithmydog.com/determining-forward-color-matrix/ Carl
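     For what it's worth, the experiment can be sketched in a few lines of Python/numpy. The sRGB encoding curve used for display is the standard one; the pixel values are placeholders for the actual DNG data:

        import numpy as np

        # forward matrix read from the DNG metadata
        forward = np.array([[0.4361, 0.3851, 0.1431],
                            [0.2225, 0.7169, 0.0606],
                            [0.0139, 0.0971, 0.7142]])

        def srgb_encode(x):
            # linear to sRGB transfer curve, for viewing on an sRGB screen
            x = np.clip(x, 0.0, 1.0)
            return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

        raw = np.array([0.18, 0.22, 0.35])              # placeholder "sensor" values from the DNG
        xyz_d50 = forward @ raw                         # supposed camera to XYZ D50 step
        srgb_linear = np.linalg.inv(forward) @ xyz_d50  # XYZ D50 to sRGB (Lindbloom-style matrix)
        print(srgb_encode(srgb_linear))                 # values to eyeball against the filmstrip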
  6. So many stops it almost exceeds the dynamic range of the test system. Any more stops and you would be using the Alexa 35 to evaluate the test system. Ha ha.
  7. There is no suggestion in the posted description of Cineon encoding that positives (as distinct from negatives) would be encoded in linear rather than log. Be it the density of negatives or the density of positives, density is defined as the log of reciprocal transmittance. Why log encoding? Because, per bit, log encoding is more accurate than linear encoding. There is a claim that positives are encoded in terms of linear values. This may very well be true, but there is nothing in the posted description of Cineon to suggest this is the case. So why post it? DPX (based on Cineon) supports both linear and log encoding (among others) so it's not out of the question that DPX encoding of positives might be done in linear format. Either way it would be good to know what the actual practice is. By way of analogy, it would be good to know whether a list of numbers representing distances were provided in millimetres or inches.
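     The definition in question, as a one-liner (Python):

        import math

        def density(T):
            # density = log10 of reciprocal transmittance
            return math.log10(1.0 / T)

        print(density(0.10))   # 10% transmittance -> density 1.0
        print(density(0.01))   # 1% transmittance  -> density 2.0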
  8. The various non-linear ways in which film might be encoded are about how to maintain accuracy when otherwise required to reduce the file size. For example, if the target file size is to be 10 bits (for whatever reason, such as cost), the worst approach would be to truncate the linear sensor bits. As previously mentioned, more efficient (in terms of accuracy) is to compute the log of the linear sensor data, and truncate that. Of course, if cost is not a concern then saving the linear sensor data will be the most accurate. But if cost is a concern (which, for most mere mortals, it will be) then 10 bit gamma encoding will be more accurate than 10 bit log encoding, and 10 bit log encoding will be more accurate than 10 bit linear data. C
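     A rough sketch of the accuracy argument, looking at worst-case relative error in the shadows after 10 bit quantisation. The 4-decade log range assumed below is an arbitrary choice, but the point about truncated linear data holds regardless:

        import numpy as np

        levels = 2 ** 10 - 1                    # 10 bit code values

        def quantise(v):
            return np.round(v * levels) / levels

        x = np.linspace(1e-4, 1.0, 200000)      # linear sensor signal

        linear = quantise(x)                    # straight 10 bit linear
        lo, hi = -4.0, 0.0                      # assumed 4-decade log range
        logenc = 10 ** (quantise((np.log10(x) - lo) / (hi - lo)) * (hi - lo) + lo)

        shadows = x < 0.01                      # look only at the darkest values
        for name, y in (("linear", linear), ("log", logenc)):
            err = np.max(np.abs(y[shadows] - x[shadows]) / x[shadows])
            print(f"{name:6s} worst shadow relative error: {err:.1%}")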
  9. My understanding of Cineon was that it stored densities, which is log data (the log of inverse transmittance), not linear data. And that this was how prints were encoded as much as negatives (though not requiring as many bits). However it's a moot point. What we're interested in is how the ScanStation Prores encoder stores data, as this will be (I assume) more efficient than DPX (especially if the DPX is truncated DPX).
  10. Just because there is a claim that Cineon encoders stored linear data for positives doesn't mean a Prores encoder does. Or am I missing something?
  11. Unless the transforms used by a scanner (along with a spectral plot of the sensor sensitivity) are made available, a developer writing software to process the scans has no choice but to characterise the scanner's encoder, e.g. with a LAD test or some other understood test strip. How else would they know how to read the data and render it in the context of a given display? Certainly there will be existing solutions that do this, but if they are any good they will have done a characterisation of the scanner's particular encoder.
  12. I would have assumed the Prores encoder would use gamma encoding, rather than storing linear or log data. Or is that not correct? Otherwise dropping linear data down from 12/14/16 to 10 bits (dropping the least significant bits) is the worst thing one could do. Poor man's compression. Better is to first transform the linear sensor data to log, before digitising the result as 10 bit data. Even better is to perform gamma encoding prior to 10 bit digitisation. But only if the gamma function is known, otherwise the relationship between the original linear data and the gamma encoded values is severed. In other words, if the Prores encoder is performing gamma encoding (as would be sensible) it would be good to know which gamma transform it was using. Given the gamma transform, the original linear signal can be reconstructed (to a good approximation) using the inverse transform. And given a reconstruction of the linear data, the original light can be reconstructed from a plot of the sensor's spectral curves. If the Prores encoder otherwise remains opaque, a log file would be the next best thing. Does the ScanStation DPX encoder support log encoding? Or just linear?
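     If, hypothetically, the encoder used the Rec.709 transfer function (just one plausible candidate, not confirmed), the reconstruction would look like this:

        def rec709_oetf(L):
            # Rec.709 opto-electrical transfer function (linear light -> code value, both 0..1)
            return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

        def rec709_oetf_inverse(V):
            # inverse: code value -> reconstructed linear light
            return V / 4.5 if V < 0.081 else ((V + 0.099) / 1.099) ** (1 / 0.45)

        L = 0.18                             # mid grey, linear
        V = rec709_oetf(L)                   # ~0.409
        print(V, rec709_oetf_inverse(V))     # round trip recovers 0.18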
  13. Photography (by which I also mean cinematography) is an art in its own right, quite separate from arts such as painting or drawing from which animation and visual effects emerge. The first known photochemical image was a reproduction of an engraving, but it's not considered the first photograph. It was when the camera was pointed out of a window that the first photograph is deemed to have occurred. Photography, as an art in its own right, is born in this moment. Photography in a virtual production studio means that a good proportion of the image is no longer in the hands of the photographer. Its in the hands of artists in other fields such as those who work in gaming. So I envision a rebellion, where photographers just return to the world outside the studio, and express the power of that world.
  14. Linear encoding is probably the safest option for a scan (involving the least number of assumptions in downstream processing) but it is also the most inefficient in terms of file sizes. Any "corner cutting" on such (e.g. dropping bits) is going to compromise the data faster than if the scan was encoded in any of the non-linear ways it might be encoded. The most conservative non-linear encoding will be linear-to-log encoding. Dropping bits in a log stream will have less effect on integrity than dropping bits on linear data. Log encoding is also the first (and simplest) attempt to transform linear data according to a human perceptual understanding of light. Following on from log encoding (in terms of efficiency) is gamma encoding. A standard that has remained strong over the years is the one standardised in Rec.709. Indeed newer standards, such as Rec.2020, maintain the same gamma encoding as Rec.709. The main change between 709 and 2020 is the green primary - a change required to keep the resulting expanded colour space within the bounds of visible light (which avoids bit waste).
     Kodak LADs are good for calibrating printer lights, but as far as I know they are not available for 16mm. I wish they were! Nevertheless the basic idea behind a LAD is a good one. A LAD can be spliced to the head of a print, providing the means by which to characterise whatever transfer function is being used by a scan. Once a scanner's transfer function has been characterised, software can be written to emulate, on a computer screen, what the film would look like when projected on an analog projector. Any deviation from an ideal result can then be translated into printer point corrections to be made on the printer. This workflow does not imply some sort of machine is making colour decisions. It just provides the means by which a colourist, using their eyeballs and experience, can make decisions translatable back into printer configurations (such as printer point adjustments).
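     As a very rough sketch of the characterisation idea (entirely synthetic numbers, and a simple power-law model standing in for whatever transfer function the scanner actually applies):

        import numpy as np

        # hypothetical reference patches: known transmittances from a calibrated strip
        known_T = np.array([0.50, 0.25, 0.10, 0.05, 0.01])

        # code values the scanner reported for those patches (synthetic here, generated
        # from an assumed hidden gamma of 1/2.2 purely so the example is self-consistent)
        measured = known_T ** (1 / 2.2)

        # fit measured ~= a * T**g in log-log space
        g, log_a = np.polyfit(np.log(known_T), np.log(measured), 1)
        print("fitted exponent:", g, "scale:", np.exp(log_a))

        def code_to_transmittance(code):
            # with the fit in hand, scanner code values map back to film transmittance
            return (code / np.exp(log_a)) ** (1.0 / g)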
  15. Just to clarify, in case my previous response remains confusing: in my case, the question of how a scan describes the light transmitted by the film is with respect to the question of how a new print, with new printer light settings, can be made from the original neg, i.e. for projection on an analog film projector. Nothing to do with how a scan might be graded, or otherwise escape such grading.
  16. I have no idea how other scanners compare. My interest in Scanstation is only because a local scan house has one, and I want to use it for grading a print.
  17. So a projection print is improved by adjusting printer points on the printer. The traditional devices used for assessing an answer print are colour analyser cameras and densitometers. One could also use a scan of the print for this. But not if the transforms used to encode the scan are opaque. This is not about how a scan "looks" but what the resulting numbers on file mean, i.e. in terms of the physical light transmitted by the film. If the files are encoded in terms of known standards, such as Rec.709, or Rec.2020, or any other known standard, it is then simple enough to grade the print, and compute the required compensation for the printer lights. I only mention Rec.2020 as its gamut triangle is a reasonable match to print stock gamut, without bit wastage. One could, of course, use an even larger colour space, if at the expense of larger file sizes.
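     A small illustrative conversion, using the commonly quoted figure of roughly 0.025 log exposure per printer point (about 12 points per stop). Treat the constant as an assumption to be confirmed against the lab's own calibration:

        import math

        LOG_E_PER_POINT = 0.025          # assumed log-exposure step per printer point

        def stops_to_printer_points(stops):
            return round(stops * math.log10(2.0) / LOG_E_PER_POINT)

        print(stops_to_printer_points(0.5))     # +1/2 stop -> about 6 points
        print(stops_to_printer_points(-1 / 3))  # -1/3 stop -> about -4 points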
  18. The lack of technical documents regarding how the ScanStation scans a film when particular settings are applied, and the fact that operators of ScanStations appear to have no idea what you are talking about when talking about colour spaces, means that one is left to hypothesise what the result of a scan might be. In practice, of course, scans can be eyeball adjusted in software such as Resolve, and most colours will be reconstructable through that process. It's when otherwise using a scan of a film print to assess that film print (e.g. ahead of making a better print) that the specifics of the encoding become important. One might very well take a scan through Resolve and make a beautiful digital result from it, but how to use that grading information to make a new projection print is entirely lost if a scan house (or Lasergraphics) can't or won't describe the scan in terms of the transforms it uses to encode it. That all said, a combination of speculation (hypothesising) and tests of such would eventually make up for the lack of information.
  19. In terms of capturing (to file) the full chromaticity range of the film, an encoding in terms of Rec.2020 would be better than Rec.709. This requires at least 12 bits per colour component. The opto-electric transfer function (or "gamma") of Rec.2020 is exactly the same as Rec.709. If Rec.2020 otherwise achieves a wider gamut encoding, it's only because it uses more bits. To put it another way (ignoring the change of hue for the primaries), a 10 bit version of Rec.709 encoding will have a larger triangle in chromaticity space than an 8 bit encoding. The difference in primaries between Rec.709 and Rec.2020 just means the shape of the gamut triangle remains within the visible chromaticity range, i.e. that the bits required to encode the full chromaticity range of the film aren't wasted on invisible chromaticities. In principle (apart from said waste), a high bit encoding, using Rec.709 primaries, can be transcoded to Rec.2020 (in downstream processes) without any chromaticity loss. In practice, the waste factor is important as it affects file sizes. It is more optimal to alter the primaries (in particular green) to keep file sizes in check. Which is what Rec.2020 does.
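     A sketch of the "transcoded without chromaticity loss" point: build the RGB to XYZ matrices from the published primaries and D65 white (the same derivation as on the Lindbloom page linked earlier), then compose a Rec.709 to Rec.2020 conversion for linear (gamma-decoded) values:

        import numpy as np

        def rgb_to_xyz_matrix(primaries, white):
            # primaries: [(xr, yr), (xg, yg), (xb, yb)]; white: (xw, yw)
            cols = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in primaries]).T
            xw, yw = white
            W = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])
            scale = np.linalg.solve(cols, W)   # scale primaries so they sum to the white point
            return cols * scale

        REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
        REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
        D65     = (0.3127, 0.3290)

        M709  = rgb_to_xyz_matrix(REC709, D65)
        M2020 = rgb_to_xyz_matrix(REC2020, D65)

        rec709_to_rec2020 = np.linalg.inv(M2020) @ M709
        print(rec709_to_rec2020)    # every Rec.709 colour lands inside the Rec.2020 gamut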
  20. I would conclude from the discussion that the ScanStation software doesn't provide for selecting an encoding colour space larger than the Rec.709 triangle. I imagine the sensor would be capable of doing so were it programmed to do so, but given the apparent absence of such an option in the ScanStation software (based on commentary by those who run ScanStations) it's a reasonable assumption to make that it only encodes in terms of the Rec.709 colour space.
  21. In terms of weave, 16mm film is edge aligned in the camera. Not perf aligned. So any scanners that align 16mm film, using the perfs for registration, will exhibit weave, due to a certain amount of weave in the film stock perfs (relative to the edge of the film). Carl
  22. In an oscillation between the left side spring guide and the right side spring guide (were it not otherwise moved out of the way), the right side has more give, so the film would align to the left side guide; and between the left side guide and the right side hard points, the film would end up aligned to the right side hard points (because the left side has more give than the right side hard points). This makes the right side spring guide appear somewhat redundant. But perhaps it's for film that isn't as wide as it should be. At least it will be aligned to the left edge guide. Carl
  23. Hi Jeremy, I've acquired the same camera and am having the same conundrum regarding the motor. Would you be willing to share your solution? I'm interested in driving the 1:1 shaft with a custom built motor, but the shaft resists forward rotation and any backward rotation force I apply winds up the spring motor. I'm assuming there must be a way of disengaging the spring motor mechanism but I can't see any obvious way of doing that. Any clues you have would be greatly appreciated.
  24. In the older Ektachrome they diverged at dmax, but that may not necessarily be the case for the newer Ektachrome. The curves may very well be correctly drawn. If they are not correctly drawn that would be a very different issue from the typo identified (log10 units vs camera stops) as it would require someone deliberately drawing those incorrect curves.
  25. Correction. In my last post I said: Since unwanted transmissions through impure dyes decrease with an increase in dye density, you could increase the density of the impure dyes with respect to what you might otherwise do were the dyes pure - or what amounts to doing the same thing: decrease the density of the purer dyes with respect to the impurest one. It should read: Since unwanted transmissions through impure dyes increase with an increase in dye density, you want to decrease the density of the impure dyes with respect to what you might otherwise do were the dyes pure - or what amounts to doing the same thing: increase the density of the purer dyes with respect to the impurest one.