Dennis Couzin

Everything posted by Dennis Couzin

  1. I made the fadeout test just 64 pixels square, hoping this would alleviate processing stress. Being a non-standard size might have had the opposite effect! The point of my reply to Tyler Purcell's "check out the black and white transitions in this video ... the final Pro Res HQ file looked just like this" is that his final ProRes probably isn't at fault, but rather his playback. One way to check his final ProRes is to play it on something better than his current player + operating system + graphics card + monitor chain. Another way is to verify that his editing system's filters are healthy, so that he knows as he edits that he isn't degrading his picture. Those degradations are usually small, like the 10-bit -> 8-bit -> 10-bit detour mentioned below, and might not be apparent even in a perfect viewing at that stage, but they have cumulative effects and should be avoided where possible. It's not just explicit settings that change how the video Y' range is figured: the legal 64-940 range versus ranges admitting super-whites and/or sub-blacks. Many video filters just do this for you, without any indication. They do it to you. I wrote about some sadly funny examples in another forum.
  2. I use FCP7 and have wondered about the same thing. Why do fadeouts made in FCP7 look jumpy? I let FCP make a "cross dissolve" on a white slug 955 frames long: a fadeout from 100% to 0%. The codec was 10-bit uncompressed, which allowed accurate inspection of frames in the binary file. As far as I checked, in six places, it made a perfect fadeout, with Y'=1019 in the first frame, Y'=1018 in the second frame, and so on, until the last frame, which should have been Y'=64 but was Y'=65 instead. That's a forgivable error. Here it is: 1019-64_dissolve_10bit.mov

     Yet it looks jumpy when I play it back in either FCP7 or QuickTime. The problem is not in the file, so it must be in the player + operating system + graphics card + monitor. Most players are crap. Most monitors are 8-bit.

     You are right to be wary of filters used in video editing. Many reduce your 10-bit video to 8-bit, then do their thing, then pad it back to 10-bit (sketched below). I caught the rather expensive Video Purifier denoiser doing that. I've found many Apple video filters to be amateurish botches.
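
     A minimal NumPy sketch of what that 10-bit -> 8-bit -> 10-bit detour does to a smooth ramp; the divide-by-4 mapping is the standard scaling between the 10-bit legal range (64-940) and the 8-bit legal range (16-235):

         import numpy as np

         ramp10 = np.arange(64, 941)                # the 877 legal 10-bit Y' values
         as8    = np.round(ramp10 / 4).astype(int)  # 10 -> 8 bit: 64..940 becomes 16..235
         back10 = as8 * 4                           # "padded" back to 10-bit

         # only 220 of the original 877 distinct values survive the round trip
         print(np.unique(back10).size, "of", np.unique(ramp10).size)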
  3. All this implies is that even log encodings in 8-bit can't save the image from banding. The problem is with 8-bit itself. It offers 255 steps, while the eye demands about 332 steps across a 1000:1 contrast ratio on a 140 nit monitor image. The ideal coding is the one that makes those 332 steps with no waste. The ideal coding is derived from Hecht (1924) in my 2009 paper. Apologies in advance for the paper's use of the milliLambert unit and for its not mentioning any of the log encodings.
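
     For intuition about where a figure like 332 comes from: under the simplest assumption of a constant Weber fraction w (pure logarithmic vision, which Hecht's data refines), the number of just-distinguishable steps across a contrast ratio C is ln(C)/ln(1+w). The ~2.1% figure below is back-solved for illustration, not taken from the paper:

         import math

         C = 1000.0                  # 1000:1 contrast ratio
         w = C ** (1 / 332) - 1      # Weber fraction implied by 332 steps
         print(f"w = {w:.4f}")       # about 0.021, i.e. ~2.1% per step
         print(math.log(C) / math.log(1 + w))  # recovers ~332 steps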
  4. I wouldn't call it a color space. That term is so overused/misused. S-Log is just how some Sony cameras encode the linear signals from its sensor. The raw, or matrix-manipulated but still linear, RGB signals are massaged with a mathematical function for efficient encoding. The differences between Sony's S-Log, Arri's LogC, Canon-Log, and Kodak's Cineon functions are mathematically puny.

     To understand what this coding does it is best to think at first in monochrome: black&white photography. Suppose a good sensor captures 13 stops of luminance. How did the eye discriminate within those 13 stops? The very popular view -- but thoroughly discredited by Hecht in 1924 -- is that we see logarithmically, so we discriminate an equal number of steps of luminance, like 50, within each stop. So we would like to encode the sensor output in 13×50=650 values. This can be done in 10-bit if we allot an approximately equal number of values per stop. The simplest log encoding codes the bottom output of the sensor as 0; codes the output from 1/50 stop above bottom as 1; codes the output from 2/50 stop above bottom as 2, etc. (see the sketch below). Call the first step the perceptual discrimination encoding step.

     Then at the other end of the image chain, when the 10-bit coding must be displayed, comes a pictorial aesthetic decoding step. The code values will yield luminances again, but it is extremely unlikely that the display mode will cover 13 stops. A B&W photo print covers less than 7 stops. A projected DCP in a very dark theatre will unlikely have more than 9 stops of luminance within any frame. Yet there is the wish to hint more of the original scene's luminance range -- nicely captured by the camera -- into the limited display. This is done by toe and shoulder rolloff. Ansel Adams among others wrote about the importance of toe and shoulder in photography. The toe in the final print comes from the print paper (and its processing). The shoulder in the final print comes from a combination of the toe of the original negative and the shoulder of the print paper (and their processings). The photographic artist is in the darkroom for both steps, and is in control. Tone compression toward the high and low extremes has figured in all pictorial art.

     With high-class video and digital cinema it is possible to keep the two steps separate, so that the camera maker engineers an aesthetically neutral encoding and the artist applies full control to toe and shoulder, among other things, thereafter. (10-bit offers enough code values to put 78 into each stop, so the artist can contrastify any part of the range and still have the necessary 50 for smooth gradation.) Camera makers unfortunately second-guess what should be aesthetic decisions. Thus the proliferation of hardly different log encodings.

     Competent post-people can undo the camera makers' encoding intrusions, returning to "scene referred" images, and work freshly. Admittedly it's more complicated in color, but it is still desirable that camera makers keep their noses out of aesthetic decisions. The one decision they can't avoid is the spectral sensitivities of their RGB sensors. These they must begin publishing, as they proudly publish their customized log encoding functions.
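
     A sketch of that simplest log encoding, using this post's example numbers (13 stops, 50 steps per stop); this is the scheme described above, not any manufacturer's actual curve:

         import math

         def log_encode(L, L_min, steps_per_stop=50):
             """Code value for linear sensor output L (L >= L_min)."""
             return round(steps_per_stop * math.log2(L / L_min))

         L_min = 1.0
         print(log_encode(L_min, L_min))          # 0   (bottom of the sensor's range)
         print(log_encode(2 * L_min, L_min))      # 50  (one stop up)
         print(log_encode(2**13 * L_min, L_min))  # 650 (top of the 13 stops)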
  5. Your question boils down to: if you have two illuminators, and you like the pictures you get by shooting with filter X on the first illuminator and no filter on the second illuminator, then will you also like the pictures you get by shooting with no filter on the first illuminator, filter X on the camera lens, and filter anti-X on the second illuminator? Both ways, the image receives the exact same light from the first illuminator. But the image receives much less light from the second illuminator when it and the camera lens are filtered. So you must increase the second illuminator's output to keep your shot from changing. How much increase depends on what X and anti-X are. It's not enough to speak of green, magenta, anti-green, etc. filters. Filters block light wavelength by wavelength, and how a filter looks doesn't show what the filter does, especially with funny light and a somewhat funny-sensing camera. I found spectral data for the FLD (not FLB), and this is no ordinary slightly pinkish filter. It has a deep drop centered around 545 nm whose purpose is to tame the fluorescent lamp's spike around 545 nm. (Its strong shape belies its weak color -- exactly what you'd expect from a filter able to "correct" a white light for a camera.) It's unlikely you can find a compensatory greenish filter. (The X and anti-X filters stacked on top of each other must amount to a spectrally flat ND filter; see the sketch below.) With the wrong greenish gel on it, the second light will contribute wrong colors to the picture taken through the peculiar pinkish filter.
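
     The stacking condition as runnable arithmetic: T_X(λ) × T_anti-X(λ) must be constant across the spectrum. The curves below are made-up placeholders, and the exercise shows why a true anti-X probably doesn't exist:

         import numpy as np

         wl = np.arange(400, 701, 10)                     # wavelengths, nm
         T_X = 0.8 - 0.5 * np.exp(-((wl - 545) / 15)**2)  # deep notch near 545 nm
         T_anti = 0.4 / T_X                               # exact complement, by construction

         print(np.allclose(T_X * T_anti, 0.4))  # True: the stack is a flat 40% ND
         print(T_anti.max())  # ~1.33, i.e. >100% transmission -- no physical gel can do that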
  6. Yesterday Yanis Varoufakis was made Greek finance minister and I could go to YouTube and see and hear a brilliant speech of his from May 2013. Cultural asset.
  7. Oops! Here is the matrix promised in the previous post:

          0.0093584   0           -0.0000009
         -0.0047567   0.0131727    0.0009423
          0           0            0.0093552
  8. XYZ color space also defines a triangle on the CIE diagram that covers the entire visual range. No simpler virtual primaries can do this than those with chromaticities x=1,y=0; x=0,y=1; x=0,y=0, corresponding to XYZ. I regard ACES as an engineering monstrosity conceived in jealousy over DCI's adoption of XYZ color space. The achievement of ACES was to reduce the red triangle of XYZ space to the blue triangle of ACES space. What is that worth? Bandwidth is scarce in the camera and in the distribution media, but not in the intermediate video processings where ACES is aimed. Is the difference between the red triangle and the blue triangle worth all the transformations back and forth between CIE XYZ space and ACES space larding SMPTE 2065-1-2012?

     Six pages of the 23-page SMPTE 2065-1-2012 are filled with six columns of 7-decimal numbers, all trivially derivable from the CIE tristimulus functions. The document says that the ACES sensitivities r-bar, g-bar, b-bar are linear combinations of the CIE x-bar, y-bar, z-bar, but omits to state the transformation. Here it is:

          0.0127366   0           -0.0000012
         -0.0033790   0.0093583    0.0006692
          0           0            0.0086872

     This little matrix makes those six pages unnecessary, since the CIE functions are widely available (see the sketch below). The bottom row of the matrix shows that ACES b-bar is exactly the same as CIE z-bar, while the top row shows that ACES r-bar is very slightly different from CIE x-bar. It's ACES g-bar that's quite new. CIE y-bar has the beautiful advantage of representing luminance. What does ACES' messy and idiosyncratic "color space" offer?

     CIE XYZ space has been around since 1931. The x-bar, y-bar, z-bar functions for the 2° observer are that old. There are rumblings within the CIE to finally replace them. Were this to happen, two of the three ACES primaries, and all the matrices in SMPTE 2065-1-2012, would become obsolete. But workers in CIE XYZ space would hardly notice the change (just as colorimetrists now switch effortlessly between the CIE 2° and 10° observers).
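
     The claim above as runnable arithmetic (a sketch; the matrix is from this post, and the x-bar, y-bar, z-bar values are the CIE 1931 2° entries at 600 nm):

         import numpy as np

         M = np.array([[ 0.0127366, 0.0,       -0.0000012],
                       [-0.0033790, 0.0093583,  0.0006692],
                       [ 0.0,       0.0,        0.0086872]])

         xyz_bar = np.array([1.0622, 0.6310, 0.0008])  # CIE x,y,z-bar at 600 nm
         r_bar, g_bar, b_bar = M @ xyz_bar
         # bottom row: b_bar is just 0.0086872 * z_bar, as noted above
         print(r_bar, g_bar, b_bar)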
  9. It depends on what DCP projector makers do. Four- and five-primary projectors are inevitable. How can SMPTE make its P3 recommendation stick when home theatres are projecting larger gamuts? This diagram, based on CIE 1976 u'v', gives a better impression of what's missing from P3 than the usual CIE 1931 xy diagram.

     The idea that digital cameras "have" a color gamut, and especially that some fine cameras have extra-large gamuts, is a confusion. Digital cameras have the raw RGB data of their sensors, based on the RGB spectral sensitivities and the dynamic range. The camera makers, or whoever can access the raw data, can milk it to include reproduction of very high saturation colors. It's not like consumer digital cameras don't "see" such colors. Almost every camera's sensitivities include almost the whole visual spectrum, and, except toward violet (like below 460 nm) and toward far red (like above 640 nm), they can distinguish all individually produced wavelengths by their differing R:G:B ratios (see the sketch below). Also they can sometimes distinguish a spectral color, like a 500 nm blue-green, which is way outside everybody's gamut, from a 95% saturated version of that hue, also way outside everybody's gamut. The key to understanding limited camera gamut is in that word "sometimes".

     Actual colors (except from lasers) consist of wide ranges of wavelengths, and unless the RGB spectral sensitivities are very smart, and the milking very appropriate, the camera will reproduce many actual colors poorly. Indeed the camera will render some colors that should look the same as different, and some that should look different as the same; otherwise a 3D LUT could straighten out the camera's color reproduction perfectly. So with every claim of color gamut a camera-maker owes a vouch of color accuracy. "Accuracy" is a tricky notion. A camera may purposely warp the visual color space, since a picture is seen differently from a scene, but warping isn't scrambling.

     The relation of color gamut to high bit depth and low noise is very interesting. The spectral sensitivities of the human eye are fairly simple one-peak curves that are not difficult for a sensor maker to copy. But two of the human eye's three sensitivities are so close together as to make engineering problems for cameras. IBM dared do it with their Pro/S3000 camera from around 2000. How camera makers choose their spectral sensitivities is a dark secret -- but the sensitivities themselves are easily measured and can't be kept secret. In all, the color gamut claimed is as large as the manufacturers dare to milk from their RGB data without embarrassing levels of color error.
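
     A sketch of the "differing R:G:B ratios" point: a camera distinguishes individually produced wavelengths wherever its three sensitivities give distinct ratios. The Gaussian sensitivities below are stand-ins, not any real camera's curves:

         import numpy as np

         wl = np.arange(400, 701, 5)

         def band(center, width=40.0):
             return np.exp(-((wl - center) / width)**2)

         R, G, B = band(600), band(540), band(460)  # stand-in sensitivities

         def rgb_ratio(nm):
             i = np.argmin(np.abs(wl - nm))
             v = np.array([R[i], G[i], B[i]])
             return v / v.sum()                     # normalized R:G:B ratio

         print(rgb_ratio(500))   # a 500 nm blue-green ...
         print(rgb_ratio(520))   # ... separable from 520 nm by its different ratio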
  10. Every cinematographer's kit should include a spectrometer. I used to recommend a Zeiss pocket spectroscope, but hand-held, self-contained spectrometers are now available for just $2000. These instruments show you what's cooking in the suspect "white" light. So you know when two are alike, when filtration might be possible, etc.
  11. The color space in the DCI Specification (version 1.2, March 2008) contains all colors, organized in the very same XYZ space as the CIE defined in 1931. Therefore a DCP, which encodes X'Y'Z' (which become XYZ by applying gamma 2.6; see the sketch below), can include all colors. If you're thinking that ACES was needed for that, look back at DCI. But there are no projectors capable of displaying all colors. So DCI specified a minimum gamut for projection. Therein you see the P3 primaries (bolded). Eventually DCI ceded this specification to SMPTE. In SMPTE's recommended "D-Cinema Quality" RP 431-2:2011, the same P3 primaries appear (bolded):

      There are interesting differences between the two quotes. First, the word "Minimum" has disappeared in the SMPTE version. This might imply that a projector that displays more than the color gamut, i.e., more of what can be encoded in the DCP, is disobeying the SMPTE recommendation. If so, it removes an uncomfortable open-endedness in the DCI Specification, but it also reduces the incentive for improved projectors.

      Second, SMPTE calls the gamut a cube, but how can a cube be defined by just five points? SMPTE knows they are describing the output of a projector with three separately modulated primaries. The sixth point is the sum of the Red and Green primaries; the seventh, the sum of the Red and Blue; the eighth, the sum of the Green and Blue. (SMPTE didn't have to describe the white point because it is the sum of the Red, Green, and Blue.) Eight points can describe a cube. But it's not a cube in XYZ space, because the lines from the black point to the Red, Green, Blue points in XYZ space are not equal in length, nor are they perpendicular. The gamut is some parallelepiped in XYZ space. SMPTE confused the clean and complete XYZ space with their expedient P3 space.

      Third, SMPTE dumped DCI's (and also its own RP 431-2 (2006)) rather outré white point (x=0.3140, y=0.3510, which can be roughly calculated by summing the given primaries in the DCI quote above) for a saner white point, x=0.3190, y=0.3338. Yet the idea of a correct white point for a DCP is dubious. The "natural" white point for XYZ space is x=0.3333, y=0.3333. Many other white points may be chosen.

      The purpose of this long post was just to emphasize the difference between the XYZ color space of the DCP and the P3 space of DCP projectors. A video can be encoded in P3 primaries, but why bother? To save some bits versus XYZ encoding? The P3 will have to be re-encoded as XYZ in the DCP.
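
      A sketch of that gamma 2.6 decode from 12-bit X'Y'Z' code values to linear XYZ. The 52.37 scale is my understanding of the D-Cinema luminance normalization (48 cd/m² peak white); treat it as an assumption and check SMPTE 428-1 for the exact form:

          def decode(cv, bits=12, scale=52.37):
              """12-bit X'Y'Z' code value -> linear tristimulus value."""
              return scale * (cv / (2**bits - 1)) ** 2.6

          print(decode(3960))  # the reference-white Y' code -> about 48 cd/m2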
  12. Someone calling himself Richard Boddington posts a trailer for "Against The Wild", on YouTube, and the first comment shown is from a Richard Boddington: "Available at all US Walmarts." Who's getting free advertising from the great satan and whore of the earth? YouTube is a cultural asset we should not belittle or begrudge. I find information I need on YouTube, including visual information to decide whether to see various directors' films. Google has researched a new video codec for its YouTube streaming. Google makes little money from YouTube relative to YouTube's cultural contribution. YouTube is more than entertainment. Its cultural contribution relies on its liberality. The unavoidable intermingling of entertainment with other contribution results in some commercial grievances. The aggrieved may post their gripes as YouTube videos and see how many views they get.
  13. So true. I cut my teeth in color science while working on photographing paintings. It was wacky how one slide film could handle the reds and oranges in a painting while another slide film handled the greens and browns. Try putting the two slides together! But technology is better today, and the reproduction problem for Technicolor film is fundamentally simpler. Today there are cameras that come close to meeting the Luther Condition -- equivalent to matching the human eye's three spectral sensitivities. This is necessary for photographing paintings, because they're made from a large variety of pigments (and in your case of Klimt, even metal foils). But Technicolor film comprises just three dyes. Therefore almost any camera having known spectral sensitivities can, with the aid of some mathematics (sketched below), photograph/scan Technicolor film perfectly, provided the spectral absorptances of the three unit Technicolor dyes are also known -- they can be. We need a more scientific approach to film restoration -- not colorists spinning knobs. The old film prints are fugitive and fragile, and the digital restorations will soon be all that's left. The scanning and digitization can be done correctly, now, even before there are adequate display technologies.
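
      The "some mathematics" is, at bottom, a linear solve: with only three dyes in the print, each scanner channel's density reading is (to a first approximation, ignoring inter-image effects) a weighted sum of the three dye amounts. The unit-dye matrix below is a made-up placeholder; real restoration needs the measured dye absorptances this post calls for:

          import numpy as np

          # rows: scanner R, G, B densities; columns: per-unit C, M, Y dye
          A = np.array([[1.00, 0.15, 0.10],
                        [0.20, 1.00, 0.12],
                        [0.08, 0.25, 1.00]])

          d_scan = np.array([0.85, 0.60, 0.40])      # measured densities at one pixel
          dye_amounts = np.linalg.solve(A, d_scan)   # recovered C, M, Y dye amounts
          print(dye_amounts)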
  14. @David and Mark, you two are reducing the original question about the look of Technicolor to an experts' game of detecting Technicolor-originated material. Cole's question was about the look: We rarely get to see Technicolor projected, but I'm sure that Technicolor prints can be properly scanned and digitized, and I'm hopeful that these can eventually be projected or viewed on monitors. Today there are gamut issues for digital display of colorful motion picture prints. See how the Kodak 2393 gamut dwarfs even the DCI P3 gamut in this diagram. Monitors and digital projectors must improve by adding at least a fourth primary.
  15. I must disagree with David Mullen's view that optical details contributed significantly to the Technicolor look. Of course it was a complicated system, running three films through two gates in a camera -- see the nice diagram at http://www.digital-intermediate.co.uk/examples/3strip/technicolor.htm -- and then it took a small army of engineers to maintain registration through the dye transfer. But the little artifacts, the worst being mis-registration, which make Technicolor prints detectable to experts, are quite apart from the look of Technicolor prints, which can be recognized by all, the moment they see the picture. That look is from its color. Dye transfer allows a wider choice of dyes, but dye transfer per se does not affect color. It makes no difference that three dyes are intermingled in the Technicolor print rather than layered as in the Eastmancolor print. (OK, it makes a difference to the colors of emulsion scratches.)

      How unsharp was Technicolor's red record? The blue-sensitive and red-sensitive emulsions were in contact (ignoring the thickness of the red coating). The red exposing light did have to pass through the full thickness of the crystal-packed blue-sensitive emulsion before reaching the red-sensitive emulsion, but this is comparable to the red exposing light having to pass through both the blue-sensitive and the green-sensitive emulsions before reaching the red-sensitive emulsion in a tripack color film. Red-layer MTF is typically much lower than blue- or green-layer MTF for all our camera films. Does this contribute to "look"?

      The way cole t parzenn formulated the question -- "What of the look of Process IV came from the three strips and what came from the dye transfer printing?" -- in terms of machinery, obscures the color question and makes it less amenable to modern, electronic cinema considerations. (It also omits the processings of the color separation records, which figured in the looks, I think, of the Russian embodiments.) Today's two questions are: 1. What does it take to restore, by scanning and digitizing, a Technicolor print (including the question of display)? 2. How do we emulate the Technicolor look?

      David makes an important point reminding us that Technicolor was from the era of carbon arc lamp projectors. The different light source can affect projected color far more than just the shift in correlated color temperature -- e.g., colder -- of the white point. For film projectors, it's the spectral power distribution of the light source, not its chromaticity, that determines what colors appear on the screen (see the sketch below). This is completely different from video projection, where it's just the chromaticities of the projector's primaries that determine color. (The spectral reflectance of the screen figures too.) Xenon arcs have a nasty spike at around 470 nm. Who's got the carbon arc SPD? Xenon arc and carbon arc probably project films differently, in different ways for different films (depending on their dyes), and not just with warmer-colder differences to which the eye can adapt.
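
      A sketch of why the SPD, not the chromaticity, governs projected film color: the on-screen tristimulus values are integrals of lamp SPD × print transmittance × the CIE functions. All curves below are crude placeholders:

          import numpy as np

          wl = np.arange(400, 701, 10)
          # crude stand-ins for the CIE 1931 x-bar, y-bar, z-bar functions
          xbar = np.exp(-((wl - 595) / 35)**2) + 0.35 * np.exp(-((wl - 445) / 20)**2)
          ybar = np.exp(-((wl - 555) / 40)**2)
          zbar = 1.7 * np.exp(-((wl - 450) / 20)**2)

          T = np.exp(-((wl - 490) / 40)**2)   # a blue-green print color

          def XYZ(spd):
              light = spd * T
              return np.array([(light * f).sum() for f in (xbar, ybar, zbar)])

          flat  = np.ones(wl.size)                          # flat "white" lamp
          spiky = flat + 4 * np.exp(-((wl - 470) / 6)**2)   # xenon-like 470 nm spike
          print(XYZ(flat))
          print(XYZ(spiky))   # same print, noticeably different projected color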
  16. Sorry, there's an error, besides the spelling error, in that. Color print films have their green-sensitive layer on top. The unhappiest mnemonics of all are those one forgets.
  17. No protest on that. If you'd read my 1991 article on DIY optical adjustment, you'd find no words "collimate" or "collimator" in it. Optical knowledge is to the point and doesn't hide behind fancy words. I thank Carl Looper for correctly dubbing a method in the old article as guerrilla.
  18. One person in L.A. who can help you plumb the question of the "look" of Technicolor movies is Ross Lipman at the UCLA Film & Television Archive.

      Your question about matching orthochromatic and panchromatic emulsions is unclear. Almost every trilayer color film includes non-color-sensitized and color-sensitized emulsions. Color sensitization is independent of tonality, which needs to be matched, and of grain, which perhaps doesn't.

      The Technicolor process produced (YMC) subtractive color prints from three emulsions capturing BGR records, respectively. As such, its "look" was not necessarily different from that of other, integral, subtractive color films. It was different for technical reasons. There are two aspects to a film's look. One is the gamut of colors in the print. You can recognize a gamut without recognizing the things pictured. Gamut is determined by the YMC dyes' spectra and their maximum densities. Technicolor had the advantage of being able to choose among a greater range of YMC dyes than those achievable by chemical coupling in integral color films. The spectral absorptances of Technicolor's YMC dye set should be published somewhere. If not, they wouldn't be hard to measure. (They should be known for restoration purposes.)

      The other aspect of "look" is how the film reproduces, and especially misreproduces, colors. This is determined by the spectral sensitivities of the BGR recording emulsions (as filtered in the taking), the characteristic functions of the emulsions, and the YMC dyes' spectra. The whole kit and caboodle. It will be difficult to figure out Technicolor's effective BGR sensitivities today. Technicolor had no advantage here vs. integral color films (but it is difficult to figure out the effective BGR sensitivities of any color film for which this data wasn't published). Technicolor could adjust its emulsions' characteristic functions during processing, but any well-designed integral color film has already adjusted these. As Frank Brackett of Technicolor memorably said:

      The gamut part of the "look" is easy to determine, but the color reproduction part is difficult.
  19. My old article on DIY optical adjustments describes completely different methods from Mr. Elek's, but there is one point of disagreement: there are reasons not to use maximum aperture.
  20. You can probably accomplish with color infrared film whatever you would accomplish with B&W infrared film. Usually B&W infrared film is shot through a red or an IR-only filter. Shoot IR Ektachrome as usual through the Wratten #12 filter. Then, to simulate IR-only B&W, bypass the images in the yellow and magenta dye layers by printing the color picture onto B&W panchro film through a red filter. To simulate Red + IR B&W, also admit some of the image in the magenta dye layer when printing the color picture onto B&W panchro. It might be simplest to do this by double-exposing the B&W panchro: once through a red filter; once, more weakly, through a green filter. (A digital sketch of the same channel mixing follows.) The spectral sensitivity curves of B&W and color IR films are published in Kodak Publication N-17 (1971). They all quit at around 900 nm. There's a feast of other information about IR photography in Kodak Publication M-28 (1972).
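
      A digital analogue of that printing procedure (a sketch only, not the darkroom method itself): treat a scan of the IR Ektachrome frame as an RGB array and weight the channels the way the red and green printing exposures would:

          import numpy as np

          def simulate_bw_ir(rgb, green_weight=0.0):
              """rgb: float array (h, w, 3) scan of an IR Ektachrome frame.
              green_weight=0 approximates the IR-only look; a small
              green_weight approximates the Red + IR look."""
              r, g = rgb[..., 0], rgb[..., 1]
              return (r + green_weight * g) / (1.0 + green_weight)

          frame = np.random.rand(4, 4, 3)       # placeholder for a real scan
          ir_only = simulate_bw_ir(frame)       # bypasses yellow & magenta layers
          red_ir  = simulate_bw_ir(frame, 0.3)  # admits some of the magenta layer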
  21. Where did the belief that 5231 was excessively contrasty -- a myth, really -- come from? According to the Kodak data sheets, the recommended control gamma for 5231 is 0.65 to 0.70, very average and exactly the same as 5222's. However, 5231 reaches the control gamma with about 4½ minutes of development (D-96, 21°C), whereas 5222 needs about 7 minutes of the same kind of development to reach it. This means the lab must be competent, and not prone to compromise, so that your 5231 and 5222 are each correctly processed. Are bad labs the source of the myth? The characteristic curves in Kodak's 5231 data sheet (H-1-5231, dated 2/99) are badly printed. The graph is stretched too tall, so a hasty glance would suggest that 5231 is a very contrasty film. Could this be the source of the myth?
  22. The characteristic curves for Plus-X (5231) and Double-X (5222) are extremely similar. Kodak data sheets provided these three curves at slightly different gammas. The two for 5231 were slid along the log exposure axis to show how closely they surround the one for 5222. (The curves' tops suggest that 5222 may have a higher D-max and thus greater latitude than 5231.) The differences David and others perceive are probably due to the two stocks' graininesses rather than to their tonalities per se.
  23. Does "contrast" mean the same in these two posts? Macro-contrast has to do with gamma and D-max. It is so greatly adjustable with development of a B&W negative stock that Plus-X can't be termed "too contrasty". If macro-contrast can make Plus-X shot films beautiful, then the beauty must be in the final print (or digital release) which has its own macro-contrast only slightly dependent on Plus-X's? Micro-contrast has to do with acutance. Can it ever be too great for a DI? It is anyhow reducible with image processing. Micro-contrast could also be the "snap" mentioned by Dirk DeJonghe in post #6. It should show up in the MTF curves, but Plus-X's doesn't seem special. I've been out of the game for many years. If there is a new meaning of "contrast" I'm curious to learn it.
  24. I only wonder about the "if you need to really eke out every last bit of color in your grade" statement. Why should DPX be even a whit better than ProRes 444 in terms of color? ProRes 444 is 12-bit and agnostic about encoding, so it can receive Cineon, or another logarithmic encoding, or any three-band encoding you please. If the encoding is far from what it expects, ProRes 444's spatial compression could be poor, but this will affect detail rather than color. Where detail and color collide is with image noise. ProRes 444 won't preserve the film grain as accurately as DPX does, and grain does influence the perception of colors. So the grader might make slightly different grading decisions working from the DPX vs. the identically coded ProRes 444, but this is not a matter of more vs. less available color.
  25. Expressions like "film sharpness", which launched this thread, have an accepted use. Here is Papa Kodak's language in its Vision3 data sheet: For lenses, for incoherent imaging, the MTF contains all the information needed to derive sharpness, acutance, resolving power, etc. For film there are two complications: the image is noisy (grainy), and there are development adjacency effects in addition to the underlying optical MTF. Nevertheless, "sharpness" derived from the film MTF in the same way it is derived from the lens MTF is meaningful (one common recipe is sketched below). Unless you have a serious theory to offer in replacement, why carp at the other guy's semantics?
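
      One common way (an illustration, not necessarily Kodak's method) to derive a single sharpness figure from an MTF curve is to integrate the MTF over a visually weighted frequency band, in the spirit of Granger's SQF. The MTF samples below are placeholders:

          import numpy as np

          freq = np.array([2.5, 5.0, 10.0, 20.0, 40.0, 80.0])   # cycles/mm
          mtf  = np.array([1.0, 0.98, 0.92, 0.78, 0.50, 0.20])  # placeholder MTF

          # integrate over log frequency within an assumed visually important band
          band = (freq >= 5) & (freq <= 40)
          sharpness = np.trapz(mtf[band], np.log(freq[band]))
          print(sharpness)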