
Dennis Couzin

Basic Member
  • Posts

    152
  • Joined

  • Last visited

Profile Information

  • Occupation
    Other
  • Location
    Berlin

Contact Methods

  • Website URL
    http://sites.google.com/site/cinetechinfo/

  1. Oh dear, why are you saying that the Switar RX lenses are suitable for any camera but the Bolex H16RX? That matter was sorted out 40 years ago. The two RX lenses will give lousy images without the 9.5 mm thick glass between them and the image plane. See my old studies at https://sites.google.com/site/cinetechinfo/ , or see another fellow's recent work on this at https://cinetinker.blogspot.com/2014/12/rx-vs-non-rx-lenses.html .
  2. 48 years ago there was much confusion concerning the intercompatibility of standard C-mount lenses on Bolex reflex cameras, quite aside from the obvious problem of some lenses smashing into the prism. Kern, Angenieux, Schneider and others made special RX-mount lenses for those cameras for an optical reason: imaging through a 9.5 mm thick glass prism can introduce profound aberration, so their special RX lenses were made with compensatory aberration. I wrote about this in 1976, 1978, and finally in 1987. I see that https://cinetinker.blogspot.com/2014/12/rx-vs-non-rx-lenses.html discussed my articles, but it links to my Google site "cinetechinfo" which has since vanished. Here's a new link to the 1987 article: https://mediafire.com/file/6rh6f6tth12l625 . In a nutshell: a C-mount lens works well on an RX camera, or an RX-mount lens works well on a C camera, if and only if: (1) the lens is slower than about f/2 or f/2.8, or stopped down this far; AND (2) the lens has a deep-set exit pupil, about 1½ inches or farther into its screw mount. (The sketch below puts this rule in code.)
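     The rule in code, as a minimal Python sketch. The function name and the exact cutoffs are mine, chosen only to restate the in-a-nutshell rule above; nothing here comes from Kern or Bolex:

         def cross_mount_ok(working_f_number, exit_pupil_depth_inches):
             """Rough test for using a C-mount lens on an RX camera, or an
             RX-mount lens on a C camera, per the rule of thumb above."""
             slow_enough = working_f_number >= 2.8        # slower than ~f/2-f/2.8
             pupil_deep = exit_pupil_depth_inches >= 1.5  # ~1.5 in into the mount
             return slow_enough and pupil_deep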
  3. Kino Flo 3200K fluorescents aren't what Kino Flo says they are. Kino Flo publishes a spectral power distribution for its KF32 lamp at http://www.kinoflo.com/GIF%20image/Lamps/Spectral-Charts.gif . Their KF32 graph has gone through some artistic retouching. Its blue and green (mercury line) spikes are at the wrong wavelengths. (They're at nearly correct wavelengths in Kino Flo's KF55 graph.) More deviously, a third, violet spike has been completely disappeared. I had a KF32 lamp measured at a good laboratory. Compare their graph at http://www.mediafire.com/file/1thd5hamyk224ew/kf32.gif to Kino Flo's! That violet spike at 404 nm matters since many, but not all, digital sensors are sensitive there, so it accentuates the differences between cameras (and between lenses). Better to use a UV filter (~Wratten 2A) when using this lamp. Indeed, the Kino Flo KF32 hardly deserves to be called "High CRI". It scores 87.1, but the CRI measure is based on just 8 specimens and is easily gamed by lamp manufacturers. The newer color fidelity index from the CIE and IES is based on 99 specimens; the Kino Flo KF32 scores just 79.2 by that measure. The fidelity is most horrible for greens. Note: all these measures are for visual color rendering. DC
  4. No. No. No. You have misunderstood my comments about color dithering in recent posts #58 and #60 on a parallel strand named "What's the Attraction?". I like that Van Gogh self-portrait, but the discussion in here should stay monochrome. In here, dither is a means to display intermediate tones. We do need more tones than 8-bit can provide, as shown by optimal Hecht coding. But we hardly need more colors (sans tones) than 8-bit can provide. The motivation for color dithering -- pointillism, etc. -- was not precision but expansion. See that other strand.
  5. You missed my point of post #58. The 8-bit depth of the typical screen is already enough to display all colors (within the limitations of the chosen primaries) with nearly full visual precision (meaning human visual precision, as explained in the post). Likewise the mixing of oil paint allowed painters to display all colors, with unlimited precision, within the limits of their chosen pigments. The pointillist painters were aiming at colors unachievable by those uniform mixed paints. Likewise, the new video or film colors that might be producible with color dither would not just be colors that extra bit depth would achieve, but newer ones. They could be outside the limitations of the monitor's primaries, or outside the region of CIE color space achievable with light, or even outside CIE color space, requiring higher dimension.
  6. Carl, behave yourself! I didn't write what you say I wrote and put in quote marks, above. What I wrote, and what you copied above your post and presumably sort-of read, was this: I said that your illustration of noise producing useful dither was not applicable to our technologies, because your noise had large amplitude, unlike film grain noise, and also because you reduced the original 8-bit image to 2 bits. It was a dramatic illustration but irrelevant. You can't produce a corresponding illustration where the noise is film-grain-like and where the output is 8-bit, as all output today is. (Your input can be greater bit depth if you wish.) You've leaped over the quantitative to the qualitative, where your arguments are just amusing, not serious.

     Of course dither provides advantage to our technologies. It allows an 8-bit monitor to show 10-bit video without the banding that usually results from reduction to 8-bit. Apple QuickTime does this on my 8-bit monitor when it plays ProRes, but interestingly it does not do it when it plays uncompressed 10-bit. That dither is created by QuickTime; it is not in the original 10-bit video (which has no grain or noise). That is the meaning of my final sentence quoted just above. When a transform must knock 10-bit down to 8-bit, it has the option of representing the lost 10-bit levels by wee patterns of the neighboring 8-bit levels just above and just below. Example: 10-bit 500 becomes 8-bit 125. 10-bit 504 becomes 8-bit 126. So what does the player do when it encounters 10-bit 501? Since 501 lies a quarter of the way from 500 to 504, it makes a fine pattern of 75% 8-bit 125 and 25% 8-bit 126. It looks pretty good. People may download my banding test clips and see how they play on their systems. (A small sketch of such a transform follows below.)

     With such a transform you didn't need to add 10% noise to the face that was to become 2-bit. You simply used the wrong transform. Video post production is a chain of transforms, rather than a chain of re-imagings as film post production was. Video transforms can be quite intelligent.
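     Here, for concreteness, is a minimal Python sketch of that 10-bit-to-8-bit dithering transform. Random (rather than ordered or error-diffusion) dither is used only for brevity; the function name and parameters are mine, not QuickTime's:

         import numpy as np

         def ten_to_eight_with_dither(img10, rng=None):
             """img10: array of 10-bit values (0..1023). Returns 8-bit (0..255)."""
             if rng is None:
                 rng = np.random.default_rng(0)
             scaled = img10 / 4.0              # e.g. 10-bit 501 -> 125.25
             base = np.floor(scaled)           # lower 8-bit level (125)
             frac = scaled - base              # fraction toward the upper level (0.25)
             # Promote each pixel to the upper level with probability frac, so
             # ~25% of the "501" pixels become 126 and ~75% stay 125.
             out = base + (rng.random(img10.shape) < frac)
             return np.clip(out, 0, 255).astype(np.uint8)

         flat = np.full((256, 256), 501)       # a flat field of 10-bit 501
         print(ten_to_eight_with_dither(flat).mean())   # averages ~125.25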
  7. If it's true it should be explainable. If we mean by colors those that fill extended areas without variation, then video is capable of the finest color precision. I think a good 10-bit monitor can display all colors within its gamut and its luminance range with perfect precision, and a good video camera can produce the file for that display. There are infinitely fine gradations possible in X,Y,Z, which are numbers, but not in what X,Y,Z measure, which are colors. For two colors to be two, our visual system must be able to distinguish them, and our visual system is not very precise. A famous illustration of the imprecision is the "MacAdam ellipses" diagram. Talk about 8-bit-per-channel video representing 16 million colors is belied by there not being 16 million colors, or at least not those 16 million colors.

     Consider a simple example. Make our white at x=y=0.333 and give it X=Y=Z=1. Now quantize each of X,Y,Z linearly into 1000 levels (almost 10-bit). Now, while adapted to white, look at all the billion other combinations like (0.123, 0.456, 0.789). Are there any colors missing? Use CIE L*a*b* space to measure how far apart our digitized colors are from their nearest neighbors. For example, how close is (0.123, 0.456, 0.789) to (0.123, 0.456, 0.790)? Delta E = 0.08, so those two colors can't be distinguished. But how close is (0.123, 0.456, 0.789) to (0.124, 0.456, 0.789)? Delta E = 0.67. So those two colors might be distinguished, and we need a somewhat finer net than this almost-10-bit XYZ net. But XYZ space includes much more than the whole visual gamut, while typical RGB spaces include much less, so our 10-bit RGB monitor should, as suggested, be precise enough to display all colors. It is a simple programming exercise to find out, if you trust the CIE L*a*b* metric. (A sketch of the exercise follows below.)

     There are two other ways to interpret what Carl says colorists know. One is that this is about extracting more colors from the film than from the video, not about there being more colors representable. Yeah, color film is a grainy mess, and a high-resolution scan will yield quite different colors at each pixel within a "grey" patch. There will be color controls, such as "saturate", that will extract new colors from this "grey" patch even though grey shouldn't respond to "saturate". These color extractions might have no relation to what was pictured, but they are remarkable. I did color grading of films, including 8 mm, in the film days, and now do color grading of videos, and I've never had the experience of "extracting" or "pulling" colors from a picture. That's because it's an experience that comes only from using video color grading methods (which include, e.g., the "saturate" control) on color films. It's a medium-crossover effect. Enjoy it while you can.

     Another interpretation is more fundamental. It is that color might be more than uniform X,Y,Z on areas can measure. Can spatial variation of color create new colors? Does everyone with eyes, not just the colorists, see colors in super-8 films which they don't see in video? A better example is that of the pointillist painters, who claimed to be painting new colors. It's not hard to imagine color irregularity modifying color, or even pushing it to where untextured color can't go. (I'd like to read the eminent perceptual psychologist Floyd Ratliff's book on Signac.) Are the modifications or pushes along the usual three color dimensions, or do they introduce new dimensions of color?
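     The programming exercise, sketched in Python. The L*a*b* conversion below keeps only the cube-root branch of the CIE nonlinearity, since these sample values stay above the 0.008856 knee:

         import numpy as np

         def xyz_to_lab(X, Y, Z, Xn=1.0, Yn=1.0, Zn=1.0):
             # CIE L*a*b* with white at X=Y=Z=1 (cube-root branch only)
             fx, fy, fz = np.cbrt(X / Xn), np.cbrt(Y / Yn), np.cbrt(Z / Zn)
             return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

         def delta_e(c1, c2):
             # CIE76 color difference between two XYZ triples
             (L1, a1, b1), (L2, a2, b2) = xyz_to_lab(*c1), xyz_to_lab(*c2)
             return ((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5

         step = 0.001   # 1000 linear levels of each of X, Y, Z
         print(delta_e((0.123, 0.456, 0.789), (0.123, 0.456, 0.789 + step)))  # ~0.08
         print(delta_e((0.123, 0.456, 0.789), (0.123 + step, 0.456, 0.789)))  # ~0.67

     Running the same comparison over the full billion-combination net (or over an RGB net) is the straightforward extension.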
  8. I can't leave this strand without saying that I am unconvinced by all Carl's arguments re film grain. 1. He emphasizes a single case where image noise provides an information advantage: when a high-bit-depth image is reduced to low bit depth, whence the noise introduces dither. That advantage is not reaped in our technologies, where the film grain noise is of low modulation (as seen or optically recorded) and where there are no dramatic transforms in bit depth. Anyhow, our transform could itself introduce dither. 2. Information advantage is not at all what film grain provides to stills or to movies. Gestalt psychology (and the allied aesthetics) studies what it provides; information theory doesn't.

     To Greg: eye saccades are not tiny sub-sensor jitters, but of several degrees (as explained back in post #43).

     Concerning the camera mimicking the eye. The original "camera obscura" mimicked the eye in two important ways. It used a lens to make a real 2-dimensional image from the 3-dimensional scene outside. And it was enclosed to prevent the scene's light from falling on the image except via the lens. Few further developments in cameras relate to the eye. An iris was added to the lens to adjust the image brightness on the receiving medium, while incidentally modifying the depth of field. The eye is shutterless. The eye focuses differently from cameras. Color films and color video mimic the eye insofar as they incorporate sensors with three different spectral sensitivities. Video color is more eye-like when it recodes the three channels as YUV; the now-accepted opponent-process theory of color vision works similarly. Video cameras are altogether more eye-like insofar as their sensor plane issues signals to some "higher" functioning something -- a computer. Film functions dumbly by directly turning the sensor into the displayer. This is what prevents a reversal color film from making correct colors. (Maxwell figured that out around 1855.) Video can, in theory, make correct colors.
  9. Take care when choosing non-RX lenses for the Bolex RX camera. Bolex disseminated a rule of thumb: lenses up to 50 mm should be RX; lenses longer than 50 mm need not be RX. The rule of thumb was FALSE. I first challenged it in 1976-78, and returned to the matter (with more optics under my belt) in 1987. Others have confirmed my findings, most recently this by Dom Jaeger. Unfortunately, as Bolexes become collectors' items, the original Bolex literature becomes holy text.

     When buying old RX lenses, beware that either the focus mount grease chosen by Kern, or some lubrication around the iris, tends over time to transfer to the two lens surfaces facing the iris. Cleaning those two surfaces requires minimal disassembly of the lens.

     Concerning the dim viewfinder: the diagram of the optics shows internal reflections off three 45° prism faces. Those purposely uncoated faces must be scrupulously clean to avoid reflection failure. Many years' outgassing of the camera's paint and lubricants might cloud them.
  10. Agreed that the graininess in film is entangled with the image. I described this back in post #42 as the grain noise being convolutional (as opposed to additive or multiplicative) with the image. There is no reason this convolution can't be performed ex post facto on the video image, provided the video image has sufficiently higher resolution than the sought film grain simulation (as explained back in post #36). How much higher resolution to start with, or alternatively how much resolution must be lost, in order to obtain how good a simulation of film grain, is the engineering problem that should concern us. It's the usual balancing act in which the "how much" and the "how good" must be quantified. (A rough sketch of such an ex post facto graining follows below.)
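     A rough Python sketch, using a deliberately crude grain model (random developed/undeveloped sites followed by optical softening). Everything here is illustrative, not a production graining algorithm:

         import numpy as np
         from scipy.ndimage import gaussian_filter, zoom

         def grain_ex_post_facto(frame_hires, softening_sigma=1.5, downscale=4, seed=0):
             """frame_hires: 2-D array of linear-light values in [0, 1], at
             several times the delivery resolution."""
             rng = np.random.default_rng(seed)
             # Grain formation depends on the image itself (convolutional, not
             # additive): each high-res site develops with probability equal to
             # the local exposure, so tone is carried by grain density alone.
             developed = (rng.random(frame_hires.shape) < frame_hires).astype(float)
             # Optical softening: neither printer lens nor eye resolves single grains.
             softened = gaussian_filter(developed, softening_sigma)
             # Resample down to the delivery resolution.
             return zoom(softened, 1.0 / downscale, order=1)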
  11. I fail to see what this example proves. No proof is needed that film is analog. Repositioning a silver blob partly in a pixel area continuously varies the lightfall in the pixel area. Size, as well as position, of the developed film grain is analog, so size can make the continuous variation of lightfall too. These continuous variations say nothing about any kind of precision the film may have. They only say that the pixel sensor needs infinite precision to record the lightfall. But the pixel sensor needed infinite precision to record the original optical lightfall, and on good basis we are satisfied with 10- or 12-bit recording. (See my "Hecht and digital video" (2009).) What reason is there for the pixel sensor to record the film grain's affected lightfall with such precision, since the film grain does not accurately represent the optical lightfall that matters?

     Measurement or recording can be more or less precise. Noisy instruments are imprecise whether analog -- continuously variable -- or not. The silver blob in the example does not record the optical lightfall with any precision. Actually many blobs are required, and only then can we select an aperture, like Kodak's 48 micron aperture, to move over the collection of grains and through which to measure the circumscribed lightfall. The precision of the grainy field is determined by how little the lightfall changes while moving the aperture. Fine repositioning of individual grains within the ensemble hardly affects this. The precision is something like ±0.009 around density 1. That's a 4% range in transmissions, so an 8-bit linear sensor is sufficient, or fewer than 8 bits after gamma encoding. (The arithmetic is worked out below.)

     I fail to see how any of Carl's bit-depth or quantisation arguments apply to digital picturing as it is now practiced. There are many special questions, like "how to fake the appearance of film grain in video?" and "what are the optical and resolution requirements for preserving film grain quality in scanning?" and "how effectively can spatially dithered 8-bit video convey 10-bit video?", that need serious attention.
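     The density-to-transmission arithmetic, worked out in Python:

         # RMS granularity 0.009 around density 1.0, as a transmission range
         density, sigma = 1.0, 0.009
         t_lo = 10 ** -(density + sigma)    # ~0.0977
         t_hi = 10 ** -(density - sigma)    # ~0.1021
         span = (t_hi - t_lo) / 10 ** -density
         print(f"transmission range: {span:.1%}")   # ~4.1% around T = 0.1
         # One 8-bit linear code step near T = 0.1 (code ~26 of 255) is ~4% of
         # the local signal, comparable to the whole +/-1 sigma spread, hence
         # the sufficiency of an 8-bit linear sensor.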
  12. Crazy thread, but Carl understood the point of wiggling the sensor, which Simon didn't. Of course film frames should be perfectly registered. But the grains in the frames are not perfectly registered; they jump like crazy. The reason for wiggling the sensor is not to wiggle the image but only to wiggle the positions of the pixels which capture the image -- that is, to make the video pixels somewhat more like film grains.

     My Jumpy Pixel Experiment of 2009 involved jogging the sensor around by a random 0/10, 1/10, 2/10, ... 9/10 of a pixel horizontally and vertically, frame by frame, and then jogging the projector's DLP chip around by a compensatory amount, so the image stayed put on the screen while the pixels comprising the image jump about. Look at one corner of each of these six pictures to understand what was done. (A toy simulation of the scheme follows below.) Actually, observers of this experiment should not know what was done to the images, lest their responses be biased by their beliefs. Non-film people, including children, have been the best observers.

     Links to the experiment were in post #16, but I've since added an option. First read the readme here. Then download either the "None" version or the "ProRes" version.
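     A toy Python re-creation of the jumpy-pixel scheme, simulated at 10x supersampling so the 1/10-pixel jogs become integer shifts. Names and parameters are illustrative; this is not the original experiment's code:

         import numpy as np

         def jumpy_pixel_frame(scene_hi, rng):
             """scene_hi: high-res scene, 10x the sensor resolution per axis.
             Returns one displayed frame: the image stays put while the pixel
             lattice that sampled it has jumped."""
             dy, dx = rng.integers(0, 10, size=2)   # sensor jog, in tenths of a pixel
             shifted = np.roll(scene_hi, (-dy, -dx), axis=(0, 1))
             # Sensor: average each 10x10 block of the scene into one pixel.
             h, w = shifted.shape[0] // 10, shifted.shape[1] // 10
             sensed = shifted[:h * 10, :w * 10].reshape(h, 10, w, 10).mean(axis=(1, 3))
             # Projector: scale back up and apply the compensatory jog so the
             # image lands where it started.
             displayed = np.repeat(np.repeat(sensed, 10, axis=0), 10, axis=1)
             return np.roll(displayed, (dy, dx), axis=(0, 1))

         rng = np.random.default_rng(1)
         scene = np.random.default_rng(2).random((480, 640))   # stand-in scene
         frames = [jumpy_pixel_frame(scene, rng) for _ in range(24)]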
  13. Carl Looper's quartet of pictures deserves our attention:

     - original
     - original to 4 colours
     - original + 10% noise
     - original + 10% noise to 4 colours

     He means the example to show that "the grain doesn't do anything other than mediate the transfer of tone between one bandwidth and another." I was surprised by how he exemplified reduction of bandwidth: by transforming the 8-bit images on the left to 2-bit images on the right. In our video context, where images are always 8-bit, 10-bit or 12-bit, I was expecting reduction of bandwidth to be in the spatial domain -- either by softening the image through loss of high spatial frequencies, or by resampling at lower resolution. So our first request of Carl should be for examples where 8-bit remains 8-bit.

     But allowing that he has reduced bandwidth by reducing bit depth, we should wonder whether the noise in Carl's example is similar enough to film grain noise to sustain the example. If his lower-left image had film grain noise, the example actually wouldn't work. It works because the noise involves such large modulation that the far-apart levels in the 2-bit images are reached, achieving the dithering. Actual grainy film involves much smaller modulation. Where the 7276 (my favorite stock) data sheet says RMS granularity = 9, this means that the microdensitometer, set up to emulate human visual acuity at 12x magnification, finds a density-1 field varying ±0.009 rms (i.e., density 1 ± 0.009). This puny variation cannot even register in an 8-bit image, much less traverse 2-bit levels. So reducing bandwidth by reducing bit depth was a poor example. (A quick check of this modulation argument follows below.)

     Grand flourishes like "adding noise does not affect the image" don't get us far. There was much brilliant work done on image noise starting in the 1950s by Rose, Fellgett, Zweig, and many others. Did any of these scientists rely on grand flourishes? There's a nice picture of different models of film grain on page 75 of "Image Science" by Dainty & Shaw. (I must have been remembering that picture when I described models, but overlooked the optical softening step.) Kodak printed a small pamphlet, "Understanding Graininess and Granularity", and there's discussion in Campbell's "Photographic Theory for the Motion Picture Cameraman" easily accessible to cinematographers. We should demand more attention to the particulars of film grain. It's noise, but it's our special noise, with its special character revealed in its Wiener spectrum or by how it looks.
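     A quick Python check of the modulation argument: large noise dithers a 2-bit quantisation, film-like noise does not. The percentages follow the post (10% versus roughly 1% of full scale); a synthetic ramp stands in for Carl's portrait:

         import numpy as np

         def quantise_2bit(img):
             return np.round(img * 3) / 3          # four levels: 0, 1/3, 2/3, 1

         rng = np.random.default_rng(0)
         ramp = np.tile(np.linspace(0, 1, 512), (64, 1))   # smooth tonal ramp

         for label, sigma in [("10% noise", 0.10), ("film-like ~1% noise", 0.01)]:
             noisy = np.clip(ramp + rng.normal(0, sigma, ramp.shape), 0, 1)
             # Average the quantised columns: does dithering recover the ramp?
             err = np.abs(quantise_2bit(noisy).mean(axis=0) - ramp[0]).mean()
             print(f"{label}: mean tonal error after 2-bit step = {err:.3f}")
         # The 10% noise pushes pixels across the 1/3-wide quantisation steps,
         # so averaging recovers much of the ramp; the ~1% noise never reaches
         # a neighbouring level, and the banding survives intact.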
  14. It is easy to keep metaphysicians busy. Ask them: What is the relation of "material embodiments" to their "images"? What makes material thing A the embodiment of image B? Two different material things can be the embodiments of the same image. Can one material thing be the embodiment of two images? Can there be the "zero" embodiment? Can there be a minimal embodiment? A maximal embodiment? Images have attributes. Of what kinds? Images have attributes that may or may not be visible in the image's embodiments. "An image is not to be found in its material embodiment." Is it perhaps to be found in the totality of all its material embodiments? Is it to be found some other way, or in no way? When we "treat the image as an objective reality independent of its material embodiment", what kind of independence is this?

     Carl is "proud to be metaphysical". Oh-oh: being metaphysical is not the same as being a metaphysician. Maybe these questions will not keep him busy. Who has slaughtered the word "theory"? And the saddest thing is that it contributes nothing to either the art or science of cinema.
  15. The print itself is not the image? The print is what painters call the ground? Carl, here is not where to pursue the metaphysics of depiction. No real problems are solved by philosophy anymore.

     The grainy photographic image is akin to a half-tone, except at too small a scale to be resolved by photographic lenses or by the viewing eye, so instead of grains one sees a grainy field: variegated grey where the light image was a smooth grey, rough edges instead of smoothly falling ones. Graininess is an operation on the light image that we can model. The dot screen model is admittedly too crude a model.

     It simply isn't true that "once [the image is] embodied in material form it's too late to do anything with the signal". Material form is one or another lossy encoding. What's lost is lost, but a new signal can be reconstructed from what remains. Some things that can be done with the original signal come out exactly the same when done with the reconstructed signal; some other things not. Becoming grainy is a lossy operation in itself. Cine optical printing exemplifies repeated applications of the graininess operation: an optical image of one grain pattern is impressed upon a second grain pattern, then an optical image of that thing is impressed upon a third grain pattern. The resultant of several steps can still be called a grain pattern. (A sketch of this repeated operation follows below.)

     So what happens when a video original undergoes the "becoming grainy" operation? Some grainy image will result. What are the limitations on the qualities of that grainy image? (In the worst case the operation could consist of modeling the out-of-focus video cast onto photographic film and reimaged in very high resolution video.)
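     A Python sketch of the repeated graininess operation, reusing the crude random-dot grain model from the earlier sketch: each generation exposes a fresh "stock" to the previous result, then softens it optically. Purely illustrative:

         import numpy as np
         from scipy.ndimage import gaussian_filter

         def grain_generation(image, rng, sigma=1.5):
             """One printing generation: expose a random-dot stock to the
             image (values in [0, 1]), then soften as a printer lens would."""
             exposed = (rng.random(image.shape) < image).astype(float)
             return gaussian_filter(exposed, sigma)

         rng = np.random.default_rng(0)
         frame = np.full((512, 512), 0.5)      # a flat mid-grey "scene"
         for generation in range(3):           # e.g. negative -> interpos -> interneg
             frame = grain_generation(frame, rng)
             print(f"generation {generation + 1}: rms deviation {frame.std():.3f}")
         # Mean tone survives each pass, but every generation lays its own
         # granularity on top of what remains of the earlier ones.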