Dennis Couzin

Everything posted by Dennis Couzin

  1. You don't look like such an old school teacher as to predate MTF -- Otto Schade's work of the 1950s. The general motive behind measuring film MTF is that it can be combined, by simple multiplication, with lens MTF. Development adjacency effects somewhat complicate matters, but with MTF we have a single concept underlying sharpness, acutance, resolving power, etc. for both lens and film.
  2. No. No. It's not that simple. Film unsharpness is largely due to photons bouncing around in emulsions before they reach the grain where they make latent image. So emulsion thickness is the major sharpness factor rather than emulsion count. (Also realize that even B&W emulsions are multilayer, but let's ignore this so as not to overly complicate the explanation.) Each of the three emulsions for color films can be much thinner than B&W emulsions producing similar density because the former make density from dyes produced in proportion to developed silver while the latter make density from developed silver itself. That proportion can be high, so the undeveloped color film emulsion need not contain much silver halide, and can thus be thinner. You didn't think color film stock contained 3× as much silver as B&W film stock, did you? The sum of the separate color-sensitive emulsion thicknesses of color film stocks is comparable to the emulsion thickness of B&W film stocks. In the color films it's the photons that will expose the red sensitive layer, in the bottom position, that suffer the worst diffusion. So the MTF for the red sensitive layer is by far the worst of the three in the color film and generally worse than for a B&W film of similar EI. Since red light contributes significantly (like 23%) to luminance in the final (projected) image, that unsharp red sensitive layer in the original film is a defect in tri-layer color film design. For color print films, which can be much slower, the design is different, with the red sensitive layer on top, followed by the green sensitive layer, and the blue sensitive layer, which will hardly contribute to final luminance, on the bottom. (Unhappy mnemonic: when you hand clean a color print your cloth picks up some cyan color, whereas when you hand clean a color original your cloth picks up some yellow color.) Unfortunately that optically improved design was impossible for faster color films.
  3. Small correction to the previous post. Better to weight the curves 23%, 69%, 8% (R, G, B respectively).
  4. I have to agree with Dirk on this. Flip through Kodak data sheets and compare the MTF curves for color negative and B&W negative stocks with similar EI. MTF out to 60 c/mm is a common criterion for sharpness for 16mm films. For the color stocks there are separate MTF curves for the three layers. I believe it is fair to weight the curves 26%, 64%, 10% for the red sensitive, green sensitive, and blue sensitive layers respectively (a small weighting sketch appears at the end of this page). Vision2 100T color negative bests Plus-X negative in this comparison. I was surprised by this, and don't know the explanation. The MTFs that Kodak measures include development-adjacency effects, evidenced by MTF values over 100%, and those of the color negative films are at least as great as Plus-X's and Double-X's. The red sensitive layer shows low MTF at least in part because it is buried under the other layers and the test exposure is at the "image plane", probably the film front surface. The distance of the red sensitive layer from the front further weakens the net MTF during contact printing. But optical printing can focus inside the emulsion pack (see here).
  5. There is an optical reason not to use filters in front of wide angle lenses. The light from the wide parts of the scene traverses the filter at an angle, changing its color. (For a filter behind the wide angle lens, the light leaving the lens toward the frame corner generally makes a much smaller angle.) A 9 mm lens on the 16 mm format isn't very wide, so you can probably get away with it. Here's what happens. The light rays from the corners of the scene reach the filter at 35°, measured from the normal. They bend at the glass and then pass through the filter at a 22° angle. This means the density of the filter is increased by 8% (for those rays from the corner of the scene). If it's an ND 1.0 filter, it becomes an ND 1.08 filter for the corners. Moreover, the effect is wavelength-by-wavelength, so if it's a #85 filter, it is about a third of the way from #85 to #85B (plus a tad of ND) for the corners. The front filter effect becomes more serious with "really wide-angle" lenses like the Angenieux 5.9 mm, causing density to increase by 14% for the corner, or the Century 3.5 mm, causing density to increase by 22% for the corner. (A short worked sketch of this calculation appears at the end of this page.) Then the reflections at the filter must also be considered.
  6. When using "some motion conversion processes", as David mentioned, the 50p-to-25p conversion is not a discarding of every second frame. Twixtor, for example, analyzes the 50p the same whether it will then process it to 1000p, 100p, 50p, 33.333p, 25p etc. When Twixtor adds "motion blur" it simulates a longer shutter opening by means of interpolated frames. For example, I recently received 50p footage made by an amateur camera that chose to greatly shorten exposure times because of the high scene illumination; @#$%&! As a result, choosing every other frame yielded 25p with such a small effective shutter angle that simple walking action looked choppy at 25 fps. Twixtor, in effect, first converted the 50 fps footage to a large N fps. Then it summed groups of N/50 consecutive frames, creating an ersatz 50p shot with 1/50 sec exposure. Then, discarding every other frame yielded a 25p as if shot with 1/50 sec exposure. That walking action looked OK at 25 fps. (A crude sketch of this summing idea appears at the end of this page.)
  7. Allow me to offer some scientific opinions which may or may not correspond to current practices. When making a video intended for TV or other BT.709 standardized display, it should be graded on a BT.709 display. No colors outside the BT.709 gamut, no super-whites or sub-blacks, should be included in the final color judgement. The grader might want to peek at these, in order to see what's available "in reserve" in the image, but should then turn the untapped reserves off. When making a video intended for DCP release, it should be graded on a DCI P3 compliant display. The P3 gamut is significantly larger than BT.709. The CIE xy diagram on which gamuts are often illustrated greatly distorts chromaticity differences. The CIE u'v' diagram at this link is less distorting. It shows that very many hues have significantly higher saturation in P3 than in BT.709. The grader can decide whether and how the DCP can make use of the extra colors by seeing them displayed. (On this diagram, NEC PA301W happens to be my monitor. It can display the BT.709 gamut nicely but not all of P3. Kodak 2393 is Vision Premier Color Print Film. Its dominance over P3 is sobering. No tri-color (additive) display can completely enclose this print film's gamut.) If a video is intended both for BT.709 and DCP display, you can do two separate gradings. Or you can dare a compromise grading. Or you can do a BT.709 grading and restrict the DCP to that. *** The question must be asked: which cameras are suitable for BT.709, P3, or whatever display? This has nothing to do with log vs. power-gamma encoding. It has very little to do with the dynamic range of the camera. It depends almost entirely on the spectral sensitivities of the camera's R,G,B sensors. Camera makers never publish those sensitivities, which is silly because a decently equipped optical lab can easily measure them. There are groups at RIT (Rochester) and at the University of Tokyo busy measuring cameras. They or others will get to the Alexa, et al., eventually. From whom are the camera manufacturers hiding their spectral sensitivities? Not from competing manufacturers, who all have the optical labs. Color profiling matrices and the application of 2D and 3D LUTs can only go so far toward making a camera with inappropriate spectral sensitivities into one suitable for a large gamut display. On the other hand, a cheap camera could, in theory, allow beautiful color reproduction on the "full" gamut. Even if a camera immediately applies BT.709 precorrection to the RGB channels and then transforms to BT.709 Y'CbCr, that Y'CbCr can contain color information over the full gamut. There is an ongoing game of second-guessing between advanced camera manufacturers and advanced color graders. Arri, for example, says that the RGB records from its Alexa, after linearization, drive certain virtual primaries. The virtual primaries form a gamut and the implication is that color reproduction is hunky dory on (the real part of) that gamut. This is false. Those three virtual primaries correspond to no three spectral sensitivities. Arri is only saying that assuming those three primaries gives the best overall color reproduction with the camera's spectral sensitivities. But what Arri finds best overall need not be what the grader finds best overall. Hence all manner of 3D LUTs must be invented for the grading, and the suitable gamut will not be anything like Arri's implied gamut.
The ACES concept, which boils down to evaluating camera spectral sensitivities by how near they can come to reproducing the original scene colors, is naive. It is a rewarming of the "Luther Condition" first published in 1927. Color reproduction is for a limited gamut, and veridicality is anyhow not the measure, since movies are seen in dark theaters, etc. I don't know the answer to the question in bold. Such questions will become simpler when digital camera manufacturers publish their spectral sensitivities (the way film manufacturers always have).
  8. Rent a high quality 200 degree fisheye lens. Mount it on a still camera with high resolution sensor. Aim it at the sky and take one picture. From this you can extract the 360 degree pan over and over. The extraction requires undistorting sectors of the image. Mount a (front surface) mirror at a 45 degree tilt onto your video camera's lens. The mount must include a ball-race so the direction of the 90 degree bounce can go any way. Aim the camera at the sky and spin the mirror device. This is exactly the 360 degree pan. If your video camera is small, just mount it on a phonograph turntable. Bring both to the cemetery. Either spin the turntable or, if you can use a 360 degree pan taking 1.8 seconds, turn the turntable on (using batteries and an inverter).
  9. To wind length L onto a spool (or core) count the turns on your manual rewind. The equation is L = pi*(t*(N^2) + d*N). pi is pi. t is the film thickness. N is the number of turns of the spool -- you must figure in the rewind gearing because you will be counting turns of the crank. d is the diameter of the spool or core you're loading. (A short sketch using this equation appears at the end of this page.) Since you are just splitting a 400' core roll in half you can skip the micrometer measurement and the calculation. Wind the 400' onto another same-size core until the two rolls are exactly equal by feel. There cut the film. Then load each 200' core roll onto 200' daylight spools, counting your turns. The counts should come out the same. If they differ a bit use the average count the next time you make 200' daylight spool loads using that kind of film. (Different kinds of film have significantly different thicknesses.) Realize that a 400' core roll is just 400' while a 200' daylight spool load, when bought as such, is longer than 200'. As I recall, Kodak gave 107' on their 100' daylight spools. Daylight loading means waste at both ends. E.g., you can only make 11 generous 100' daylight spool loads from a 1200' core load. You can use cores with a core adapter in place of daylight spools if you plan to load and unload the magazine in a changing bag.
  10. As I understand you, Zeiss is assembling f/1.4 lenses, excepting their irises, and then selecting some of them to become f/1.4 cine lenses, some to become f/2 cine lenses, and some to become f/1.4 still lenses. Ask your contact at Zeiss how this selection process works. For example: Those that perform best at f/1.4 become the f/1.4 cine lenses. Of the others, those that perform best at f/2 become the f/2 cine lenses. The rest become f/1.4 still lenses. In this scenario the f/1.4 cine lenses are probably a bit better at f/2 than the f/2 cine lenses, and my reference to "owners of losers" applies. Try a little design exercise: a one element lens for one wavelength, for a certain glass index, using just spherical surfaces, for a certain focal length, for on-axis performance. When designing the lens for f/1.4 you try to optimize the image quality at f/1.4. When designing the lens for f/2 you try to optimize the image quality at f/2. Use a good sophisticated measure of image quality. You find that your f/2 design comes out different from your f/1.4 design -- different curvatures, different thicknesses -- not just a shaved-down or masked-down version. And you find your f/2 design is a little sharper at f/2 than your f/1.4 design is at f/2. The heuristic exercise is limited to spherical aberration, and it's a big extrapolation to the two Zeiss cine lenses, but it's scary and sad if Zeiss has chosen a "simplest, cheapest way" of making their f/2 cine lenses.
  11. What do you mean by "a crazy enlargement of grain"? Is the number "2" crazy? Since the Spirit is a continuous scanner, each of the four R8mm images within the 16mm frame will be imaged alike. To the extent that the scanner lens is sharper for the center of the 16mm frame than at its edges, each of the four images will show an asymmetrical falloff, but the same for each.
  12. Phil Rhodes: "...the tracking is via the dots, which look retroreflective ..." Fascinating! I didn't know of this technology. Is it appropriate for faces? Two fixed video cameras, each with its nearby light source, can be shooting a retroreflective dot while the 3D location of that dot is computed by triangulation. The dot is made retroreflective so a camera with a nearby light source sees it as distinctly bright. The retroreflective dot can be a tiny sphere of very high refractive index glass, so it retroreflects independently of angle. With retroreflective dots and separate tracking cameras I don't see what's gained by their working in the infrared. They can use a weak white light that doesn't affect the principal photography. The sample video doesn't show so much apparatus, so it is probably doing something simpler, and cruder, than full triangulation. My main question is about placing retroreflective dots on a face if the goal is to include photographs of the normal face. There are about 20 retroreflective dots on the face. Sometimes they appear bright, when there is a light source near the camera, and other times they appear dark like freckles. But they always appear. Any way you build a retroreflector it has its diffuse reflectance too, and it won't perfectly match skin as the face moves around. Can the appearances of the dots on the face be eliminated by image processing? I wonder if an infrared dot method couldn't do better than a retroreflective dot method. The infrared dot is a spot of color that the infrared cameras can see but the principal camera can't. For example, suppose we had a dye that works as a short-pass filter, transmitting fully at wavelengths 400nm-650nm while fully absorbing wavelengths beyond 750nm. A spot of that dye in transparent carrier on the skin will be black to the infrared camera while the principal camera will see the skin as it is.
  13. She moves slowly and the frame rate can be slow. If the green lines are what are used to track her face they can be projected onto her face every other frame. The closeups during 1:08 - 1:15 show how well the computed net follows her face. The technology is fast, but kinda crude.
  14. The small brass mounted lens is not a "collector's item". It has no maker's name engraved. It comes from an era of factory produced lenses and its slots indicate that it is a part from a larger assembly which may or may not have been a photographic lens. People who service old lenses have boxfuls of such parts salvaged from irreparables. Sentimental value approx. 1 Euro.
  15. The original post compared an f/2 CP.2 with a "wider aperture" ZF.2. The Matthew Duclos blog compared the f/2 85mm CP.2 with the f/1.4 85mm ZF.2. Dom Jaeger mentions the Super Speed f/1.4 85mm CP.2, and links a picture, and yes, it is possible that the f/2 85mm CP.2 has a much larger front element in it than shows. Like the earlier mentioned possibility of a shaved down front element, the question is: who has verified it? Masking a front element and reducing maximum aperture by a full stop would reflect poorly on Zeiss manufacturing. It would appear that they're building the f/1.4 lens, testing it at full aperture, and the losers are put in the f/2 bin for the restricted mounting. That kind of trial-and-error manufacturing existed many years ago when lenses were sometimes binned f/1.8 vs. f/2. But a full stop restriction, today, at Zeiss, is shocking (if true). The losers would probably be a little worse at f/2 too. Owners of losers, carrying around the extra glass, would be ashamed of their lenses. So let's verify for the f/1.4 85mm CP.2 vs. the f/2 85mm CP.2. It is not enough to learn that both these lenses are, e.g., 6 elements in 5 groups. That would only suggest that they are in the same design family. The simplest verification is from the typical entrance pupil, exit pupil, H, and H' locations, optical information Zeiss used to provide. An optical shop that has disassembled the f/2 can also tell you.
  16. It looks like a mounted single element (or doublet). The spanner slots indicate that it was to be screwed into a larger assembly. The brass not being blackened indicates that it belonged to a much older lens than the Taylor Hobson. Then it's uncoated. Right? If you want this gift to be an excuse for learning some optics: Determine the focal length. Determine the front and rear surface curvatures by measuring the sizes of reflections. Determine whether it is a singlet or a doublet. If you see 4 principal reflections it's an air-spaced doublet. If you see cement it's a cemented doublet. If you can remove it from its cell you can examine its edge. You can also test whether its chromatic aberration is less than a singlet could make. Assuming it's the front or rear element of an old photographic lens, look at old designs and figure out which lens it came from. Find a use for it.
  17. Albion, go to http://matthewduclos.wordpress.com/2011/11/02/cp2vszf2/ and look at the photo over the caption "It may not be obvious, but these two 85mm primes are share the exact same optical design". The f/1.4 has a much larger front element than the f/2. To say that Zeiss "took the glass and put it in a bigger housing" and then "locked the aperture at T2" is not true. Putting "the glass" into a bigger housing does not make the glass smaller. Until someone verifies that Zeiss shaved down "the glass" and put it into a bigger housing, very unlikely from a design point of view, "optically identical" is marketing (or blogging) boloney.
  18. As an optics engineer I very much doubt that the f/2 CP.2 lens is "optically identical" to the f/1.4 ZF.2 lens. Online pictures show the f/1.4 to have a larger front element. So the lenses are not literally optically identical, differing only in their irises. It is conceivable that the f/2 design is the f/1.4 design with (at least) its front element shaved down, but has anyone verified this? I suspect "optically identical" is optically ignorant marketing talk. Get MTF data at corresponding apertures for the two lenses before making your decision.
  19. One step is about 1/12 of a stop or 0.025D. This might seem unnecessarily fine, but the color positive film had a gamma of about 4, so the 1/12 of a stop made a luminance change of about 1/3 stop, quite noticeable (the arithmetic is sketched at the end of this page). What the new generation of Mi Ki needs to hear from the old generation is how pathetically weak a color correction was possible by control of just the R, G, B printing lights. Contrast adjustment was not possible, neither for individual layers nor for them together. Saturation adjustment, such as every digital beginner enjoys, was unthinkable. Etc. This does not mean that the color timing was trivial. To the contrary, it required extreme skill to harmonize images by such weak means. We sometimes made radical color changes in place of the small simple ones we couldn't make.
  20. To color grade when printing negative, most laboratories were equipped with an analog video "interpreter". It illuminated and read the negative, and produced a positive display approximating the characteristics of a color print film, and it allowed the operator to adjust three channels R, G, B corresponding to the printing light controls and see their effects. Since color adjustment in printing consisted of no more than those three controls, this much of a "computer" sufficed.
  21. What's really going on photometrically is this: Illumination occurs in two places in the picture taking process: on the scene and on the film in the camera. There is a simple relation between the two. When the scene consists of a perfect white card (100% Lambertian reflector) the illumination on the film is 1/(4*N^2) of the illumination on the scene. N is the f-number of the lens (assumed to be 100% efficient). If the scene is less reflective than the perfect white card, the illumination on the film is reduced proportionally. Example: with an 18% grey card (scene) and a lens set at f/2 the illumination on the film is 18%/16 = 1.125% of the illumination on the scene. You can stick your lightmeter into your film gate to verify this. For macro-photography, when the lens with focal length F is advanced by amount D from its infinity position (with tubes, bellows, or the focus mount itself) the effective f-number is no longer the marked f-number N but rather N*(F+D)/F = N*(1+D/F). You would then substitute the effective f-number for N in the 1/(4*N^2) formula above. A way to understand effective f-number: we think of the f-number as the result of dividing the focal length by the entrance pupil diameter. The effective f-number is the result of dividing the imaging distance (F+D) by the entrance pupil diameter. (Effective f-number is what determines the lens aperture diffraction too.) Film exposure is the product of the illumination on the film and the exposure time (e.g., lux*seconds). When the 3 inch lens is advanced 6 inches with a bellows, the effective f-number becomes 3 times as large. So the illumination on the film becomes 1/9 as large. Then to maintain the original exposure the exposure time must become 9 times as large. (These formulas are sketched at the end of this page.)
  22. No, Israel's formula isn't more precise than David's rule of thumb. It is more explicit, since it includes the 't' instead of the commonly assumed 1/50 sec built into David's, but they're otherwise mathematically equivalent. They're both weakly founded rules of thumb for the simple reason that there is no ASA speed for motion-picture films. The ASA scale is the linear scale in the ISO film speed system. But there is no ISO standard for measuring the speed of motion-picture negative films. ISO 5800:1987 says: "The specifications do not apply to colour negative films for motion-picture and aerial photography." ISO 7829:1986 handles aerial photography, but there is nothing for cine film speed except ISO 2240:2003 for color reversal film speed. Probably there is no ISO standard because professional cinematographers are fanatical about exposure and their numerous case-by-case rules of thumb would supersede any standard. Almost all film speed determinations of negative films have been sensitometric and based on just the toe of the film. The speed printed on the can is that kind of number: toe speed. If the film toe is unusual, or if the cinematographer makes an unusual use of the toe, or if the rest of the sensitometry is unusual, or if the scene is unusual, no general equation based on toe speed will tell you what illumination to put on the scene.
  23. Ansel Adams' tonal aesthetic should not be trusted for cinema. Cinema (traditionally) is viewed in darkness. The projected cine image contains a much greater range of luminances than a photographic print. The projected cine image has no external white reference. Cine and still involve different modes of seeing, requiring different tonal aesthetics. (Also most cinematographers, unlike most still photographers, are limited to standard laboratory processing of what they shoot.) My advice to cinematography students is to learn the theory of tone reproduction from a neutral, scientific source like Chapter 22 of Mees's "The Theory of the Photographic Process" (available online), rather than from artists or craftsmen. Then they can specialize their knowledge with rules and hints from cinematographers, such as David Mullen gives in post #4 above.
  24. @ Alexander: That's not how film exposure works. There is no "setting the ISO" speed. The film has an ISO speed. You can adjust it by a stop or so by pushing or pulling its development, however not perfectly because the gamma then changes. You adjust exposure by changing the product of time × light intensity onto the frame. So you change the time or the light intensity or both. Change the time. You can make the time smaller by reducing the shutter angle. You can make the time larger by reducing the frames per second. Both have undesirable side effects for capturing motion. (A small exposure-time sketch appears at the end of this page.) Change the light intensity. If the illumination is under your control you can adjust that. Otherwise you have just the lens aperture (which unfortunately adjusts DOF too) and ND filters. ND filters are light intensity reducers only, but when shallow DOF is your goal they should be used instead of stopping down the lens. Your question is about how to accomplish under- and overexposure, not what under- and overexposure do to film images. That's a longer story.
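
Re post 4: a minimal Python sketch of the luminance weighting of the three layer MTF curves. The 26/64/10 weights come from the post; the per-layer MTF values below are invented placeholders, not Kodak data, and the function name is mine.

```python
# Luminance-weighted combination of per-layer MTF values (illustrative only).
WEIGHTS = {"red": 0.26, "green": 0.64, "blue": 0.10}  # weights from post 4

def weighted_mtf(mtf_by_layer, weights=WEIGHTS):
    """Combine per-layer MTF values (fractions, e.g. 0.55 = 55%) at one spatial frequency."""
    return sum(weights[layer] * mtf for layer, mtf in mtf_by_layer.items())

# Hypothetical MTF values at 60 c/mm for the three colour-sensitive layers:
example = {"red": 0.35, "green": 0.60, "blue": 0.70}
print(f"weighted MTF at 60 c/mm: {weighted_mtf(example):.2f}")
```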
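Re post 5: a small worked sketch of the corner-density calculation. It assumes the corner ray's incidence angle is simply atan(half frame diagonal / focal length), a glass index of 1.5, and absorption proportional to the slant path in the filter; the constant and function names are mine. With those assumptions it lands close to the 8%, 14% and 22% figures in the post.

```python
import math

HALF_DIAGONAL_16MM = 6.35   # mm, roughly half the 16 mm frame diagonal (assumed)

def corner_density_factor(focal_length_mm, n=1.5):
    """Factor by which a dyed front filter's density grows for rays headed to the
    frame corner: refract the incident ray into the glass (Snell's law) and take
    the slant-path lengthening 1/cos(angle inside the glass)."""
    theta1 = math.atan(HALF_DIAGONAL_16MM / focal_length_mm)  # incidence on the filter
    theta2 = math.asin(math.sin(theta1) / n)                  # angle inside the glass
    return 1.0 / math.cos(theta2)                             # path (and density) factor

for name, f in [("9 mm", 9.0), ("Angenieux 5.9 mm", 5.9), ("Century 3.5 mm", 3.5)]:
    k = corner_density_factor(f)
    print(f"{name}: corner density x{k:.2f}  (ND 1.0 becomes ND {k:.2f})")
```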
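Re post 6: Twixtor's interpolation is proprietary, so the sketch below is only a crude stand-in for the summing idea: pair up short-shutter 50p frames, average each pair, and keep one frame per pair to get 25p with a longer effective shutter. Plain averaging double-exposes rather than truly motion-blurring; Twixtor's interpolated intermediate frames do much better. numpy and the synthetic frames are assumptions.

```python
import numpy as np

def to_25p_with_blur(frames_50p):
    """Average consecutive pairs of 50p frames and keep one frame per pair,
    approximating 25p with a longer effective shutter than the original frames."""
    frames = np.asarray(frames_50p, dtype=np.float32)
    usable = frames[: len(frames) // 2 * 2]          # drop a trailing odd frame
    pairs = usable.reshape(-1, 2, *frames.shape[1:])
    return pairs.mean(axis=1)

# Tiny synthetic example: 8 "frames" of 4x4 grey values.
clip_50p = np.random.rand(8, 4, 4)
clip_25p = to_25p_with_blur(clip_50p)
print(clip_50p.shape, "->", clip_25p.shape)          # (8, 4, 4) -> (4, 4, 4)
```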
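Re post 9: the winding equation from that post, as a small sketch in both directions (length from turns, and turns needed for a length). The film thickness, core diameter and footage below are illustrative values, not measurements.

```python
import math

def length_from_turns(n_turns, thickness, core_diameter):
    """Film length after n_turns on a core: L = pi*(t*N^2 + d*N), from post 9.
    Any consistent units work; here mm in, mm out."""
    return math.pi * (thickness * n_turns**2 + core_diameter * n_turns)

def turns_for_length(length, thickness, core_diameter):
    """Invert the same formula: positive root of the quadratic in N."""
    t, d = thickness, core_diameter
    return (-d + math.sqrt(d * d + 4.0 * t * length / math.pi)) / (2.0 * t)

# Illustrative numbers: ~0.14 mm film thickness, 50 mm (2 inch) core, 200 ft of film.
length_mm = 200 * 304.8
n = turns_for_length(length_mm, 0.14, 50.0)
print(f"200 ft needs about {n:.0f} turns of the core")
print(f"check: {length_from_turns(n, 0.14, 50.0) / 304.8:.1f} ft")
```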
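Re post 19: the step arithmetic, spelled out. The only inputs are the 0.025 (log exposure) step and the print gamma of about 4 from the post; one stop is taken as 0.301 log units.

```python
import math

STEP_LOG_E = 0.025               # one printer-light step, as log10 exposure (post 19)
PRINT_GAMMA = 4.0                # approximate gamma of the colour positive film (post 19)
LOG_PER_STOP = math.log10(2)     # 0.301 log units per stop

step_in_stops = STEP_LOG_E / LOG_PER_STOP            # about 1/12 stop of exposure
density_change = STEP_LOG_E * PRINT_GAMMA            # about 0.10 D on the print
luminance_change = density_change / LOG_PER_STOP     # about 1/3 stop of screen luminance

print(f"one step = {step_in_stops:.2f} stop exposure -> {density_change:.2f} D "
      f"-> {luminance_change:.2f} stop of projected luminance")
```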
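Re post 21: the two formulas from that post as a small sketch, reproducing the 18% grey card at f/2 example and the 3 inch lens advanced 6 inches. The function names are mine; the lens is assumed 100% efficient as in the post.

```python
def effective_f_number(marked_f_number, focal_length, extension):
    """Effective f-number with the lens advanced by 'extension' from infinity focus:
    N_eff = N * (F + D) / F  (post 21)."""
    return marked_f_number * (focal_length + extension) / focal_length

def film_illumination(scene_illumination, reflectance, f_number):
    """Illumination on the film for a Lambertian subject of the given reflectance:
    E_film = E_scene * R / (4 * N^2)  (post 21, 100%-efficient lens assumed)."""
    return scene_illumination * reflectance / (4.0 * f_number ** 2)

# 18% grey card at f/2: 0.18 / 16 = 1.125% of the scene illumination.
print(film_illumination(1.0, 0.18, 2.0))

# 3 inch lens advanced 6 inches: effective f-number triples, illumination drops to 1/9.
n_eff = effective_f_number(2.0, 3.0, 6.0)
print(n_eff, film_illumination(1.0, 0.18, n_eff))
```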
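Re post 24: a tiny sketch of the time side of the exposure product, t = (shutter angle / 360) / fps, showing how halving the shutter angle or the frame rate moves exposure by one stop. The particular angles and frame rates are just examples.

```python
import math

def exposure_time(fps, shutter_angle_deg=180.0):
    """Exposure time per frame for a rotary shutter: t = (angle / 360) / fps."""
    return (shutter_angle_deg / 360.0) / fps

def stops_between(t_ref, t_new):
    """Stops of exposure gained (+) or lost (-) going from t_ref to t_new."""
    return math.log2(t_new / t_ref)

base = exposure_time(24, 180)        # 1/48 s
narrow = exposure_time(24, 90)       # halve the shutter angle: -1 stop
slow = exposure_time(12, 180)        # halve the frame rate:    +1 stop
print(f"{base:.4f} s; 90 deg: {stops_between(base, narrow):+.1f} stop; "
      f"12 fps: {stops_between(base, slow):+.1f} stop")
```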