Everything posted by Dennis Couzin

  1. Dirk, we here want "the full story". I've never heard of films changing drastically as they wait some months to be shot. A standard processing producing gamma 0.65 vs. gamma 0.72 is a drastic change. The aging you describe has nothing to do with the emulsion ripening and after-ripening, which are completed during manufacture. It's some kind of decay which must be reducing the sensitivity of the smaller grains more than the larger. Has there ever been a study of shelf-life of B&W motion picture films? (Some spectrographic films have NO shelf life.) Mees & James (1966) says: "...some emulsions show a rapid drop in sensitivity during the early period of storage, without a concurrent increase in fog. This behavior has been termed anomalous aging by some authors." While 7222 seems a simple emulsion, performance-wise, we don't know what black magic goes into its formulation and manufacture. The danger comes when Kodak doesn't know.
  2. Albert, yes, ghosts is the standard optics term for what you're describing as "direct colored circular flares". As David Mullen explained, the colors are due to the coatings. A coating can be very neutral transmission-wise, e.g., 99.9% red, 99.8% green, 99.9% blue, but this means the reflection is 1 unit red, 2 units green, 1 unit blue, quite colored. And since there must be two reflections to produce a ghost it is, in the example, 1 unit red, 4 units green, 1 unit blue, strongly colored. Since ghosts were a much larger problem before coatings, color does not figure in the definition of ghost, nor in their analysis. I don't know if focal length has an especial effect. A cemented doublet makes a fair telephoto lens. N=2. So there is just one ghost-producing scenario, which you can calculate from basic geometric optics.
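To make the arithmetic concrete, here is a minimal sketch in Python. The transmission figures are the hypothetical ones from the post above, not measurements of any real coating.

```python
# Minimal sketch: how a nearly neutral anti-reflection coating yields a
# strongly colored two-reflection ghost.  The transmission figures are the
# hypothetical ones from the post, not data for any real coating.

transmission = {"red": 0.999, "green": 0.998, "blue": 0.999}

# Single-surface reflectance, ignoring absorption: R = 1 - T
reflectance = {c: 1.0 - t for c, t in transmission.items()}

# A ghost needs two reflections, so the reflectances multiply.
ghost = {c: r ** 2 for c, r in reflectance.items()}

# Normalize to red to show the relative tint.
tint = {c: g / ghost["red"] for c, g in ghost.items()}

print(reflectance)  # roughly 1 : 2 : 1 units of red : green : blue
print(tint)         # roughly 1 : 4 : 1 -- the ghost is strongly green
```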
  3. Bill, thank you for taking up my suggestion to shift the disagreements of this strand to the domain of examples. I have always been left cold by the cinematography in Kubrick's "2001". There's a consistency in his use of the 2.20:1 frame: it's always forced.

You write: "Kubrick arranges the canvas like a master painter, drawing a perfect triangle (one of the more complex shapes) with Moon-Watcher centered in-between the two skulls." Then why is there not one painting in the whole history of pre-Kubrickian art which shares that composition? Is it so clever that Kubrick had to invent it? Are you so enthralled with that film frame that a mere 45°-90°-45° triangle becomes a "perfect triangle"? Normally the 60°-60°-60° triangle is the exemplary triangle. Yours is just half a square, rather shameful for a triangle to be. And glancing back and forth between the picture without your triangle and with it, I find the triangle forced.

Perhaps there is no painting sharing that composition because paintings are static while the film frame evolves. But you can't have it both ways: that it's an artist's canvas and not subject to the principles of composition; that you see a certain triangle there but it might be changed or gone the next frame.

Tarkovsky's "Solaris", made four years later and in 2.35:1, does show compositional inventiveness. Watching this the viewer is thrown to the side -- a dynamic image requiring the frame's width. I'm completely baffled why you say your frame from "2001" showing Bowman in the corridor is a "very similar frame" to your "fallopian tube" frame from Niccol's "Gattaca". The first has symmetry, while the second curls, or tries to, like the frame from "Solaris".

I didn't argue against the very wide frame, only that cinema being limited to any small range of shapes is a disadvantage and that limitation to a range of very wide shapes increases that disadvantage.
  4. David, we seem to have different understandings of "visual imbalance" and probably of "negative space". These were hot ideas in the 1950's when Arnheim's "Art and visual perception" came out. Google your "lateral negative space" today and it is an orthodontic term: "The art and science of the smile". Has the terminology had yet another life in Hollywood? For me a picture with vertical format can be more prone to left-right visual imbalance than one with wide horizontal format. A tall picture that's dark on the right half and light on the left half looks less stable than a wide picture similarly half and half. Maybe it's because tall things tip easier. But if the lateral imbalance is of a progression-recession kind, the wide picture is the stronger example. The Dürer engraving exhibits a profound lateral imbalance confected of many elements. If you and the writer of the original article (which I haven't read) mean the same thing by "negative space" then you have a real disagreement, which might best be settled by looking at examples.
  5. Dirk, the reversal stocks on your list (which are not in order of appearance) are all color Ektachromes. 7266 enjoys a simpler sensitometry and simpler processing than those. But 7222 has yet simpler processing, so it's sad to hear that 7222, which has been around since 1959, no longer matches its published characteristic curve. Do you think it is the film -- Kodak's manufacturing gone downhill -- or your and the French Kodak's sensitometry? How are you simulating "daylight" for the sensitometry? How close to photometric is your lux metering? Has the status M blue filter been spectrophotometered? Etc. Etc.

When the film manufacturer can no longer produce film with consistent speed, you indeed must test each emulsion batch. That is, if you can trust the laboratory to process consistently. When you can't trust your exposure meter either, then it's really time to switch to video, where you can see and scope the image as you shoot it.

Is the question in this strand still suited to calculation? I understand the question as: if I know how to expose 7266 with a standard camera and no filter, what should I do differently with a Bolex RX camera and 85N6 filter? The Bolex RX means you use 1/80 sec instead of 1/48 sec. That part of the calculation is solid. For the 85N6 filter factor, post #8 showed how it depended on the spectral distribution of daylight, the spectral sensitivity of the film, and the spectral transmission of the filter. How solid are these?

D65 daylight is in CIE-ISO standards, but will the daylight when you shoot resemble D65? If it doesn't, film testing some days before won't help you. If you're worried about the daylight quality's effect on the filter factor and thereby on the exposure, then you'd better buy a spectrometer and measure the daylight when you shoot. The spectral sensitivity of the film is less likely to be off the published spec than the film speed you found to be off, because it depends on the emulsion formulation but not on the coating. Yet if a manufacturer is slipping, that could slip too. The 85N6 is a gel. Wratten gelatin filters, later sold as Kodak filters, used to be well controlled as they had many scientific uses outside photography. Though many have been discontinued, they're easy to manufacture. The last time I spectrophotometered some Kodak gels was in around 1995 (for a non-photographic use) and they were good. I hope they still are but don't know.

Post #8 stated that the calculational method requires both that the photographic situation is fully understood and that the data is firm. Hopefully the longer one works with film the better his understanding. I actually quit film in 1993 but continued to think about it, and came to be ashamed at how much film testing I did for lack of understanding and good modeling. While our understanding may be increasing, our data is simultaneously decaying because of the sad state of the industry. So the calculational method does struggle.
  6. Veiling glare and ghosts. Lens surfaces reflect specularly. The edges of lenses, the interior of the lens housing, and scratches and dirt on the lens surfaces reflect diffusely. Those diffuse reflections can't make ghosts, only veiling glare. But the specular reflections can produce both veiling glare and ghosts. What makes one rather than the other is a matter of focus. Light enters the lens from a small light source. It could be within the picture area or not. The main effects are the results of two specular reflections. The first reflection surface can be any except the front surface. The second reflection surface must be scenewards of the first reflection surface. If the lens has N air-to-glass surfaces, there are then N(N-1)/2 different reflection scenarios. They all occur. The surfaces are curved, so for each scenario there's a different virtual image. The lens elements filmwards of the second reflection surface do or don't focus it onto the film. The very much unfocused scenarios contribute veiling glare, while the well-focused ones make ghosts. So the total light in the veiling glare plus the ghosts is pretty much determined by the number N and the quality of the anti-reflection coating. The proportion of the total light in veiling glare vs. ghosts depends on the particulars of the design. I don't know if there are simple indicators. There's a 1980 paper by T. Kojima et al., "Computer simulation of ghost images in photographic objectives", Proc. SPIE, 237, 504, which should answer that.
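A small sketch of the counting argument, under the post's own assumptions (first reflection at any surface except the front one, second reflection at a surface scenewards of it); enumerating the pairs reproduces the N(N-1)/2 count.

```python
# Enumerate two-reflection ghost paths for a lens with n air-to-glass surfaces.
# Surfaces are numbered 1 (front, scenewards) .. n (rear, filmwards).
# First reflection at surface j (j > 1), second at surface i with i < j.

from itertools import combinations

def ghost_scenarios(n_surfaces):
    return [(j, i) for i, j in combinations(range(1, n_surfaces + 1), 2)]

for n in (2, 4, 8):
    paths = ghost_scenarios(n)
    assert len(paths) == n * (n - 1) // 2
    print(n, len(paths))   # 2 -> 1, 4 -> 6, 8 -> 28
```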
  7. This seems to have missed the point of my example which says that the engraving is in vertical format while its imbalance is left-right.
  8. Can you imagine a museum in which all the paintings are between 1.78:1 and 2.4:1? That museum would offend anyone's love of pictures. There can be a room in the museum where all the paintings have similar format, just as there were editions of woodcuts all one size and shape, but a pictorial medium demands freedom of shape. To call a spade a spade: cinema is at an aesthetic disadvantage because its underlying technologies -- of shooting and display -- limit pictures to a few aspect ratios. And cinema has painted itself deeper into the corner with its embrace of wide-screen. Painters have always enjoyed composing for wide-format. The painting is best wide when the subject is wide. A famous example is Leonardo's "Last Supper". Thirteen men (almost) abreast suggest the painting's 1.91:1 proportions. But Leonardo wasn't straitjacketed into that format. Most of his paintings have vertical formats. Thus cinema wide-screen leads to paucity of subject matter or, just as bad, to forced composition. Another famous wide painting is Picasso's "Guernica", 2.23:1. This is not a case of wide subject fitting wide format. It's in the genre of multi-action panoramas, where the different parts, while comprising a whole, are meant to be examined separately. Only the bravest experimental cinema allows multiple actions across a screen, with each action demanding our attention, and then it's one short step to multiple screens, the breaking up of wide screen cinema. The suggestion that lively movement commends wider pictures is unconvincing. Was there not enough motion in Vertov or Leger or MacLaren or Lye in their 1.33:1 format? Humans do locomote horizontally, and our heads do rotate better than they tilt, but that's exactly why the cinematic image field doesn't need to be wide. Watch "Spacy" by Takashi Ito. It would be less dynamic (and spatial) were it shot in a wider format than its 1.33:1. "The Ideal City", one of the widest paintings of the Renaissance, is 3.54:1 and perfectly still, anti-cinematic. David Mullen's remark (in post #16) that wide composition "allows you to emphasize visual imbalance", while hopeful, is off the mark. Visual imbalance has been explored for centuries in pictures of all shapes. Look at the intentional left-right imbalance in Dürer's vertical format engraving of "St. Jerome in his Study". Eastman invented rollfilm and set the 35mm width, but it was when the Lumiere bothers perforated the film that an aspect ratio was set. They chose 1.33:1. The choice was a bit influenced by the 35 mm width, since a much wider format would have been too grainy and a much taller format too expensive, but I suspect that the choice was mostly aesthetic. 1.33:1 proved a good compromise format for the cinema medium's start. Otherwise it would have been quickly supplanted. The later evolution toward wider cinema format had more to do with theater geometry and movie marketing than with picture aesthetics. A specially wide screen aesthetics evolved, making the practitioners in the niche feel big, when they really weren't. I just watched Greenaway's "Eisenstein In Guanajuato", 2.4:1. The cinematography is the most advanced I've ever seen, the camera in gyrations to keep the frame filled, until it can't and the frame just breaks into three. Yes, if cinema tries hard enough it might shatter the very idea of aspect ratio. Alternatively, cinema can ditch the movie theaters and TV screens for newer, happier and more flexible display modes. Then cinema can finally be pictures with freedom of shape.
  9. First you need to make the three curves into numbers. D65 daylight and the 85N6 filter are published numerically, wavelength-by-wavelength. The film's spectral sensitivity is only available in graphs. So digitize the graph using the nifty "Digit" program available free from W. Theiss. The digitized graph will have funny wavelength values, so you must interpolate to get all three curves on the same wavelength scale. Also, since the film's spectral sensitivity is published on a log scale, you must delog it -- replace it with 10 to that power.

If you work in Excel, you will now have 4 columns of numbers: wavelength, D65, 85N6, 7266. It's unnecessary to normalize the D65 and 7266 numbers, although that makes a nicer graph. The 85N6 numbers must stay as they are, except that if they're percentages you should change them to decimals. Now you can make a 5th column from the D65 number × the 7266 number. Also make a 6th column from the 5th column number × the 85N6 number. Add the 5th column of numbers. Add the 6th column of numbers. Divide the second sum by the first sum. I got 0.116. That filter knocks the daylight exposure of that film down to 11.6%.

You don't need to convert this to stops to solve Steve Milligan's problem, but the filter factor is of interest in itself. The log of 0.116 divided by the log of 2 gives you -3.1 for the stops. This means the factor for an 85 filter is 1.1 stops (the 3.1 stops total, minus the 2 stops of the N6 neutral density), significantly different from the 2/3 stop we use for it when shooting tungsten balanced color films in daylight. If the spot meter measures nits, footlamberts, or any photometric quantity accurately, then it will give you the lousy 2.47 stops figure I used at first. 7266 responds to the spectrum in its own way, shown by the black curve, which is different from your meter's.
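For anyone who would rather script the spreadsheet steps, here is a minimal sketch in Python. The three tables below are placeholders standing in for the published D65 and 85N6 data and the digitized 7266 log-sensitivity, not the real values; with the real tables interpolated to a common grid the ratio comes out near the 0.116 quoted above.

```python
import numpy as np

# Placeholder tables (NOT the real published data): wavelength in nm,
# relative D65 power, 85N6 transmittance (0..1), and the film's log
# spectral sensitivity as read off the data-sheet graph.
wl_d65,  d65_power    = np.array([380, 500, 640]), np.array([50.0, 109.4, 87.7])
wl_filt, filt_trans   = np.array([380, 500, 640]), np.array([0.02, 0.10, 0.22])
wl_film, film_logsens = np.array([380, 500, 640]), np.array([1.0, 0.8, 0.3])

# Interpolate everything onto one wavelength grid (the post uses 380-640 nm).
wavelength = np.arange(380, 641, 10)
d65   = np.interp(wavelength, wl_d65,  d65_power)
f85n6 = np.interp(wavelength, wl_filt, filt_trans)
s7266 = 10.0 ** np.interp(wavelength, wl_film, film_logsens)   # delog

unfiltered = np.sum(d65 * s7266)          # "add the 5th column"
filtered   = np.sum(d65 * s7266 * f85n6)  # "add the 6th column"

factor = filtered / unfiltered            # ~0.116 with the real tables
stops  = np.log(factor) / np.log(2)       # ~ -3.1
print(round(float(factor), 3), round(float(stops), 2))
```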
  10. Burn a candle in a dark room. White things look quite white. The candle flame itself looks quite yellow. You see this even when the flame is next to the white thing. But when you photograph that same candle lit room the white things look quite yellow, while the candle flame itself looks quite white. It's an example where color photography goes horribly wrong, that is, it deviates from how we see. The limited dynamic range of photography dooms the candle flame to burnout. Human chromatic adaptation is more complex than white balancing. Intra-picture chromatic adaptation doesn't occur. I've wondered how pre-photographic artists rendered candle lit rooms. Here's a good example where the artist (Petrus Van Schendel) painted what he saw. What can we do but let our camera or film do its thing? The color relation of the flame to white things comes out backwards and yet acceptable, or we've learned to accept it. Would persons from candle lit times laugh at our simulations?
  11. Thanks to Dom for discovering this curious paper from Romania. Does the "statistical study" hold water? Why did Mr. Olariu choose these 9 artists: Bellini, Caravaggio, Cezanne, Goya, van Gogh, Delacroix, Pallady, Rembrandt, Toulouse-Lautrec? Mr. Olariu found 34 Goya paintings with average proportions 1.04 ±0.04. (The proportion is the long edge divided by the short edge. The ± must mean standard deviation.) Did Goya really paint mostly square or almost square paintings? Google "Goya images" and you will see very many famous Goya paintings. Almost none of them are nearly square. Google images is a kind of random sampling weighted by popularity. What random sampling did Mr. Olariu do to find the square Goyas? The paper says: "The paintings considered in this statistics have been selected from the specified references, where the sides of the paintings have been indicated." Well, does this mean that Mr. Olariu "selected" the samples? Based on what? Or does it mean that he used every painting in the references that included its dimensions? The references are just two:
1. Peter B. Norton, Joseph J. Esposito, The New Encyclopaedia Britannica, 15th Edition, 1995
2. Nicolas Pioch, WebMuseum Data Base
I looked in the second reference and found just 8 Goyas. All have their dimensions included. Their average proportion is 1.45 ±0.19. The cited statistical study appears to be garbage.
  12. I'm a firm believer in mathematical calculations for photography. Experiments, such as exposure testing, are costly in time and money and are best avoided IF there's a trustworthy calculational alternative. That's a big IF.
1. The calculation must be logical -- the photographic situation must be fully understood.
2. The data must be firm.
3. The uncertainties that propagate through the calculation must stay tolerable.
On review, what Satsuki, Dirk, and I did was borderline illogical in how we found the filter factor for the 85N6 filter. This filter is seldom used with B&W films. The 7266 data sheet does not include a filter factor for it. Satsuki and Dirk used the 2/3 + 2 stop factor based on use of the 85N6 with daylight color films. I found a 0.18 factor, which is 2.47 stops, based on how dark the 85N6 looks in daylight to the human eye. B&W 7266 film sensitivity matches neither daylight color films' nor the eye's. So what is the filter factor for the 85N6 with 7266 film shooting in daylight?

The data for calculating that are available. Kodak publishes the spectral transmission of the 85N6. Kodak also publishes the spectral sensitivity of 7266 film. The CIE publishes the spectral power distribution of D65 daylight. Here they are. You multiply the D65 curve by the 7266 curve and add it up over all wavelengths to find the unfiltered exposure. You multiply all three curves and add it up over all wavelengths to find the filtered exposure. The ratio is the filter factor. As calculated here from 380 nm to 640 nm the factor is 0.116, or 3.1 stops. Higher than Satsuki and Dirk figured, much higher than I figured.

The film is practically dead above 640 nm. But the film and daylight are both alive below 380 nm. This implies a filter factor more than 3.1 stops. Unfortunately Kodak doesn't publish the film's sensitivity below 380 nm, and we also don't know how transmissive the lens is below 380 nm. On the other hand, if you used a minus-UV filter over the lens, the 85N6 filter factor would become less than 3.1 stops.

So 3.1 stops is the filter factor, and again using Bolex's 1/80 second figure the new answer is ASA = 200 × 0.6 × 0.116 = 13.9. I hope this exercise helps more than it hurts.
  13. Confirming Satsuki: The Bolex RX4 shutter angle is around 135 deg., while the reflex prism light loss is closer to 1/3 stop than 1/2 stop. According to Bolex, the combination yields 1/80 second effective exposure at 24 fps. So the factor versus a straight 180 degree shutter is 48/80 = 0.6. Kodak publication B-3 indicates that the 85N6 gel will cut the light to 18%. So the ASA to set would be 200×0.6×0.18 = 21.6. Call it 20. My old Luna Pro meter has 1/80 sec marked for the Bolex RX.
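A minimal sketch of the exposure-index arithmetic in this post and the previous one. The only numbers used are the ones already quoted (the 1/80 s effective exposure, the 18% figure attributed to Kodak publication B-3, and the 0.116 spectral result); nothing else is assumed.

```python
import math

base_speed = 200                        # 7266 daylight EI, per the posts
shutter_factor = (1 / 80) / (1 / 48)    # Bolex RX effective 1/80 s vs. 1/48 s at 180 deg = 0.6

for name, filter_factor in [("B-3 table (18%)", 0.18), ("spectral calculation", 0.116)]:
    ei = base_speed * shutter_factor * filter_factor
    stops = -math.log2(filter_factor)
    print(f"{name}: factor {filter_factor} ({stops:.1f} stops) -> set ASA ~ {ei:.1f}")

# B-3 table (18%): factor 0.18 (2.5 stops) -> set ASA ~ 21.6
# spectral calculation: factor 0.116 (3.1 stops) -> set ASA ~ 13.9
```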
  14. You can't calculate the aspect ratio that way. When all four angles are less than 90 deg you must replace the angles with their tangents and then do the calculation. But when an angle reaches 90 degrees its tangent is infinite... The eye that sees 100 deg to the side of the nose is seeing away from the flat screen placed in front of the nose. Ergo, there is no flat screen corresponding to the full field of view.
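A short sketch of the tangent point, with hypothetical angles: for a centered flat screen, the aspect ratio corresponding to given half-angles of view is the ratio of the tangents, not of the angles, and it blows up as an angle approaches 90 deg.

```python
import math

def flat_screen_aspect(half_h_deg, half_v_deg):
    """Aspect ratio of a centered flat screen subtending the given half-angles."""
    return math.tan(math.radians(half_h_deg)) / math.tan(math.radians(half_v_deg))

print(flat_screen_aspect(15, 10))    # small angles: close to the 1.5 ratio of the angles
print(flat_screen_aspect(60, 34))    # wide angles: well above the 1.76 ratio of the angles
print(flat_screen_aspect(89.9, 34))  # approaching 90 deg, the tangent and the ratio explode
# At 90 deg the tangent is infinite: no flat screen corresponds to the full field of view.
```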
  15. In the 24 January post I called ACES an engineering monstrosity. Yesterday in a different strand I called it idiotic. It gets worse and worse the more you look at it. Now it turns out that the one ACES accomplishment, finding a small triangle to surround the chromaticities, was done wrong.

FIRST: The primaries pictured below form a triangle 4% smaller than ACES' primaries' do. They're ugly, but no primary is uglier than ACES Blue at x=0.00010, y=-0.07700. ACES must have made that 0.00010 just to make their triangle a smidge smaller. Fools, for missing the substantially smaller triangle.

SECOND: The very idea of measuring areas in the chromaticity diagram to appraise wasted color encoding is mathematically wrong. The chromaticity diagram is an x,y diagram. If the color encoding were x,y,Y then the areas would indicate the relative multitudes of code values. But the DCI coding is XYZ instead. Consequently the measure of how many code values are used for which parts of color space can't be done directly on the chromaticity diagram. Do this. Based on three perpendicular axes X,Y,Z form a unit cube. This cube represents all XYZ codings. Lay the x,y chromaticity diagram on the XY plane where the cube rests. Construct another plane, the plane where X+Y+Z=1. Project the chromaticity diagram upwards and find its image on that new plane. Now from the X=Y=Z point project that image into the cube. The volume of the cube within that projection measures the part of the encoding that makes colors. Measured this way, XYZ encoding is much more efficient -- less of it failing to make colors -- than by the invalid area comparisons on the chromaticity diagram. To do this volume job to evaluate ACES encoding is even messier. Make ACES RGB the three perpendicular axes. Then transform the CIE color volume described above into those ACES coordinates. Too few people understand the geometrical relation of the CIE chromaticity diagram to the XYZ color space. The ACES authors even lack understanding of the chromaticity diagram. It was crazy to rest ACES color space on the purple line of the chromaticity diagram, which is inherently fugitive.

THIRD: The volume method reveals what fraction of the color encoding is completely wasted on non-colors, but color science suggests that it is just as wasteful if there are regions of color space where the color encoding is too fine, i.e., much finer than human color discrimination, and of course it's bad if there are regions where the color encoding is too coarse. So the real measure of the efficiency of a color encoding is its accord with human color discrimination. XYZ and RGB of one or another kind are so hugely non-uniform in that perceptual sense as to defeat trivial geometrical measures. It is possible that a YUV type color encoding could do better.
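For the FIRST point, here is a minimal sketch of the kind of area bookkeeping being criticized: the shoelace formula applied to a primary triangle on the x,y chromaticity diagram. The blue primary is the one quoted in the post; the red and green coordinates are the usual published ACES (AP0) values as I recall them, so treat them as assumptions.

```python
# Shoelace formula: area of a triangle given its three (x, y) vertices.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

aces_red   = (0.7347, 0.2653)   # AP0 values from memory -- verify against the spec
aces_green = (0.0000, 1.0000)
aces_blue  = (0.0001, -0.0770)  # the blue primary quoted in the post

print(triangle_area(aces_red, aces_green, aces_blue))   # ~0.396
# The post's larger point: this area bookkeeping on the x,y diagram is the
# wrong measure for an XYZ encoding in the first place; volumes in XYZ are.
```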
  16. I criticized ACES earlier: see post and the post immediately following it. ACES is just a rewarming of the Luther condition from 1927! It ignores viewing conditions. Its flare factor calculation is primitive. Its white point is not the CIE Standard Illuminant it says it is. It ignores the real engineering issues that have pushed cameras away from the Luther condition and existing metrics for measuring by how far. Take a peek at Shuxue Quan's 2002 dissertation. Why-oh-why does ACES exist? It can be done better, and has been done better, in CIE XYZ. Since writing my earlier posts I learned that while the 5nm CIE tristimulus data are widely available, CIE charges for their 1nm data. So is the 1nm RICD data published in SMPTE ST 2065-1:2012 a dirty way to cheat CIE of their fee? ACES deserves its own topic, which will be fun only if one of those responsible for it participates.
  17. It's good to be a little nervous about the quality of our software. The more I use FCP7, the more amateurish it looks, but it was thought OK for editing some large professional movies just a few years ago. What has changed in video software-land? The software "looks" more confident, its bells and whistles positively gleam. But is the knowledge base so much firmer, and are the software authors so much wiser and more careful than in the past decade? In many areas software can evolve brilliantly in very few years, but there are some big burdens on video/electronic cinema software. One is that the color science is a mess. From the camera color sensors, through the coding, through all the LUTs, through the controls afforded the colorist, there is a lack of clear thinking. For example, the ACES system is idiotic. Another is that simply having images in pixels leads to very complicated problems of appearance unknown in film photography. Even blowup and reduction are difficult. Glance at these two discussions from one small corner of the image processing world: Image Magick 1, Image Magick 2. Then glance at an analysis of the algorithms used in Photoshop. The digital image science is uncertain enough, but software writers have their own bright ideas for algorithms. Another is that there is strong impetus for shortcuts, in order to speed up image processing. So how good is our software? This can only be known a posteriori. Users need to use it critically -- test it hard if not pry up the bonnet. And I have to ask juniors, for the downside of "legend" status is getting old and losing one's sharp eyes.
  18. I just meant the ordinary two spatial dimensions -- up/down, left/right -- with three color dimensions at each point in the space. The color information really does flow on three channels somewhat like video's Y,U,V. Visual space is three-dimensional, sort of. And visual perception is temporal, sort of. And color is arguably more than three-dimensional. But those niceties are unnecessary for the comparison with hearing (and even complicate it). For further reading. Two breathtaking intellectual highpoints of the 3×2'ness of vision: R. Carnap's construction of the visual sense as the 6-dimensional subspace found in the full swarm of sensory experience (The Logical Construction of the World, 1928); and J. Lettvin's explanation why higher animals evolved with no more than 3 cone types (color sensors) because of the retina having 2 dimensions (The Colors of Colored Things, 1967). The auditory sense does not lend itself to the kind of construction Carnap uses for the visual sense. In fact he retreats to constructing the ears and then declaring the auditory sense that which goes dead when you cover your ears! (He needed the visual sense to construct the ears.) The point is that the two senses are deeply different. Whoever, including the fine composer Messiaen, likens sound mixing and color mixing has gone off the deep end. Red light and green light mix to what? A color chord? A color dissonance? No, no, to yellow light. And what's an octave above some pure color? Our ears are Fourier analyzers, our eyes not. I think this is at the heart of the audio/video quality matters we're discussing. None of this is to suggest that sound and image can't interact beautifully. Rudolf Arnheim's arguments against sound films were lame (but his argument against 3-D films was sharp). In fact the deep difference between hearing and seeing can be exploited in those interactions.
  19. Resolve is a big black box, literally. It does a lot, with convenience; it's not do-it-yourself. LUTs are text files with simple formats. I make them in Excel. Even 1D LUTs are rather large, and not suited to value-by-value hand editing; 3D LUTs are huge. Excel, generating the values by formula, is the perfect editor.
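A minimal sketch of generating a 1D LUT as a plain text file in Python rather than Excel. The `.cube`-style header is written from memory and the gamma-adjustment curve is only an illustration; check the spec of whatever application will load the file.

```python
# Write a simple 1D LUT: a gamma adjustment applied identically to R, G and B.
size = 1024
gamma = 1.2   # hypothetical adjustment

with open("gamma_adjust_1d.cube", "w") as f:
    f.write(f"LUT_1D_SIZE {size}\n")
    for i in range(size):
        x = i / (size - 1)          # input code value, 0..1
        y = x ** gamma              # output code value
        f.write(f"{y:.6f} {y:.6f} {y:.6f}\n")
```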
  20. David M's construction is best understood with the little circles being little squares of a size such that they altogether fill 1/4 the image area. Then the four displaced exposures are equivalent to one exposure made on an image sensor having four times as many square pixels that fill the whole image area. Image sensors normally strive for that high "fill factor" for light efficiency. CCDs can have 100% fill factor, but CMOS sensors always have much less. Arri can afford to use a CMOS with just 25% fill factor because a scanner is not subject to available light. A scanner can simply increase the illumination (until heat becomes a factor). So four shots with the low-fill 3K CMOS simulates one shot with a high-fill 6K CCD. I suspect that the higher effective fill factor offers some reduction in aliasing too.

If Arri had used an image sensor with high fill factor the micropositioning trick would be imperfect. Exact 6K can't be derived from four exposures with a high-fill 3K. Proof: imagine a 6K consisting of a checkerboard of 1's and 2's. Then the high-fill 3K always reads 1+2+1+2=6 wherever it is placed. In that case the trick doesn't work at all, though it usually works somewhat.

Chris M's suggestion that Arri traded 4× time for 4× space (~cost) overlooks that the larger sensor would be slower. Sensors take time to unload the image. I don't know if this time is proportional to the number of pixels, but if so there's no time lost in making four exposures, each with 1/4 as many pixels, excepting the time for the half-pixel motions by the piezo drivers. (How fast are they?)

I suspect the ArriScan makes separate R, G, B exposures of each frame. Why would it suffer with a Bayer-pattern sensor? I think it aims to control the R, G, B spectral responsivities by means of an array of umpteen different colored LEDs. Then in 6K mode the Arriscan makes 12 exposures from each frame: 3 colors × 4 positions. The number of exposures can increase further when it is aiming at maximum dynamic range. This is how scanning should be done.

It's funny that the question launching this topic has been overlooked. The question was written as if it were well known why the Arriscan oversamples -- e.g., uses a 3K sensor for its 2K scanning and an effectively 6K sensor for its 4K scanning -- and asked why 50% oversampling. I don't think there is any magic in the choice of 1.5×. If Arri could have used an approximately 13K sensor it could have met the Nyquist condition so no OLPF would be needed, and then it could extract a 2K, 4K, or whatever scan from the full optical data. The topic question might better have been put: how much is gained by just 50% oversampling, per axis?
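A small sketch of the checkerboard proof, simulated: a 100%-fill coarse pixel reads the same sum at all four half-pixel positions, while a 25%-fill pixel recovers the fine pattern. The 8×8 test image is of course just an illustration.

```python
import numpy as np

fine = np.indices((8, 8)).sum(axis=0) % 2 + 1     # checkerboard of 1s and 2s

def read_full_fill(img, r, c):
    """100%-fill coarse pixel covering a 2x2 block of fine cells at (r, c)."""
    return img[r:r+2, c:c+2].sum()

# Four half-pixel-shifted exposures of one full-fill coarse pixel: all identical.
print([read_full_fill(fine, r, c) for r, c in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [6, 6, 6, 6]   no new information from the shifts

# A 25%-fill pixel is one fine cell; the four shifted exposures read the four
# distinct cells of the 2x2 block, i.e. the full-resolution values.
print([fine[r, c] for r, c in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [1, 2, 2, 1]   the checkerboard is recovered
```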
  21. There's near consensus that low quality audio sticks out and is disturbing in ways that low quality video doesn't, but I don't buy either of these explanations. The problem with Dan's explanation is that voices are heard every day in very poor sonic conditions. Often we can't make out what they're saying. What's the difference with poor visual conditions? The problem with Ravi's is that it begs the question. Why isn't low quality audio just as usable for stylistic effect as low quality video? There is visual imagination. Almost any image invites deep examination. This is true anyhow for naturally occurring or amateurishly recorded images. In this, visual imagination overlays visual information. There is no corresponding aural imagination (except perhaps for specially musical minds). We have some ability for extracting audio information from noise or distortion, but not for making something new from the remnant information. This mental difference between vision and hearing might be due to the very different neural mechanisms. Vision is a three channel and two dimensional affair. Hearing rests on frequency analysis. Frequency deficiencies are recognized, but mind can't operate in the frequency domain. Badly recorded sound is therefore broken sound.
  22. My previous post was about "darkening". I see now in Will Løkkens' figure that what he was calling "darkening" was actually gamma increase. Increasing gamma does darken an image, specifically the middle tones as Will notes in the figure, but it does so much more than this that the concepts are incomparable. Very simply, increasing gamma increases color saturation by exaggerating any inequalities between the R, G, and B channels. The reason is in the numbers. Suppose the R value going to the monitor used to be 1.5× the G and B values. Then if you add a gamma 2 filter the 1.5× factor gets squared. The R value going to the monitor is now 2.25× the G and B values, so saturation has increased. Darkening applies to the image itself. Gamma does not. Gamma applies to particular channels by which the image is conveyed. If you convey the same image by equivalent channels, like YUV instead of RGB, and apply the gamma to the YUV instead of to the RGB, the gamma effect on the image is different. "Darken" and "gamma" have pretty well accepted meanings while "contrast" does not. To munge all three concepts together is, well, very messy.
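A two-line sketch of the arithmetic, with hypothetical code values chosen so that R is 1.5× G and B, as in the post:

```python
r, g, b = 0.6, 0.4, 0.4      # hypothetical values, R = 1.5 x G = 1.5 x B
gamma = 2.0

r2, g2, b2 = (v ** gamma for v in (r, g, b))
print(r / g, r2 / g2)        # 1.5 before, 2.25 after: the channel ratio gets squared
```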
  23. One simple explanation is ill-designed darkening filters: filters that do not take into account how the video is to be decoded and displayed. For example, if a video is encoded full range R'G'B', those values generally go through a gamma exponentiation or some other transformation to become the RGB that drive the display's primaries. If straight gamma is how, then the darkening filter should simply multiply each of the R'G'B' by some number. If simple log is how, then the darkening filter should simply subtract some number from each of the R'G'B'. If it's an unsimple transform then the darkening filter must also be unsimple.

For example, in [0,1] scaling, suppose R'=0.65 and G'=B'=0.40, and suppose the intent is gamma=2, so a ruddy brown color will be displayed made of 0.42 units of the red primary and 0.16 units of the other two primaries. But what if the darkening filter is designed for log coding, and subtracts 0.15 from each, making R'=0.50 and G'=B'=0.25? That will be displayed made of 0.25 units of the red primary and just 0.06 units of the other two. The result is dramatically redder than before the darkening. One can also imagine a stupid darkening filter for Y'CbCr video that just changes the Y' without changing the Cb and Cr. Those who believe that the chroma channels carry the chromatic information independent of the luma channel are asking for that one!

My playback and display are set up for legal range BT.709 with simple gamma 2.35. I just now made an appropriate 50% darkening filter as a 1D LUT. It doesn't redden reddish things at all.

Remember how underexposing color transparency films made flesh colors very reddish. It happened so often that you might come to believe it as a fact about color. No, it was merely because of the RGB emulsions' toes. Normally exposed Caucasian flesh in transparencies makes RGB exposures on toes, R less so than B and G. With underexposure all three are climbing up off the toe, making the gap between R density and the others increase. Red faces. This is an example of the darkening function not according with the recording function (which for transparency film is intimate with the display function).

I fully agree with Chris Millar that "software is very opaque about what is doing under the hood". Film was not that way. Software and digital cameras will not be that way if cinematographers demand so.
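A minimal sketch of the mismatch described above, using the post's own example code values: a subtractive (log-style) darkening applied to gamma-2 coded video shifts the displayed color toward red, while a multiplicative darkening preserves the ratios. The gamma-2 decode is the post's assumption, not a claim about any particular display.

```python
def display_linear(code, gamma=2.0):
    """Decode a [0,1] code value to linear display drive with a simple gamma."""
    return code ** gamma

rp, gp, bp = 0.65, 0.40, 0.40                       # R'G'B' before darkening

# Correct darkening for gamma coding: multiply the code values.
k = 0.5 ** (1 / 2.0)                                # halves the displayed light
gamma_darkened = [display_linear(v * k) for v in (rp, gp, bp)]

# Wrong darkening for this coding: subtract, as you would for log coding.
log_style_darkened = [display_linear(v - 0.15) for v in (rp, gp, bp)]

original = [display_linear(v) for v in (rp, gp, bp)]
for name, rgb in [("original", original),
                  ("multiplied (right for gamma)", gamma_darkened),
                  ("subtracted (wrong for gamma)", log_style_darkened)]:
    r, g, b = rgb
    print(f"{name}: R={r:.2f} G={g:.2f} B={b:.2f}  R/G={r/g:.2f}")
# The subtracted version ends at R/G = 4.0 vs. 2.6 before: dramatically redder.
```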
  24. My only NLE experience is with Final Cut Pro version 7, which is a mixed bag in regard to arithmetic precision. Some of its filters do 32-bit floating point arithmetic and some don't. Its Color Corrector 3-way filter is a wholesome filter while its RGB filter isn't. Its Gamma filter is a wholesome filter while its Gamma Correction filter isn't. Etc. One must take great care in such an NLE not to blow one's 10-bit image fineness. (There are some funky partial workarounds like this and this.) 32-bit floating point arithmetic should preserve an image's 10-bit gradation. So far as I can determine the sole purpose of 10-bit vs. 8-bit is the avoidance of banding. (Even 10-bit isn't quite enough when getting near black, and if your codec is 10-bit, no ultra-precise arithmetic can improve its gradation.) It's said that 10-bit allows finer color grading than 8-bit because 10-bit can produce some colors that are distinguishable from all the colors 8-bit can produce. Rather, the advantage in 10-bit grading is the avoidance of banding via those bridging colors, not the representation of extra colors in themselves. Probably more important than gradation fineness is spatial fineness, including sharpness, cleanness, etc., for which ultra-precise arithmetic doesn't help. It's the quality of the algorithms for rescaling, rotation, deinterlacing, denoising, etc. that determines this image fineness. Is Resolve top quality in all these?
  25. Topic is adrift. Heat waves can be produced by camp stoves but mirages can't. Both phenomena are results of heated air having reduced refractive index, but they are geometrically too different to lump together. I suggest cinematographers read the still-amazing 1937 book "The Nature of Light and Color in the Open Air" by M. Minnaert, both for understanding such phenomena and for enhancing perceptions.