Everything posted by DJ Joofa

  1. It has to do with the MTF response of the film. Kindly see the graphs in Glen Kennel's paper in the May 1991 issue of the SMPTE Journal, and in his book, where he compares 2K, 3K, and 4K scans of a film negative and notes, for example, that at a 25% increase in MTF over some base value the film negative response was 40% for the 2K scan, 50% for the 3K scan, and 53% for the 4K scan, and so on. (These numbers are from memory, but I am quite sure they are in that ballpark.) His graphs show some difference between the 2K and 3K scans but very little difference between 3K and 4K: just diminishing returns after 3K.
  2. Gamma has some serious implications depending on whether it is applied to: (1) relative luminance obtained from linear-light R,G,B; or (2) the individual R,G,B channels, with luma then derived from the nonlinear R*, G*, B*. Option (1) results in better reproduction of luminance detail, whereas (2) results in better color fidelity near saturated primary colors. (A small numerical sketch is below.)
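
    A minimal sketch of the contrast, assuming Rec. 709 luminance weights and a plain 1/2.4 power law (not any particular camera's transfer function):

        GAMMA = 1 / 2.4
        W = (0.2126, 0.7152, 0.0722)  # Rec. 709 luminance weights

        def encode(v):
            return v ** GAMMA  # simple power-law transfer; no linear toe

        def luminance_then_gamma(r, g, b):
            # (1): derive relative luminance from linear light, then apply gamma
            y = W[0] * r + W[1] * g + W[2] * b
            return encode(y)

        def gamma_then_luma(r, g, b):
            # (2): gamma-encode each channel, then form luma from R*, G*, B*
            return W[0] * encode(r) + W[1] * encode(g) + W[2] * encode(b)

        # For a saturated red, the two orderings disagree badly:
        print(luminance_then_gamma(1.0, 0.0, 0.0))  # ~0.525
        print(gamma_then_luma(1.0, 0.0, 0.0))       # ~0.213: much darker "luminance"
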
  3. There is some difference between a 2K and a 3K DI scan, but little difference between 3K and 4K scans; therefore, film scans may not need to go above 3K, IMO.
  4. DJ Joofa

    It's a sad shame

    Lance, care must be taken whenever judging the merit of a camera based upon projection. With some of the current signal-processing techniques used in cameras, it is not mathematically possible to satisfy both of the following *everywhere*: (1) (more) accurate reproduction of luminance detail; (2) color fidelity near saturated primary colors. Hence, a novice analysis that sees an imperfection manifested in (1) or (2) may wrongly attribute it to the capability of the camera.
  5. DJ Joofa

    RED APRIL

    You are right; I agree with you.
  6. DJ Joofa

    RED APRIL

    True; however, rolling shutter causes flicker, and additionally banding with certain lights, as also reported on RedUser.net. Kindly see the image below, where I have drawn exposure time on the x-axis and the magnitude of image banding due to light flicker (normalized in some sense) on the y-axis. Only at those locations where the blue line meets the red does the banding have a "flicker-free" amplitude; elsewhere, in theory, banding/flicker should be visible. The amplitude of these bands depends upon the exposure time, the frequency should remain constant, and the scrolling between frames (i.e., the phase difference) is due to the frame rate (the beat frequency between frame rate and flicker frequency). Hence, if the frame rate is matched correctly, the scrolling should stop, but the static banding should remain. However, in theory, if the exposure time is properly matched, then there is no need to match the frame rate, as the amplitude of the flicker has been made to vanish. If the exposure time is not matched, the amplitude of the flicker is reduced by selecting a longer exposure time, though at the expense of more motion blur. (A small numerical sketch of this relationship is below.)
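
    A rough numerical sketch of the relationship, assuming a light whose intensity flickers sinusoidally at 120 Hz (60 Hz mains): the banding amplitude integrated over the exposure falls off roughly as |sinc|, vanishing whenever the exposure time is an exact multiple of the flicker period.

        import math

        FLICKER_HZ = 120.0  # light flicker frequency (assumed: 60 Hz mains)

        def banding_amplitude(exposure_s):
            """Normalized flicker-banding amplitude for a given exposure time."""
            x = math.pi * FLICKER_HZ * exposure_s
            return 1.0 if x == 0 else abs(math.sin(x) / x)

        for denom in (48, 60, 120, 240):
            t = 1.0 / denom
            print(f"1/{denom}s -> amplitude {banding_amplitude(t):.3f}")
        # 1/120s and 1/60s exposures (multiples of the flicker period) give ~zero
        # banding; 1/240s leaves strong banding; longer exposures shrink it (~1/x).
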
  7. DJ Joofa

    RED APRIL

    It is possible to have a global shutter in CMOS, and Red would have been a more revolutionary camera if it had opted for it.
  8. My understanding is that R3D files are wavelet compressed; therefore, it is easy to extract a 2K image from the 4K wavelet coefficients without doing the full decompression. Maybe that is the reason Scratch does 2K out of 4K. The point is that from wavelet data one can get a faster, smaller-size preview (of pretty good quality) for color-grading purposes without doing a full decode, hence saving time/horsepower. (A rough sketch of the idea is below.)
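
    A minimal sketch of the multi-resolution idea, using a one-level Haar transform as a stand-in (R3D's actual wavelet codec is proprietary, so this is only illustrative): the LL subband alone is already a half-resolution image, so a "2K from 4K" decode can stop there.

        import numpy as np

        def haar_level(img):
            """Split an even-sized grayscale plane into LL, LH, HL, HH subbands."""
            a = img[0::2, 0::2]; b = img[0::2, 1::2]
            c = img[1::2, 0::2]; d = img[1::2, 1::2]
            ll = (a + b + c + d) / 4.0   # half-resolution approximation image
            lh = (a - b + c - d) / 4.0   # horizontal detail
            hl = (a + b - c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            return ll, lh, hl, hh

        frame_4k = np.random.default_rng(0).random((2160, 4096))  # stand-in 4K plane
        ll, _, _, _ = haar_level(frame_4k)  # the "2K" preview: no full reconstruction
        print(ll.shape)                     # (1080, 2048)
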
  9. I see. Makes sense. You are right.
  10. I would think that the opposite should be true. You want the lens in the equation. Basically, you want to test the output of the Red and the F23, and hence it is very important that your input to the two systems be the same. Putting the *same* quality lens on both will not do justice to the smaller-sized chip of the F23 if the MTF of the lens is not matched to the F23 chip. As long as that is not true, which would happen frequently in practice because of the limited MTF of lenses, you want to *normalize* the input, and hence use correspondingly *different* lenses on the two cameras that provide the same level of input detail in relation to their different sensor sizes. (A back-of-the-envelope illustration is below.)
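
    A back-of-the-envelope illustration of the normalization argument, with nominal sensor widths assumed (~24.4 mm for the Red One's Super 35 sized chip, ~9.6 mm for the F23's 2/3" chip):

        TARGET_LW_PPW = 2000  # desired detail: line widths per picture width

        for name, width_mm in (("Red One, ~24.4 mm chip", 24.4),
                               ("F23, ~9.6 mm chip", 9.6)):
            lp_per_mm = (TARGET_LW_PPW / 2) / width_mm  # line pairs per mm on chip
            print(f"{name}: lens must resolve ~{lp_per_mm:.0f} lp/mm")
        # ~41 lp/mm vs. ~104 lp/mm: the smaller chip demands ~2.5x the lens MTF for
        # the same picture detail, so the *same* lens does not equalize the input.
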
  11. I think people are unnecessarily complicating what a "raw" format is. Context makes it clear in most situations what format we are talking about. In addition, there is no standardized notion of what exactly a raw format is. Even the raw format you get from Red is definitely not raw raw [sic], because I am sure they have applied calibration such as dark-current offsetting, black-level adjustment, gain (even if fixed), etc. Now, at what stage do you stop? I have seen people go to the ludicrous extent of asking whether an analog-gain-processed signal should be called raw, or whether it is only raw when the gain is applied digitally in software (say, RedCine), etc.
  12. It is possible that interlaced TV existed in the 1930s and perhaps earlier. However, I wanted to draw the distinction between some people's analyses of the detail loss in progressive scanning, as opposed to interlace twitter, which is a separate phenomenon.
  13. Evangelos, since you have the infrastructure and are well placed technically, maybe you can do an experiment to reverse engineer the sensitivity curves of Red's Mysterium sensor. It may not give the exact curves, but it is doable and instructive, and might require stuff such as Wolf Faust's targets or equivalent. PM me and I shall send you some information on how to do this. (A rough sketch of the estimation step is below.)
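
    For the curious, a minimal sketch of the estimation step I have in mind, with all names and numbers illustrative: given the known spectral reflectances of a target (e.g., an IT8 chart) under a known illuminant, plus the camera's raw response per patch, one channel's sensitivity can be estimated by regularized least squares.

        import numpy as np

        def estimate_sensitivity(reflectance, illuminant, responses, lam=1e-3):
            """reflectance: (patches, wavelengths); illuminant: (wavelengths,);
            responses: (patches,) raw camera values for one color channel."""
            A = reflectance * illuminant  # spectrum reaching the sensor, per patch
            n = A.shape[1]
            # Tikhonov-regularized least squares; lam tames noise amplification
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ responses)

        # Synthetic sanity check: recover a known Gaussian "green" sensitivity
        wl = np.linspace(400, 700, 31)
        true_s = np.exp(-((wl - 540.0) / 40.0) ** 2)
        rng = np.random.default_rng(1)
        refl = rng.random((200, 31))                 # stand-in target reflectances
        resp = refl @ true_s + rng.normal(0, 0.01, 200)
        est = estimate_sensitivity(refl, np.ones(31), resp)
        print(np.corrcoef(true_s, est)[0, 1])        # close to 1 on synthetic data
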
  14. I don't know how to edit my original post, so I am responding to my own post. I corresponded with Adam Wilt, and indeed he is aware of the difference between the original usage of the term Kell factor, as applied to vertical detail loss in progressive scanning, and the flicker/twitter resulting from interlaced scanning. However, his response was that he does not want to scare the audience away by going into details. Fair enough for me, though I still believe that the term Kell factor should be reserved for its original intended meaning of resolution loss in progressive video. On the other hand, Kell was not the only person trying to determine this elusive number. At least half a dozen different people were approaching the issue from different angles; in fact, Mertz and Gray published their findings in July 1934 (factor = 0.53) before Kell et al. (factor = 0.64) in Nov. 1934, which Kell later revised to a higher value.
  15. Originally, the term Kell factor applied only to progressive scanning, and understandably so, because interlaced TV had not been invented when people did their analyses of the loss of vertical detail in progressive scanning. The loss of vertical detail due to interlacing is sometimes also called the "Kell factor of interlacing"; however, it is not to be confused with the original meaning of Kell factor, which arises from the scanning-spot size in progressive video. I.e., in the original concept for progressive video, the issue was vertical detail lining up with the scanning spot, whereas in interlaced scanning the issue is the flicker/twitter resulting from interlaced fields not being displayed at the same time. It appears even Adam Wilt mixed up the two on this page: http://www.adamwilt.com/TechDiffs/FieldsAndFrames.html There is no scanning spot in digital sensors today. However, the original concept of Kell factor was due to the fact that vertical detail may not line up with the scanning spot, and that can still be applied today to digital sensors. For example, one would get different responses if alternating black and white lines, approaching the size of the photosites on a digital sensor, were displaced slightly so that they do not line up properly with the photosites and their effect is averaged out to some extent. (A small numerical illustration is below.)
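
    A small numerical illustration of that last point, assuming idealized box-averaging photosites: alternating black/white lines at the photosite pitch give full contrast when aligned with the photosites, and the contrast collapses when the pattern is shifted by half a photosite.

        import numpy as np

        def sampled_contrast(phase, pitch=1.0, oversample=1000):
            """Contrast of a line pattern (period = 2*pitch) seen by box photosites."""
            x = np.linspace(0.0, 16 * pitch, 16 * oversample, endpoint=False)
            pattern = np.floor((x + phase) / pitch) % 2           # 0/1 line pattern
            sites = pattern.reshape(-1, oversample).mean(axis=1)  # average per site
            return sites.max() - sites.min()

        print(sampled_contrast(phase=0.0))  # 1.0: lines line up with the photosites
        print(sampled_contrast(phase=0.5))  # 0.0: half-pitch shift averages them out
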
  16. I was talking about the noise reduction that can inherently result from naive deBayering techniques that do simple averaging. I am sure noise reduction is employed by different cameras at many other levels.
  17. That could be due to the oh-so-elusive Kell factor. It would appear that wide-angle shots of fine detail such as trees and shrubs, more so with anamorphic squeezing, will smear out some detail because of the Kell factor, which has been variously reported as 0.64 - 0.9.
  18. I agree with noise reduction in darker image areas. I have seen many times how it improves the overall perceived image quality.
  19. I agree with you that an SNR-based analysis via software is an acceptable and legitimate measure of DR. I think the point of contention was that Graeme Nattress is of the view that ImaTest is not predicting the applied gamma accurately enough. Therefore, the discussion boils down to whether SNR should be measured before gamma or after gamma -- and it is difficult to have a unanimous opinion on this. (A small demonstration of why the choice matters is below.)
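
    A small demonstration of why the before/after choice matters, using an assumed 1/2.2 power-law gamma: the same dark patch measures a different SNR once gamma-encoded, because gamma stretches shadow values (and their noise) nonuniformly.

        import numpy as np

        rng = np.random.default_rng(4)
        patch = np.clip(0.01 + rng.normal(0.0, 0.002, 100_000), 1e-6, 1.0)  # linear

        def snr_db(x):
            return 20 * np.log10(x.mean() / x.std())

        encoded = patch ** (1 / 2.2)  # simple power-law display gamma (assumption)
        print(snr_db(patch))          # ~14 dB in linear light
        print(snr_db(encoded))        # ~21 dB after gamma: same data, new number
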
  20. Your point is well taken. Leaves of trees are difficult candidates, as they may cause Moiré-type artifacts if the image variation becomes smaller than the Bayer-pattern spacing of the individual Red, Green1, Green2, and Blue channels of the Red camera. Even if they don't cause artifacts, in the limiting case where the variation of the image just matches, or is close to, the spacing of the color channels in the Bayer pattern, simple deBayering techniques that do per-channel signal averaging may not benefit from the scenario I was describing in my earlier posts. However, for images where tree leaves are not a significant portion of the whole frame, I would tend to believe that for the rest of the scene the per-channel signal variation in the immediate neighborhood of a given location in the Bayer grid may be less than the noise variation at the corresponding positions. Hence, if we are able to reduce the relative noise variation more than the signal variation, we may get a cleaner signal because of the greater noise reduction.
  21. It is possible to extract more dynamic range if the DR measure is defined in terms of SNR -- not by increasing the signal, but by decreasing the noise levels. For uncorrelated noise with the same mean and standard deviation at different sample locations, the standard deviation of the average of the noise decreases (by a factor of sqrt(N) for N samples). Chebyshev's inequality can be used to verify that; basically, the average of a set of measurements is likely to be closer to the actual value, even if each individual measurement involves more or less error. The signal characteristics are assumed to stay the same, and cross-correlation between signal and noise is absent by assumption. Hence, the SNR should increase, and therefore the DR. (A quick numerical check is below.)
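
    A quick numerical check of this, assuming Gaussian noise uncorrelated across four samples:

        import numpy as np

        rng = np.random.default_rng(0)
        sigma, N, trials = 1.0, 4, 100_000

        noise = rng.normal(0.0, sigma, (trials, N))  # N uncorrelated samples each
        avg = noise.mean(axis=1)                     # average them

        print(noise.std())  # ~1.0: per-sample noise
        print(avg.std())    # ~0.5 = sigma / sqrt(N); a static signal is unchanged
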
  22. Evangelos, thanks for your appreciation. My name is Dapan Joofa, but I prefer to go by just Joofa. I am an engineer. I have followed some of your and Graeme Nattress's comments, and I agree with many of the points both of you have made. I agree with Graeme Nattress that "downstream" stuff such as gamma, etc., should not be used for dynamic-range calculation. On the other hand, I also agree with you that Jim Jannard should not have cited ImaTest as an example of the dynamic range of the Red One camera if Graeme Nattress has reservations about it.
  23. From elementary probability theory, the SNR should improve for a sum of noisy samples of uncorrelated random variables. If we are shooting a *static* scene; if the temporal noise on the sensor is uncorrelated between neighboring samples of the same frame, and even between samples at the same position on different frames; if neighboring samples of the same color are assumed to have (more or less) the same signal value; and if the temporal noise is also uncorrelated with the signal values, then the SNR should improve, and SNR can be used to derive a measure of higher dynamic range in this sense. It is easy to see that naive deBayering techniques that use samples of the same color for reconstruction benefit from improved dynamic range in the manner described above, as some sort of averaging is done somewhere in the process. However, more sophisticated deBayering techniques look at samples of different colors in addition to higher-order derivatives and curvatures, and it may be difficult to quantify in closed form the exact dynamic-range gain/loss due to the deBayering process. But it appears to me that the dynamic range after deBayering may differ from that of the actual sensor frame, whether as a gain or a loss. Temporal filtering can certainly help to improve dynamic range in the manner described above. Hence, even if a single frame is used for the dynamic-range calculation by software, say ImaTest, if that single frame was derived by temporal filtering, it should have more dynamic range than the actual sensor frame. (A sketch of the temporal-filtering effect is below.)
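
    A sketch of that last point, assuming a static scene and temporally uncorrelated Gaussian sensor noise: averaging K frames before handing a single frame to the measurement tool raises the measured SNR by about sqrt(K).

        import numpy as np

        rng = np.random.default_rng(2)
        scene = rng.uniform(0.2, 0.8, (480, 640))  # static signal, one channel
        K, sigma = 8, 0.05

        frames = scene + rng.normal(0.0, sigma, (K, 480, 640))  # K noisy exposures
        single, filtered = frames[0], frames.mean(axis=0)       # one frame vs. avg

        def snr_db(img):
            return 20 * np.log10(scene.std() / (img - scene).std())

        print(snr_db(single))    # baseline SNR of one raw frame
        print(snr_db(filtered))  # ~10*log10(8) = 9 dB higher after filtering
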
  24. This is not true. The GAIN setting on a camera is useful if it is implemented before the ADC, as analog gain has better noise characteristics than digital gain applied to the raw data after the ADC. (A toy illustration is below.)
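
    A toy illustration of one reason why, assuming a 12-bit ADC and looking only at quantization error (real analog gain also helps against post-gain read noise):

        import numpy as np

        def adc(v, bits=12):
            """Quantize a 0..1 voltage to integer ADC codes."""
            return np.clip(np.round(v * (2**bits - 1)), 0, 2**bits - 1)

        rng = np.random.default_rng(3)
        signal = rng.uniform(0.0, 0.004, 100_000)  # very dark scene, near the floor
        gain = 8.0

        analog = adc(signal * gain)   # gain applied before quantization
        digital = adc(signal) * gain  # gain applied after quantization
        ref = signal * gain * (2**12 - 1)

        print((analog - ref).std())   # ~0.29 LSB of quantization error
        print((digital - ref).std())  # ~8x larger: the error got amplified too
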