Graeme Nattress

Everything posted by Graeme Nattress

  1. Any raw metering is for exposing the raw. Methodologies there include "ETTR" (expose to the right), protecting highlights, or ensuring you're not buried in the noise floor. These are not how light meters work: a single incident meter reading has no concept of the "scene", yet that is what the camera records. Graeme
  2. Not rubbishing them as they're not rubbish, just not 100% accurate. Are you incident metering or spot metering? In RGB false colour, what colour is the grey card? A meter won't ever tell you how to expose the RAW as it doesn't understand that. What we do is check that an 18% grey card, incident metered, will produce an RGB level (on the 0 to 100% scale) of 44% when the ISO and lens are set to the meter's recommendation. Graeme
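
A minimal sketch of that check, assuming a developed frame held as a float RGB array on a 0.0-1.0 scale and a known pixel region covering the grey card (the function name and region are hypothetical):

```python
import numpy as np

def grey_card_check(frame, region, target_pct=44.0, tolerance_pct=2.0):
    """Check that an 18% grey card lands near the expected RGB level.

    frame  : float RGB image, values 0.0-1.0 (the 0-100% scale divided by 100)
    region : (top, bottom, left, right) bounds of the grey card patch
    """
    top, bottom, left, right = region
    patch = frame[top:bottom, left:right, :]
    level_pct = patch.mean() * 100.0          # average RGB level in percent
    ok = abs(level_pct - target_pct) <= tolerance_pct
    return level_pct, ok

# Hypothetical usage: a synthetic frame whose card patch sits at exactly 44%
frame = np.full((1080, 1920, 3), 0.44)
level, ok = grey_card_check(frame, region=(500, 580, 900, 1020))
print(f"grey card reads {level:.1f}% -> {'OK' if ok else 'off target'}")
```
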
  3. Why more visible on some lights than others - no idea. I only know that when I first saw this on the RED, I thoroughly checked into it and was able to prove to myself and others that it was indeed a lens effect that we were seeing. As for other cameras and other sensors - never really checked into it. Graeme
  4. I've seen the effect on film too - there were a couple of shots in GF Newman's Law and Order that exhibited it. Graeme
  5. From my experiments, purple fringing is actually a lens effect and can be eliminated by stopping down the lens. You can easily see it's not a photosite overload effect by shooting into a bright light covered by a slit. First check that you're seeing the purple fringing, then take two more shots - one stopped down until the purple fringes have gone, and one ND'd down by the same amount as you stopped down. When I did exactly the above, the stopped-down version showed that the purple fringes had indeed gone, but they were still there on the ND'd version. This shows the issue is not the light level hitting the sensor causing an overload; it is happening in the lens. Graeme
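
For the matched comparison shot, the arithmetic is simply that each stop of aperture closed corresponds to one stop of ND, i.e. 0.3 of ND density. A tiny illustration of that equivalence (purely illustrative):

```python
import math

def nd_for_stops(stops):
    """ND density that cuts the same light as closing the aperture by `stops` stops.
    Each stop halves the light; ND density is log10 of the attenuation factor."""
    attenuation = 2 ** stops            # e.g. 2 stops -> 4x less light
    return math.log10(attenuation)      # e.g. 2 stops -> ND 0.6

for stops in (1, 2, 3):
    print(f"{stops} stop(s) down  ~=  ND {nd_for_stops(stops):.1f}")
```
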
  6. First, all cameras need low pass filters to avoid moire and aliasing. Go look at the output of the Sony F35 - lots of chroma aliasing on verticals and strong luma aliasing on horizontals, for example. Or try the Foveon stills cameras - strong luma aliasing, as they utterly neglect to put in an OLPF. A reduction from the full pixel-count resolution is necessary in any sampled system. The measured luma resolution on the RED was conservatively stated by me as 3.2k, but with improvements in the whole imaging chain, our latest measurement is 3.5k. Measured chroma resolution is, and always has been, 2k and is not reduced to 1.6k as you state above. If there is any luma in the colours at all, the resolution can increase up to the max measured luma resolution, but it doesn't drop below 2k. RED is not a replacement for film, but is being worked on as a valid alternative to film. We are not attempting to emulate film grain or the cones in the eye - we are just trying to make the best looking images we can. Graeme
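
A toy illustration of the sampling point above: detail finer than half the sample rate doesn't disappear, it folds back as a false, coarser pattern, which is why an optical low pass filter has to remove it before the sensor samples it. A sketch with made-up numbers, not camera code:

```python
import numpy as np

fs = 100.0                      # samples per unit distance (the "photosite pitch")
x = np.arange(0, 1, 1 / fs)

f_detail = 70.0                 # detail finer than the Nyquist limit of fs/2 = 50
signal = np.sin(2 * np.pi * f_detail * x)

# The samples of the 70-cycle pattern coincide with those of a 30-cycle pattern:
# sin(2*pi*70*x) = -sin(2*pi*30*x) at these sample points, so the fine detail
# masquerades as coarse, false detail (an alias).
alias = np.sin(2 * np.pi * (fs - f_detail) * x)
print("max difference between 70-cycle and (flipped) 30-cycle samples:",
      np.max(np.abs(signal + alias)))   # ~0
```
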
  7. On all silicon based sensors, blue always has less response / is noisier than red and green, so the advice would still be to stick with green-screen. Graeme
  8. We've been working hard on the development of the RAW data, learning from users' best practice in developing the R3D files, to ensure a very good starting point for grading. The posted MX footage used that new starting point, and the very good feedback we got on how it was to grade and work with validates the learning we did to create it. On the exposure side of things, we've been re-working the in-camera metering toolset to provide more good exposure tools too. Once you've had a chance to use them - both the post development and in-camera - feedback please. Graeme
  9. Fair enough Phil. I guess the problem comes from the vast number of anti-bayer comments out there, which we hear on nearly a daily basis, from all manner of sources - some big, some small, some very big that should know better, but nearly all incorrect. For some reason, those anti-bayer comments always fly in our direction, not Arri's. We like what Arri are doing - their approach makes sense to us and to them, and although we'd probably disagree on a few little bits, there's a lot of mutual respect there. And the one I just don't get is that somehow bayer colour filter array patterns are bad while RGB stripe colour filter array patterns are fine, even though the latter demonstrably show the chroma aliasing issues that supposedly make bayer pattern CFAs bad. So as for the religious crusade - I can just imagine the T-shirt advertising that in my head, and it's rather funny - I don't see it as out of proportion compared to the FUD from the anti-bayer (and sometimes pro CFA) crowd. Although only a small number of people ever take the time to post on forums, a lot more people read them, and pass on the wisdom they have learned. Graeme
  10. I'd stand on oath that you told me it was a 1k camera. I know you've changed your mind since, but that is the opinion you personally gave me at least once during that event. As for "If you want true, full whack, no holds barred 4:4:4, Red is a 1K camera!", it is also incorrect: even if you were to totally ignore the 2nd green in each bayer block, you would still have an RGB triple at 2k resolution, as sketched below. Graeme
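
A toy sketch of that counting argument, assuming an RGGB block layout and no filtering or demosaicing at all (array names are made up): each 2x2 block still yields one full R, G, B triple even with the second green thrown away, i.e. half the linear photosite count, not a quarter.

```python
import numpy as np

def naive_halfres_rgb(bayer):
    """Collapse an RGGB bayer mosaic to one RGB pixel per 2x2 block,
    ignoring the second green entirely (no demosaicing, no filtering)."""
    r  = bayer[0::2, 0::2]   # top-left of each block (red)
    g1 = bayer[0::2, 1::2]   # top-right green (the one we keep)
    b  = bayer[1::2, 1::2]   # bottom-right (blue); bottom-left green is discarded
    return np.stack([r, g1, b], axis=-1)

bayer = np.random.rand(2304, 4096)          # ~4k-wide mosaic of raw photosites
rgb = naive_halfres_rgb(bayer)
print(rgb.shape)                            # (1152, 2048, 3): a 2k RGB image
```
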
  11. Correct - Nyquist is not nonsense. It's often misquoted and misunderstood, but not nonsense. It can be counter-intuitive though, and I suspect that's where a lot of the misunderstandings come from. I remember what it was like myself when I was introduced to it in the audio realm and couldn't quite grasp how a 22kHz waveform could be represented in a 44.1kHz CD signal. But I got it figured out, and I remember generating test pulses to put through the D-to-A into an old tube oscilloscope so I could look at the effects of the output filter.... Graeme
  12. First, the condition is > not >= - "more than twice", not "at least twice". Second, the image you show is not bandwidth limited - it's square waves, and hence has infinite frequency content. Low pass filter them so that no frequency above the fundamental exists, then make sure to put the result through a reconstruction filter, and what goes in = what goes out. Graeme
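
A small numerical check of that statement, using spectrum zero-padding as the ideal reconstruction filter for a periodic signal (illustrative values only): low-pass the square wave to its fundamental, sample at more than twice that frequency, reconstruct, and what goes in equals what comes out.

```python
import numpy as np

f0 = 5                 # fundamental (Hz) of the bandlimited square wave
fs = 12                # sample rate, > 2 * f0
N = 48                 # 4 seconds of samples, an integer number of cycles

t = np.arange(N) / fs
# Low-pass the square wave down to its fundamental: all that's left is a sine
# with amplitude 4/pi (the first term of the square wave's Fourier series).
x = (4 / np.pi) * np.sin(2 * np.pi * f0 * t)

# Ideal reconstruction for a periodic, bandlimited signal: zero-pad the spectrum.
up = 16
X = np.fft.rfft(x)
X_padded = np.zeros(N * up // 2 + 1, dtype=complex)
X_padded[: len(X)] = X
x_fine = np.fft.irfft(X_padded, n=N * up) * up

t_fine = np.arange(N * up) / (fs * up)
err = np.max(np.abs(x_fine - (4 / np.pi) * np.sin(2 * np.pi * f0 * t_fine)))
print(f"max reconstruction error: {err:.2e}")   # essentially zero: in = out
```
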
  13. To resolve N cycles, you need at least 2*N samples. Therefore, to resolve N line pairs, you need at least 2*N samples, which means to resolve N lines (not line pairs) you need at least N samples. Graeme
  14. http://www.studiodaily.com/blog/?p=603 quotes 15 seconds a frame on one core for Dalsa. When I was last optimizing the RED code, it was less than 1 second for a frame on one core. But the 4k footage runs through one of two pathways in the Color workflow (switchable with the preference pane for FCP / QuickTime): either 2k from 4k "half standard", which is very fast indeed, or "half high", which does a full demosaic and downsample from 4k to 2k but is optimized for that purpose and hence runs much faster than a full demosaic to 4k followed by a downsample. Graeme
  15. SNR can work out well for luma though - you just sum the top, middle and lower responses. However, chroma is suspect, especially after the very strong color correction matrix that is needed to get RGB out. Extensive chroma noise reduction is part of the basic raw decode pathway (see the Foveon docs for more info). Keith, as you point out, silicon just isn't a very good color filter. The end result is a clever and unique technology, but, at the resolutions we'd want to use, way too slow for digital cinema use.
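
A rough numerical sketch of those two effects, with made-up numbers and a made-up correction matrix (not Foveon's): summing three layers grows the signal faster than the uncorrelated noise, while large off-diagonal matrix terms amplify the noise in every output channel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = 1.0
noise_sigma = 0.05

# Three sensor layers seeing the same grey patch, each with independent noise
layers = signal + noise_sigma * rng.standard_normal((3, n))

# Luma from the straight sum: signal triples, noise only grows by sqrt(3)
luma = layers.sum(axis=0)
print("luma SNR gain over one layer:",
      (luma.mean() / luma.std()) / (signal / noise_sigma))   # ~sqrt(3)

# A strong, hypothetical colour correction matrix with big off-diagonal terms
M = np.array([[ 2.5, -1.0, -0.5],
              [-0.8,  2.4, -0.6],
              [-0.4, -1.1,  2.5]])
rgb = M @ layers
print("noise amplification per output channel:",
      rgb.std(axis=1) / noise_sigma)    # each well above 1.0
```
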
  16. Foveon always has been a CMOS process from what I understand.
  17. Phil, you completely miss the important point that your 8-bit video is gamma encoded, whereas sensor bit depth is generally linear-light bit depth.
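
A small illustration of why those bit depths can't be compared directly, assuming a simple 1/2.2 power curve for the gamma encoding: gamma spreads its code values fairly evenly across the stops, whereas linear code values pile up in the top stop.

```python
def codes_per_stop(bits, gamma=1.0, stops=8):
    """Rough count of code values landing in each of the top `stops` stops.
    gamma=1.0 means linear encoding; gamma=2.2 approximates video gamma."""
    max_code = 2 ** bits - 1
    counts = []
    for s in range(stops):
        hi, lo = 2.0 ** -s, 2.0 ** -(s + 1)          # linear-light bounds of this stop
        code_hi = int(max_code * hi ** (1 / gamma))
        code_lo = int(max_code * lo ** (1 / gamma))
        counts.append(code_hi - code_lo)
    return counts

print("8-bit gamma 2.2 :", codes_per_stop(8, gamma=2.2))   # codes spread across stops
print("8-bit linear    :", codes_per_stop(8))              # half the codes in the top stop
print("12-bit linear   :", codes_per_stop(12))             # same shape, just more codes
```
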
  18. Well, as you know, cheap cameras with bayer sensors (even high end DSLRs in JPEG mode) don't exactly use sophisticated demosaic algorithms. In software, we have the luxury of doing some pretty clever stuff. For small chips, yes, one is cheaper than three, but also remember that that one chip is highly unlikely to have as many photosites as the three chips combined. As chips get larger, they get more expensive, but it's not a linear progression based on area - more exponential - so as you get up to very big chips (ie S35 size and beyond) I'd hazard that the three smaller chips are cheaper, even with the associated prism and alignment issues. But the thing is, you and I don't know for certain on that - it's just friendly speculation between us :-) So yes, at the low resolution end a single chip is much cheaper, but I don't think so at the large end of things. Either way, I think the argument is an interesting one. Graeme
  19. AFAIK - and maybe someone who works at a fab can chime in here - one large chip is generally more expensive than three smaller chips of 1/3 the photosite count?
  20. Yes I have, but it's hard to see the slightly lower chroma resolution other than on such a test chart. It's not the kind of thing you notice in actual use of the camera. That's because we hardly ever find a chroma edge without an associated luma edge, and our eyes just don't see sharp chroma only edges that well. As I said, every camera is a compromise, but I think, for the reasons I pointed out above, that the Bayer pattern is the most perceptually useful compromise for a given number of photosites. Graeme
  21. Of course, 3 chip cameras often have offsets in their chips which stop them delivering properly co-sited chroma in 4:4:4. And of course, if you try to get a triple of 1920x1080 sensors to actually deliver measurable 1920x1080 resolution, you will find that you've contaminated your precious image with significant levels of aliasing artifacts. To me, the only engineering-correct and aesthetically correct approach is to properly optically low pass filter to achieve significant attenuation at the full resolution of the sensor, making sure that any modulation caused by aliasing is at a practical minimum. That's why we "only" get resolution out to around 3.2k (~78% of the linear resolution, a slightly higher number than I originally quoted at the start of the project, but I've learned a lot since then).

      Say you have 12mp of photosites. What is the best way to arrange them?
      - 3 x 2.66k chips (the prism + three chip approach)
      - 12mp = 4.61k bayer pattern sensor (what we do, near enough)
      - 12mp RGB stripe, 4.61k (Genesis approach)

      If you properly optically low pass filter, the three chip approach will give you about 2.4k resolution in R, G and B, but it's limited to using 2/3" chips and lenses. It will give a great image, but you've limited your lens choice. You may also get lower dynamic range than current HD cameras because of the finer pitch of the photosites.

      The bayer pattern approach will give around 3.6k luma resolution and 2.3k chroma resolution. Notice how you don't lose much chroma resolution at all compared to the 3 chip approach, and as our vision is relatively insensitive to chroma resolution, all you'll see when projected is the higher luma resolution. However, if you're not careful in your demosaic algorithm, you may still get some chroma moire. We deal with this completely in our demosaic though.

      The RGB stripe approach is tricky. If you don't have an OLPF, you'll get 4.61k / 3 (you need a triple of photosites to get your pixel, as the arrangement of photosites is unsuitable for the kind of demosaicing we can do with bayer patterns) = 1.53k. However, the Genesis uses a non-square pixel approach of 1920x3 horizontally and 1080x2 vertically = 12.4mp, giving a max resolution before adding the OLPF of 1.9k. That would probably be around 1.7k with a proper OLPF.

      All of the above are compromises; however, I do think the above shows that the bayer pattern is the one that gives you the most perceptually relevant resolution for your budget of photosites. In the above I'm taking into account that we generally use a stronger OLPF on bayer pattern sensors, and the effects of a known high quality demosaic algorithm. The 3 chip and RGB stripe figures are guesstimates only, and would vary with the specific implementation of the OLPF and sensors. I've not seen, for instance, an F35 or Genesis pointed at a zone plate, so I don't know how strongly or weakly they set their OLPF, and I don't know if they do any software processing to account for the fact that the RGB on their sensor is not co-sited as it is in the three chip approach. Graeme
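
A quick back-of-envelope of the photosite budget arithmetic above, using a 16:9-ish aspect ratio and the rough resolution factors quoted in the post (which are, as stated there, guesstimates):

```python
import math

BUDGET = 12e6                 # photosites to spend
ASPECT = 16 / 9

def width_for(photosites, aspect=ASPECT):
    """Horizontal photosite count for a given total at a given aspect ratio."""
    return math.sqrt(photosites * aspect)

single_chip_width = width_for(BUDGET)          # single 12mp chip: ~4.61k across
three_chip_width = width_for(BUDGET / 3)       # each of three 4mp chips: ~2.66k across

print(f"single chip     : {single_chip_width / 1000:.2f}k photosites wide")
print(f"each of 3 chips : {three_chip_width / 1000:.2f}k photosites wide")

# Delivered resolution using the rough factors from the post:
print(f"3-chip, OLPF'd RGB      : ~{0.90 * three_chip_width / 1000:.1f}k")
print(f"bayer luma (~78% factor): ~{0.78 * single_chip_width / 1000:.1f}k")
print(f"RGB stripe, no OLPF     : ~{single_chip_width / 3 / 1000:.2f}k")
```
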
  22. And assessing DR in stops would, to me, necessitate calculating back to a reference of linear light data; if that is not done accurately, the measured scale of the noise will be incorrect.
  23. We've only ever relied on our own way of doing things. The production of the Imatest screen was to show exactly what I've been talking about - that mucking around with curves gives different results, and hence it is not reliable for us. Joofa - you're right that dynamic range should be measured on linear light data, as that's the only common reference. Then you're actually measuring how the sensor responds to light, its linearity, and the noise as it does so. With the funky curves that some cameras burn into their data, you have absolutely no idea what actual light level a code value refers to, which was fine back in the video age, but is not so cool in the data-centric digital cinema age.
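
For concreteness, the measurement being argued for is simple on linear-light data: dynamic range in stops is just the log2 ratio of the saturation level to the noise floor, which only means anything when code values are proportional to light. Hypothetical numbers:

```python
import math

clip_level  = 4095.0      # hypothetical 12-bit sensor saturation (linear code value)
noise_floor = 1.5         # hypothetical read-noise level, in the same linear units
print(f"~{math.log2(clip_level / noise_floor):.1f} stops")   # ~11.4
```
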
  24. What you don't seem to follow is that the curve (as long as it doesn't clip highlights or crush shadows out of existence) should not and cannot alter DR. It certainly cannot IMPROVE it over the linear light export (given sufficient bit depth, which surely a 16-bit TIFF has for 12-bit sensor data). Now, in your video-centric world, a curve can make a difference, as you're going from a high bit depth A-to-D to often 8-bit tape, and the nature of that curve and how it distributes code values is important. However, in a digital cinema, data-centric way of working, where I can access those 12 bits of linear light sensor data in linear light form, without a curve applied, I expect to be able to measure the DR in that form and get a reasonable answer, which I can't with that software. This discussion is getting very pointless now. I don't know how many times I can explain things. Going off on a tangent into demosaicing doesn't help - demosaicing does not alter the dynamic range at all, in any way. The myriad of parameters you mention are irrelevant. Just think it through.... If a curve could improve the DR of a camera, don't you think a) we'd be doing it, and b) everyone else would be boosting their DR with a magic curve?
  25. That you don't seem to understand why this is an issue is the cause of our problems. Until you do understand it, we can discuss this no further, as I have tried on multiple occasions to get across that this is a fatal flaw in the analysis software. I know for certain that my linear light data has as much DR as the camera can have. There is NOTHING I can do to it that will improve its DR, as it's basically a dump of the sensor data that was captured. Everything I can do to it (in the form of altering the linearity of the data with a curve) will either preserve the DR or make it worse. Now.... I put a curve on it, and the Imatest software now tells me I have more DR. No I don't. I have the same amount - the exact same amount I started with. I put a different curve on it, and the number goes up again. Now do you understand? Graeme
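
A small demonstration of the point in the last few posts, with made-up numbers and a hypothetical power curve: a monotonic, non-clipping curve changes nothing real - undo it and the measured DR is identical - while a number computed on the curved data as-is moves around with the curve.

```python
import numpy as np

rng = np.random.default_rng(2)
clip = 4095.0
pedestal = 20.0                                   # keeps black-frame values positive
black = pedestal + 1.5 * rng.standard_normal((1080, 1920))   # linear 12-bit-style data

def dr_stops(linear, clip_level):
    """DR in stops = log2(saturation / noise floor), on linear-light data."""
    return np.log2(clip_level / linear.std())

curve   = lambda x: (x / clip) ** (1 / 2.2)       # hypothetical monotonic "look" curve
inverse = lambda y: clip * y ** 2.2               # its exact inverse

print("DR of the linear data            :", round(dr_stops(black, clip), 2))
print("DR after curve, then curve undone:", round(dr_stops(inverse(curve(black)), clip), 2))
print("number from the curved data as-is:",
      round(np.log2(1.0 / curve(black).std()), 2),
      "<- different, but it measures the curve, not the sensor")
```
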