Evan Richards

Basic Member

  • Content Count: 19
  • Joined
  • Last visited

Community Reputation: 1 Neutral

Profile Information

  • Occupation: Other
  • Location: Boston
  1. Yes, I suppose that makes sense. When you put it that way, it seems so logical. But then I think about it more... and it starts to confuse me. Maybe my definition of a "stop" is off. I had always thought of it as doubling or halving the light: if you increase your aperture by one stop, you double your light; if you drop in a .3 ND (1 stop), you cut the light in half. So it seems like, with two cameras of different dynamic range, as long as you start from the same exposure on both, doubling the light (increasing by one stop) would be the same increment regardless of sensor. Double is double. Different dynamic ranges allow one camera to capture MORE stops than another, but the stops themselves would be the same. How am I thinking about this wrong?
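The doubling/halving definition above can be written down directly: the number of stops between two light levels is log2 of their ratio, which is camera-independent by definition; dynamic range only changes how many such stops fit between the noise floor and clipping. A minimal sketch:

```python
import math

def stops_between(light_a: float, light_b: float) -> float:
    """Stops between two linear light levels: log2 of their ratio."""
    return math.log2(light_b / light_a)

print(stops_between(100, 200))  #  1.0 -> doubling the light is one stop up
print(stops_between(100, 50))   # -1.0 -> a .3 ND halves it: one stop down
```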
  2. In my mind I guess I was thinking exposure would be an objective measurement, and a one-stop increase would be the same irrespective of which camera is being used. So it seems like what you're implying is that the best way to measure where the exposure lies would be to shoot a test and view it on a waveform monitor: expose a gray card to 50% (just for ease of measurement), then expose up one stop at a time until it reaches white, and down one stop at a time until it reaches black. Thanks David!
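A minimal sketch of that test, assuming one still frame is grabbed per exposure step; the file names and patch size here are hypothetical:

```python
# Load one still per exposure step and print the mean level of a center
# patch of the gray card, waveform-style (0-100% scale).
import numpy as np
from PIL import Image

STEPS = ["minus3", "minus2", "minus1", "base", "plus1", "plus2", "plus3"]

for step in STEPS:
    img = np.asarray(Image.open(f"graycard_{step}.png").convert("L"), dtype=float)
    h, w = img.shape
    patch = img[h//2 - 50 : h//2 + 50, w//2 - 50 : w//2 + 50]  # center 100x100 crop
    print(f"{step}: mean level {patch.mean() / 255 * 100:.1f}%")
```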
  3. There has to be a correlation, right? Let's say I shoot an 18% gray card and it sits at 50 IRE on a waveform monitor. What would an increase of one stop read? A decrease of one stop? I'm trying to create some customized false color LUTs, just for my own purposes, to analyze images in Resolve. I'd like to be able to say: gray = middle gray, one stop over that should be yellow, two stops over that should be orange, three stops over that should be red, etc. But I'm struggling with how to measure the exposure gains. Thanks!
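One way to estimate where those stops land: if the signal is encoded with a pure power-function gamma (an assumption; Rec. 709 and log curves will land differently), each stop multiplies the encoded level by 2^(1/gamma). A sketch:

```python
# Where does middle gray land after +/- N stops, assuming a pure power-law
# gamma encode? Gamma = 2.4 here is an assumption; log curves differ.
GAMMA = 2.4
MIDDLE_GRAY_IRE = 50.0  # gray card placed at 50 on the waveform, as above

for stops in range(-3, 4):
    level = MIDDLE_GRAY_IRE * (2 ** stops) ** (1 / GAMMA)
    print(f"{stops:+d} stops: {min(level, 100):.1f} IRE")
```

Under those assumptions, +1 stop reads about 67 IRE and +2 about 89, with +3 hitting the clip; the exact numbers depend entirely on the transfer curve, which is why a shot test on a waveform is the reliable way to calibrate the LUT.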
  4. I understand. I think. So if you were grading a 10-bit image, anything over 940 or under 64 will be blown out or crushed once it's saved as a Rec. 709 image. It would be nice if they gave you the option to change the scale to IRE; most other monitors I've seen use that scale. Perhaps it's not useful for grading 10-bit images, though. But it does go up to 120 and down to -40, which seems like it would give you some more latitude. Thanks!
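The mapping between those code values and an IRE-style percentage is linear, assuming SMPTE legal range (64 = black, 940 = white in 10-bit):

```python
# Convert a 10-bit legal-range code value to an IRE-style 0-100 scale.
def code_to_ire(cv: int, black: int = 64, white: int = 940) -> float:
    return (cv - black) / (white - black) * 100.0

print(code_to_ire(64))   #   0.0 -> black
print(code_to_ire(502))  #  50.0 -> mid scale
print(code_to_ire(940))  # 100.0 -> reference white
```

The -40 to 120 range comes from analog composite measurement, where sync sits below 0 IRE and superwhite can exceed 100; the 64-940 digital scale only spans black to reference white.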
  5. OK. That makes sense. Do other applications or monitors measure it that way? I've never seen it measured on that scale before. Thanks!
  6. I thought waveform monitors measured IRE, and I thought IRE was measured from -40 to 120. What's the deal? What does that number represent? What scale is it using? Thanks!
  7. Very true. I have a 5D Mk II, so those are the files I'm used to working with. So when you say 1600 is just 800 with 1 EV of gain, does that mean it's no different from adding the gain in post? I've read something like that but didn't know if it was true, because if that WERE the case, there would never be any reason to change your ISO from 800. Good to know about the 5D Mk III raw. I'll bet the Mark II and Mark III CR2 files have more dynamic range than the Magic Lantern raw.
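A toy illustration of the question (whether a given camera actually behaves this way is exactly what's being asked; the sensor counts below are made up): if the gain is applied before the values are clipped to the recording container, the top stop of highlights is lost compared with applying the same gain in post.

```python
import numpy as np

raw_iso800 = np.array([100, 800, 6400, 12000])  # hypothetical linear 14-bit counts
FULL_WELL = 16383                               # 14-bit clip point

in_camera = np.clip(raw_iso800 * 2, 0, FULL_WELL)   # 1 EV gain, then clip
in_post   = np.clip(raw_iso800, 0, FULL_WELL) * 2   # record at 800, push later

print(in_camera)  # [  200  1600 12800 16383] -- brightest value clips
print(in_post)    # [  200  1600 12800 24000] -- highlight survives the push
```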
  8. Right. I knew it was advertised at 13 stops. It's interesting, because if I'm reading this right, this chart would seem to indicate that best practice is to shoot at 1600 ASA in brighter light, because you'd get more latitude in your brighter areas. Still doesn't answer how this compares to Canon CR2 raw files, though. Hmmm. Thanks!
  9. Which is why I specifically asked about raw stills in the title and body of my post. Actually, it would be a folder full of DNG files. They are similar because they are both raw image file formats that can be opened and manipulated with Adobe Camera Raw. I was asking about the comparison in dynamic range, which should be an easy correlation to make.
  10. I've shot a lot of stills and have some experience with Canon raw files. How do these compare with Blackmagic raw video? The stills are higher resolution, obviously, but is the dynamic range comparable? Does Blackmagic raw have better dynamic range? Please advise. Thanks!
  11. Ok, so I'm a relative noob to cinematography in the sense that I've basically only ever shot with the 5D Mk II and other cameras that record in h.264 formats. I'm interested to see how far I can push better footage: 4:4:4 and 4:2:2. So I went on Kinefinity's site and downloaded some of their test footage. Correct me if any of the following seems wrong; I'm just trying to figure things out.

      I brought the raw CineForm .mov footage into After Effects. In case it matters, I'm specifically using this clip: https://www.dropbox.com/sh/r8a92kcey52p7ql/AABEGqhWEQjugWrYbHFN4dJxa?dl=0&preview=BLN-0001-028-A1-5426.zip I set the project to 16 bit with the working space set to "none". I realize the footage isn't 16 bit, but that should give me the color leeway I need to make sure I don't lose any information, which would happen if I set the project to 8 bit (right?).

      OK, so I have my raw footage. Raw footage is by its nature 4:4:4, right? Chroma subsampling is always a result of the compression, correct? You'd never get a subsampled image from the sensor, would you? I bring the clip into After Effects, drop it into a timeline, and export it three different ways: a ProRes 4444 (export depth set to trillions of colors), a ProRes 422 (trillions of colors), and an Apple Intermediate Codec file, which I've read is a 4:2:0 codec (millions of colors). Then I import these three clips into my timeline, so now I have four tracks, each one the same shot:

      1. My original file, an uncompressed CineForm .mov
      2. An Apple ProRes 4444 (hopefully this won't look much, if any, different from the source file)
      3. An Apple ProRes 422
      4. An Apple Intermediate Codec file (4:2:0, I believe)

      When I start pushing these around, the only difference I notice is that in the 4:2:0 shot the reds seem a little more saturated, and some colors are slightly shifted (more green, it seems). When I adjust the exposure, though, it doesn't seem to give me any more latitude in the car headlights in the raw footage than it does in the 4:2:0 footage. When I crank up the saturation, the 4:2:0 footage does seem to be a little blockier than the raw footage. But overall the difference between 4:4:4 and 4:2:0 seems pretty negligible, and I know that can't be right. The difference between editing a QuickTime movie shot on my 5D and editing a raw photo is night and day, and I have to imagine that with a 14-16 stop dynamic range camera like the Kinefinity 6K the difference would be even greater.

      The only reason I used After Effects is that I know it pretty well and it was the only software I have that would read all the different formats (Resolve wouldn't read the Apple Intermediate Codec, I couldn't export an h.264 at 6K, and I didn't want the variables introduced if I resized anything). So what's going on? What am I doing wrong? Is there a better way to test this? Please advise. Thanks!! Evan
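One way to take the codecs (and their other variables, like compression) out of the test is to simulate 4:2:0 subsampling directly on a single frame. A numpy/Pillow sketch, where "frame.png" stands in for any exported still; the decimation is deliberately naive:

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("frame.png").convert("RGB"), dtype=float) / 255.0
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Rec. 709 luma and chroma-difference channels
y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
cb = (b - y) / 1.8556
cr = (r - y) / 1.5748

def to_420(c):
    """Decimate chroma 2x in both axes, then nearest-neighbor upsample back."""
    small = c[0::2, 0::2]  # real codecs filter first; this sketch just decimates
    big = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    return big[: c.shape[0], : c.shape[1]]

cb, cr = to_420(cb), to_420(cr)

# Rebuild RGB from full-res luma + subsampled chroma
r2 = y + 1.5748 * cr
b2 = y + 1.8556 * cb
g2 = (y - 0.2126 * r2 - 0.0722 * b2) / 0.7152
out = np.clip(np.stack([r2, g2, b2], axis=-1), 0, 1)

Image.fromarray((out * 255).astype(np.uint8)).save("frame_420.png")
```

Pushing saturation hard on both frames should make the 2x2 chroma blocks visible. Exposure latitude, by contrast, lives in bit depth and the transfer curve rather than in chroma resolution, so subsampling alone wouldn't be expected to change what happens in the headlights.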
  12. And I suppose the more flatly and evenly your cards are lit, the more accurate your test will be.
  13. That's a good thought. Will give it a shot! Thanks!
  14. Awesome! Thanks for your expertise, David! Will give it a shot. Cheers
  15. So let's say I'm an individual who doesn't have much of a budget (I am, and I don't) for expensive testing equipment, etc. Is there a fairly accurate way to measure the dynamic range of a camera? I did a little googling and wasn't too satisfied with the answers. I feel like there might be some way to do it by exposing a gray card properly and then exposing up and down to see when it blows out or loses detail... but I'm no expert at all. It SEEMS like there might be some kind of low-tech solution like that, and I just want to make sure I'm not missing it. Any suggestions?
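A rough version of that low-tech idea, assuming one still per one-stop step, treating "blown out" as the patch mean pinning near the top of the range and "lost" as the mean no longer clearing the noise. File names, patch size, and thresholds are all assumptions:

```python
import numpy as np
from PIL import Image

def patch_stats(path, size=100):
    """Mean and standard deviation of a center patch of the gray card."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    h, w = img.shape
    p = img[h//2 - size//2 : h//2 + size//2, w//2 - size//2 : w//2 + size//2]
    return p.mean(), p.std()

for stop in range(-8, 9):  # e.g. graycard_-3.png ... graycard_+3.png
    try:
        mean, std = patch_stats(f"graycard_{stop:+d}.png")
    except FileNotFoundError:
        continue
    clipped = mean > 250      # near the top of 8-bit -> highlights gone
    noisy   = mean < 3 * std  # signal no longer clears the noise floor
    print(f"{stop:+d} stops: mean {mean:6.1f}, std {std:5.1f}"
          + ("  <- clipped" if clipped else "")
          + ("  <- lost in noise" if noisy else ""))
```

The span of stops between the first "clipped" frame and the first "lost in noise" frame gives a crude usable-dynamic-range estimate; flat, even lighting on the card (as noted above) keeps the patch statistics honest.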