
Dennis Schaller

Everything posted by Dennis Schaller

  1. Yeah, I already thought about that too... It looks like that's the case on the 7D, but the lines are too small to tell exactly. We shot the One in 2K because that's how we shot our film (we needed slow motion), so we wanted the results to match our settings. There are still some open questions though (and I really don't want to write about faulty test results in my bachelor thesis). So what can I get out of my test shots without relying on wrong results? Questions collected:

     1. Do you need to expose white/black to the maximum when using a color chart? I guess the answer is: if you don't expose black/white to the max, the colors won't be as saturated, but the direction they point on the vectorscope stays the same?
     2. Can you tweak saturation/master gamma without distorting the results of a color chart shot? You should be able to, right? Because saturation/master gamma settings influence every color the same way?
     3. Do you have to keep the same distance when using a resolution/multiburst chart? Do different focal lengths make the results of a resolution/multiburst chart unusable? My answer would be: as long as the image projected onto the sensor has the same size, the distance doesn't matter? (But will a long focal length, or the same focal length with a crop factor, resolve the image as well as being closer to the chart?)
     4. When testing rolling shutter, if you change the distance but your framing stays the same (due to focal length and/or crop factor), does it change anything? (It shouldn't, because the image on the sensor will be the same size and rolling shutter just depends on the readout time of the sensor, right? Or does the increasing distance reduce rolling shutter even though your framing stays the same?)

     Would be really great if you could answer those too, David. Thanks for the answers you already gave; it's always good for me to have some profound answers, because I'm really into all this technology stuff and it's fun to think it through in a way that makes sense :)
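A quick back-of-the-envelope sketch of how I picture question 4 (the readout time, frame width and subject speed below are assumed numbers, not measurements from our test): the skew only depends on how far the subject travels across the frame during the readout, so with identical framing the shooting distance and focal length cancel out.

```python
# Illustrative sketch only: readout time, frame width and subject speed are
# assumed numbers, not measured values from any of the cameras in the test.
READOUT_TIME_S = 1 / 60      # assumed time to read the whole sensor top to bottom
FRAME_WIDTH_PX = 4096        # assumed recording width

def skew_px(subject_speed_m_s, framed_width_m):
    """Horizontal rolling-shutter skew in pixels for a subject moving
    parallel to the sensor.

    framed_width_m is how many metres of the scene the frame spans at the
    subject's distance. Identical framing means identical framed width,
    so distance and focal length drop out of the result.
    """
    px_per_metre = FRAME_WIDTH_PX / framed_width_m
    return subject_speed_m_s * READOUT_TIME_S * px_per_metre

# A subject moving at 10 m/s, framed so the image spans 8 m of the scene:
print(skew_px(10, 8))    # ~85 px of skew
# Double the distance AND double the focal length (same framing): unchanged.
print(skew_px(10, 8))
# Double the distance WITHOUT reframing (frame now spans 16 m): skew halves.
print(skew_px(10, 16))   # ~43 px
```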
  2. Hi, I didn't know where else to put this, so I hope it's OK here. I have some questions regarding camera testing with test charts. We actually did some tests yesterday (and I guess we made a lot of mistakes, since it was our first time and no one had a plan :wacko:). We had a test room in a big rental house, so the setup was OK from the start: two tungsten lights on both sides of the charts, and I checked that the charts were lit evenly, so that should've been fine. We white balanced a C500 so the white sat right in the middle of the vectorscope (3100K), so that should've been fine too. We used the following three test charts:

     I suppose we made some mistakes in some of the tests, so I'd like to know which of our results are usable.

     Cameras we tested: Nikon D3100 (low-budget DSLR with 1080p video), Canon 7D, Canon C500, Red One (and, for fun, a GoPro Hero2 and Hero3, but I guess those results are unusable anyway because the charts were too small in the frame due to the fisheye). Lenses: cheap kit lenses for the DSLRs (Nikon 18-55mm, Canon 18-135mm), Arri/Zeiss High Speed 85mm T1.3 for the C500 and the One.

     1. For color testing, I guess all you have to do is get the white balance right (if your scene is lit well enough), right? So we shot that chart with all of our cameras without watching the framing (should we have?), AND we actually didn't look at a waveform monitor to get the white to clip and the black to sit at black (that could be a problem, right?). DSLRs with some of the standard picture profiles, C500 in log mode, the One delivers raw of course. Now if I open the files and crop the image so all you see is the chart, I think I get some nice values on the vectorscope (but are they usable?). Also, can I just add saturation or tweak the master gamma to make the vectorscope easier to read, or will that distort the colors? (As you can imagine, it's quite centered in the C500's log picture.)

     2. For testing dynamic range properly, you have to set the white in the middle to just clip and the black to sit at the bottom of the waveform, I know that. But what can you actually tell from a 9-stop dynamic range chart when all the cameras easily capture that DR? Does it even make sense? (We set our exposure wrong on those shots, so I guess they're unusable anyway.)

     3. I just wanted to use the third chart to show line skipping in the DSLR images (because there are some diagonal lines, of course), but I guess it would be better to get some bigger diagonal lines, because the images I got out of the test aren't sharp enough, I think, to show line skipping. Another thing about this chart: I guess it's important to frame it to fill the whole image, right? (Because you can't compare anything otherwise?) In my results the C500 easily resolves all of the lines. Then we had to move the tripod because of the One's crop factor in 2K mode, so we roughly doubled the distance to the chart and recorded a shot with the One (I guess changing the distance is a mistake for that test, even though you have the same framing?); the results are quite bad, it only resolves the lines up to 6... Then with the DSLRs we had quite different focal lengths (which could be a mistake too?), so we moved again to get the same framing. In my results the 7D shows some nice moiré (at least that result is usable, I guess) and resolves the lines up to 5/6 as distinct diagonal lines. The D3100 is so hard to focus that those results are unusable anyway, I guess.

     OK, so what can I get out of my test shots without relying on wrong results? Questions collected:

     1. Do you need to watch the framing when using a color chart?
     2. Do you need to expose white/black to the maximum when using a color chart?
     3. Can you tweak saturation/master gamma without distorting the results of a color chart shot?
     4. Can you get any results out of shooting a 9-stop DR chart when your camera resolves those stops anyway? (Except for results like how the ISO changes your grays or something like that.)
     5. Do you have to watch the framing when using a resolution/multiburst chart? (I guess so.)
     6. Do you have to keep the same distance when using a resolution/multiburst chart?
     7. Do different focal lengths make the results of a resolution/multiburst chart unusable?
     8. When testing rolling shutter, if you change the distance but your framing stays the same (due to focal length and/or crop factor), does it change anything? (It shouldn't, because the image on the sensor will be the same size and rolling shutter just depends on the readout time of the sensor, right? Or does the increasing distance reduce rolling shutter even though your framing stays the same?)

     Wow, that's quite a lot. I hope someone takes the time to answer, thanks in advance.
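A small geometry sketch of how I picture questions 5-7 (thin-lens approximation; the chart and sensor widths below are assumptions for illustration, not the actual sizes from our test): the distance at which a chart fills the frame scales with focal length over sensor width, so different sensors or crop windows simply need different distances to reproduce the same framing.

```python
def distance_to_fill_frame(chart_width_m, focal_length_mm, sensor_width_mm):
    """Approximate camera-to-chart distance (in metres) at which a chart of the
    given width exactly fills the sensor width (thin lens, distance >> focal
    length): chart_width / distance = sensor_width / focal_length."""
    return chart_width_m * focal_length_mm / sensor_width_mm

CHART_WIDTH_M = 1.0   # assumed 1 m wide chart

# 85 mm lens on a Super 35-ish sensor (~24.6 mm wide, assumed):
print(distance_to_fill_frame(CHART_WIDTH_M, 85, 24.6))   # ~3.5 m
# Same lens on a 2K crop window roughly half as wide (assumed ~12.3 mm):
print(distance_to_fill_frame(CHART_WIDTH_M, 85, 12.3))   # ~6.9 m, i.e. about double
```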
  3. So out of a 4096 x 2160 pixel raw file you'll get a 4096 x 2160 pixel RGB video file? What about those 75% then (what is that 75% resolution loss referring to)?
  4. Hi, just one quick question: I normally read that a 4K raw file will only have around 75% of its resolution after debayering and converting to a video file. Now I guess Red's trying to confuse me. In this article they're saying that "A 4K Bayer sensor is capable of producing full 4K 4:4:4 RGB files". In context they're talking about being able to produce 4:4:4 video and apply 4:2:0 chroma subsampling afterwards, but if you only look at that part of the sentence, it is actually wrong, isn't it? Thanks in advance.
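This is only how I currently understand it (so an assumption, not a definitive answer): debayering doesn't shrink the pixel count, it fills in the two missing colour values per pixel by interpolation, which is why the file stays 4096 x 2160 while the measurable resolution on a chart comes out lower than the pixel count suggests. A very naive box-filter demosaic shows the shape staying the same:

```python
import numpy as np

def box3(a):
    """Sum over each pixel's 3x3 neighbourhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def demosaic_naive(mosaic):
    """Very naive demosaic of an RGGB mosaic: every output pixel gets an R, G
    and B value by averaging the known samples of that colour in its 3x3
    neighbourhood. The output pixel count equals the sensor photosite count."""
    H, W = mosaic.shape
    masks = np.zeros((3, H, W))
    masks[0, 0::2, 0::2] = 1   # R photosites (assuming an RGGB layout)
    masks[1, 0::2, 1::2] = 1   # G photosites on red rows
    masks[1, 1::2, 0::2] = 1   # G photosites on blue rows
    masks[2, 1::2, 1::2] = 1   # B photosites
    rgb = np.empty((H, W, 3))
    for c in range(3):
        rgb[..., c] = box3(mosaic * masks[c]) / np.maximum(box3(masks[c]), 1)
    return rgb

bayer = np.random.rand(2160, 4096)        # stand-in for a 4K raw frame
print(demosaic_naive(bayer).shape)        # (2160, 4096, 3) -- still "full 4K"
```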
  5. Just to let you know: I guess the answer I was expecting was "Log data isn't stored in a fundamentally different way; the bits are just used differently because the LUTs assign them differently." I had never heard of LUTs, so it was very unclear to me what log actually does to manipulate the data. Thanks anyway, you two gave me some good points too :)
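Since I had never dealt with LUTs: as far as I understand now, a 1D LUT really is just a table with one stored output value per possible input code value. A tiny sketch with a made-up curve (not any real camera or display LUT):

```python
import numpy as np

# Toy 1D LUT: a made-up 10-bit "log to display" curve, not a real LUT.
in_codes = np.arange(1024)                        # every possible 10-bit code value
lut = np.round(1023 * (in_codes / 1023) ** 2.2).astype(int)

log_frame = np.random.randint(0, 1024, size=(1080, 1920))
display_frame = lut[log_frame]                    # applying a 1D LUT = array lookup
print(display_frame.shape, lut[512])              # mid-scale code gets remapped
```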
  6. OK, that's good to know, David. I thought raw files were so much bigger by default that it wouldn't matter what settings you're recording in, raw would be bigger than log. But what you're saying makes more sense. Good to know that raw files aren't necessarily bigger at the same bit depth. But that actually sounds a bit like compressing data in log wouldn't really pay off, because raw files will be smaller? That shouldn't be the case, should it?

     Phil, no, I wouldn't say that. I know those are different things and that raw doesn't mean a big file size and log doesn't mean a small file size. But as far as I see things, one of the benefits of recording log should be that you're getting smaller files (and that you can view them, and that log maintains nearly the same dynamic range that raw files have).

     David, I read that page of Art's article again, but it actually looks like there's an error in there somewhere too (or I'm not getting it right). His example with the candles: in raw you count every lit candle in the room, like 1 2 3 4 5 6 7 ..., and in log you only count 1 2 4 8 16. While it makes sense that counting only the powers of 2 will result in less data to be saved, I don't think that's all you need to make your picture. You need those small steps in brightness difference, and saving only the powers of 2 is like having full stops of brightness difference between each step.
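To convince myself that log isn't just "counting the powers of two", I tried a generic log2 curve (an assumption, not Art's or any manufacturer's actual formula) and counted how many 10-bit code values each stop of 14-bit linear data ends up with. The in-between values are still there; the available codes are just spread so that each of the upper stops gets roughly the same number, instead of the top stop taking half of all values the way it does in linear:

```python
import numpy as np

def log_encode(linear, in_max=16383, out_max=1023):
    """Generic log2 curve from 14-bit linear to 10-bit codes (illustrative only)."""
    return np.round(np.log2(np.clip(linear, 1, in_max)) / np.log2(in_max) * out_max)

lin = np.arange(1, 16384)                  # every possible 14-bit linear value
codes = log_encode(lin).astype(int)

for stop in range(14):
    lo, hi = 2 ** stop, 2 ** (stop + 1) - 1
    n = len(np.unique(codes[(lin >= lo) & (lin <= hi)]))
    print(f"stop {stop:2d} (linear {lo:5d}-{hi:5d}): {n:3d} distinct 10-bit codes")
# The bottom stops only contain a handful of linear values to begin with, but
# every stop from the 7th upwards gets roughly the same ~73 codes.
```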
  7. OK David, you're right, I messed up quite a lot in my post. I was wrong about raw saving RGB data for every pixel, of course. What it really does is just save a value for the light's intensity for every pixel, and those pixels are either red, green or blue. Got it up to there. Then you're saying that raw files are smaller than a debayered image. A question regarding this point: is there a format that outputs raw debayered video without a log/Rec709 curve? No, right? You can either output raw, or log/Rec709 compressed material, which will be smaller in file size than raw. (Just for me to clarify.)

     And that is the point where I need some more information. What exactly happens in the "apply a log gamma curve" step? I guess the curve compresses the raw values, right? So when you still have only one value (representing the light intensity) for every pixel, you compress those values logarithmically. My big question is: why are raw files bigger than log files? How is the data stored in raw files different from the data stored in log/Rec709 format files?

     To get back to my example: data in raw files is stored as pixel coordinates plus the light intensity that hit that pixel (one entry for every pixel there is). Since these are digital values, it all has to be 1s and 0s. In a 2K Alexa image (measuring 2880 x 1620 pixels) you'd need at least 12 bits to store the X coordinate of a pixel, 11 bits for the Y coordinate, plus the 14 bits or whatever it is that you're recording the brightness values in.

     Example (12-bit X coordinate, 11-bit Y coordinate, 14-bit brightness):
     X: 2880  -> 101101000000
     Y: 1620  -> 11001010100
     B: 16383 -> 11111111111111

     So that combination of 1s and 0s would tell the pixel in the bottom right corner to be maximum white. Now how come log files are smaller in file size? What is done to those ones and zeros to compress them?

     Oh, and Phil: yeah, I know the topic is quite confusing because some cameras even compress raw files with a log curve etc. I think I understand it enough to get the basics and to know when to use what, and why it looks like it does, but I'm really interested in those geeky things that maybe no one other than me thinks about. If I understand the real basics, it makes everything much clearer for me, so I try to get to the bottom of everything in my thesis as well.
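While thinking about my own example I realised the coordinates probably never need to be written into the file at all, because the position of a sample is implied by its index in the stream once the width and height are known from the header. A minimal sketch of that idea (real raw formats differ and usually add compression on top, so this is only the rough principle as I understand it):

```python
import numpy as np

H, W, BITS = 1620, 2880, 14     # the 2K Alexa-style dimensions from my example

# One intensity value per photosite, no X/Y stored per pixel -- the position
# follows from the index once width and height are known from the header.
frame = np.random.randint(0, 2 ** BITS, size=(H, W), dtype=np.uint16)
flat = frame.ravel()            # the order the samples would sit in the file

k = 123456                      # an arbitrary sample index in the stream
row, col = divmod(k, W)
print((row, col), flat[k] == frame[row, col])   # index -> coordinates, for free

# Rough uncompressed size at 14 bits per sample, tightly packed:
print(H * W * BITS / 8 / 1e6, "MB per frame")   # ~8.2 MB
```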
  8. Hi guys, in the Alexa's tech specs it says: "Adjustable range from -8 to +8 CC. 1 CC corresponds to 035 Kodak CC values or 1/8 Rosco values." when talking about magenta/green shifting for white balance. I'm quite inexperienced with CC filters and cameras in that range, so I don't know what those values should tell me. Can someone give me a physical background on those Kodak CC or Rosco values? What exactly do those numbers tell me? I found this source explaining the CCXXC notation: http://www.olympusmicro.com/primer/photomicrography/ccfilters/cccyan.html So I guess the Alexa's range of correcting green/magenta looks like this: 8 * 035 = 28 (there are no 28 density filters I guess, so let's say 30).

     Alexa's range: CC30M <------- no correction -------> CC30G

     Is this correct? (I'm still interested in the Rosco notation but can't find any information on that topic.) Thanks in advance.
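Just restating my own arithmetic, with the big assumption that "035" in the spec means a Kodak CC value of 3.5 (i.e. 0.035 density) per CC step; I haven't been able to confirm that reading:

```python
STEPS = 8                 # Alexa adjustable range is -8 .. +8 CC
KODAK_CC_PER_STEP = 3.5   # assumption: "035" = CC3.5 = 0.035 density per step

print(STEPS * KODAK_CC_PER_STEP)   # 28 -> nearest standard filter value: CC30
```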
  9. Hey guys, I'm writing my bachelor thesis to finish my studies here in Germany. My topic is "production of professional video material to compare different cameras". I read an article by Art Adams over at provideocoalition about the differences between raw and log files. I've got a few questions going a bit deeper into the topic, though. Art wrote the following in his article:

     16,384: totally saturated sensor (maximum white)
     8,192-16,383: First stop down from maximum white
     4,096-8,191: Second stop down from maximum white
     2,048-4,095: Third stop down from maximum white
     1,024-2,047: Fourth stop down from maximum white
     512-1,023: Fifth stop down from maximum white
     256-511: Sixth stop down from maximum white
     128-255: Seventh stop down from maximum white
     64-127: Eighth stop down from maximum white
     32-63: Ninth stop down from maximum white
     16-31: Tenth stop down from maximum white
     8-15: Eleventh stop down from maximum white
     4-7: Twelfth stop down from maximum white
     2-6: Thirteenth stop down from maximum white
     1-2: Fourteenth stop down from maximum white

     So he's talking about the digital values and the brightness stops they represent in raw files. Then in log mode the data is compressed using logarithmic scales. This is how I imagine it working: for every pixel in the image, data in raw is stored like this:

     Red:   10111011100000 (12000 in binary)
     Blue:  10011100010000 (10000 in binary)
     Green: 01111101000000 (8000 in binary)

     I don't know exactly what color this would be, but just to have some example values. So in raw the file gets quite big, because those values for every pixel are quite a lot of data (at least in 4:4:4 for every pixel). First question: is this roughly how the data is stored? (There will be some more metadata in the file, but I guess this is how the pixel color data is stored?) Second question: how are those values compressed in log? Everyone is always talking about brightness being compressed in log, but I guess what is actually meant are the brightness values for each color channel separately (as in my example above). Since in log the data has to be stored in binary values too, how are the log values written exactly? Hope someone is able to help me. I may be asking some more in-detail questions in this thread over the next few weeks while I'm writing. Thanks in advance.
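This is only how I picture it, using a generic log2 curve rather than any camera's actual transfer function: each sample stays a plain unsigned integer written in binary, and the log curve just changes which integer a given amount of light becomes (and how many bits are needed per sample):

```python
import math

def log_encode(linear, in_bits=14, out_bits=10):
    """Generic log2 mapping from linear sensor values to smaller code values
    (illustrative only, not a real camera curve)."""
    in_max, out_max = 2 ** in_bits - 1, 2 ** out_bits - 1
    return round(math.log2(max(linear, 1)) / math.log2(in_max) * out_max)

for linear in (12000, 10000, 8000):           # the example values from my post
    code = log_encode(linear)
    print(f"linear {linear:5d} = {linear:014b}  ->  log code {code:4d} = {code:010b}")
```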