
Asker Mammadov

Basic Member
  • Posts

    9
  • Joined

  • Last visited

Profile Information

  • Occupation
    Other
  • Location
    Italy
  1. This question has been asked before, but that thread is seven years old, and much more information has become available to the public since then, so better explanations should now be possible. I'd like to take ARRI's image processing pipeline as an example. At the native ISO of 800 there are 7.8 stops available above 18% diffuse reflectance, so 18% x 2^7.8 ≈ 4011% diffuse reflectance is the clipping point.

     As far as I understand, cinema camera sensors (but let's stick with ARRI) record light linearly, and the analog signal is converted linearly to 16-bit digital values (a linear ADC, i.e. the digital value is proportional to the voltage, gamma = 1); only then does the conversion from 16-bit linear values to 12-bit ARRI LogC take place. To keep things simple, let's also assume all of the 16-bit data is usable, ignoring noise and other limiting factors.

     Now let's say I expose an 18% grey card so that it ends up at middle grey in 12-bit LogC (I can't remember exactly what IRE ARRI recommends, but let's assume 40% IRE). This is where I start getting confused. On a linear ADC, exposing correctly means the 18% grey card sits at 18/4011 ≈ 0.449% of full scale, so 18%/4011% x 65535 ≈ 294. That means there are only about 294 tones from pitch black to the 18% grey card in the original linear data the sensor/ADC recorded. Yet once the 18% grey card is converted from ≈0.449% linear IRE to 40% IRE in 12-bit LogC, there are 0.40 x 4095 ≈ 1638 code values from pitch black to the 18% grey card.

     Where are all these extra tones coming from? Is there interpolation happening here? The whole point of log recording is to allocate more code values to the darker areas up to 18% grey, where we are more sensitive. Yet the logarithmic values are just a redistribution of the linear 16-bit values, if I understand correctly, so there were never more than ~294 tones of true information recorded between pitch black and 18% grey. (A rough numerical sketch of this is included after the post list below.)

     The only other thing I can think of is that when we expose the 18% grey card to land at 40% IRE in 12-bit LogC, we are actually OVEREXPOSING in linear terms, so that the 18% grey card is read as 1638/65535 x 4011% ≈ 100% diffuse reflectance (about 2.5% linear IRE), which would give 1638 tones of true information, because 100%/4011% x 65535 ≈ 1638 on the 16-bit linear scale.

     I hope I explained my conundrum clearly. If not, feel free to ask and I'll try to explain again. Thanks.

     PS: I know this doesn't help in any way with being a great cinematographer. I just got curious, haha.
  2. I see. So basically, it was all constrained by the print format that was standard at the time, and to avoid the hassle of converting from one format to another just to get it ready for projection in 4-perf 35mm, they simply stuck with shooting on 4-perf 35mm.
  3. Interesting. I wonder why we never ended up using "VistaVision" to refer to cinema cameras with full-frame/large-format sensors, and just adopted the photography term instead. Anyway, thanks for your answers.
  4. I decided to look into the history of film stocks and their formats, and I realized that no actual full-frame motion picture stock was ever developed or used for cinema back in those days. Full-frame as we cinema people interpret it today, from what I understand, came from digital manufacturers making sensors based on the frame dimensions of 135 format (35mm) still photography film:

     - the negative is ~36mm wide and ~24mm tall (24mm is approximately the actual width of the negative on Super 35 film, so basically it's as if someone rotated Super 35 by 90 degrees), and
     - the film is pulled horizontally rather than vertically, which is what allows the ~36mm-wide frame.

     Unless I'm missing something, why was full frame (not the full 35mm gate, but the actual 36 x 24 negative) never used? It was quite surprising to see that motion picture format developers decided to jump straight from Super 35 into medium format. The only thing that might come close to it is VistaVision, but I'm not 100% sure that actually qualifies as full frame. Thanks
  5. I apologize in advance if this isn't the correct sub-forum for this question; it didn't seem to fit any of the others, so I put it here. Feel free to move it if it doesn't belong. Lately, I've been watching all kinds of footage from different camera brands, ranging from the most budget-friendly filmmaking cameras all the way up to the premium stuff. And while there's no doubt we're living in a golden age of cameras (in terms of accessibility to newcomers), at one point I came across a comment where someone used the term "motion cadence" and said that high-end cameras have that little something that stands out against the budget versions. I never knew the word for it until then, and I have noticed that it really does add a pleasing quality to the motion of the image (all assuming 24fps and a 180-degree shutter, of course). What is the reason for this discrepancy between manufacturers? The only thing I could think of was the type of shutter used in digital cameras, but that's about it. Thanks
  6. The general consensus is that this is the future. Do you guys believe cinematographers will also have to become decent at it, or will that stuff be left to the VFX people?
  7. This is exactly what I've been thinking. I really need to get a reflector at the very least, to bounce some ambient light back onto whatever the focus is, especially when shooting the subject against such a bright background. Thanks for your advice. Enjoy your Christmas!
  8. I shot this film completely solo, using only natural light (I have no lights; I'm just someone who recently started filmmaking as a hobby), on a Lumix GH5 with a Sigma 17-50mm f/2.8 lens. I'm looking mainly for a cinematography critique, but even storytelling critiques are welcome. I'd also like to mention that YouTube compression affects the luminance of some scenes. I'm generally decently versed in color spaces, and I've re-uploaded multiple times trying different versions (like changing the data levels between video and full range for an sRGB color space; see the sketch of that mapping below), but unfortunately nothing helped. The only thing I couldn't do was upload in 4K, as I have a dinky laptop, limited storage space, and a slow internet connection. Looking forward to your opinions! Merry Christmas and Happy New Year
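
Regarding the LogC question in post 1 above, here is a minimal numerical sketch of the tone-counting argument, in Python. It assumes the post's own normalization (a 16-bit linear ADC whose full scale corresponds to ~4011% diffuse reflectance) and uses curve constants commonly published for the ALEXA LogC (v3) curve at EI 800; the real in-camera processing is certainly more involved, so treat the numbers as illustrative rather than as ARRI's actual pipeline.

    import math

    # Commonly published ALEXA LogC (v3) curve constants for EI 800;
    # illustrative only, not ARRI's actual in-camera processing.
    CUT, A, B = 0.010591, 5.555556, 0.052272
    C, D = 0.247190, 0.385537
    E, F = 5.367655, 0.092809

    def logc_encode(x):
        """Relative scene exposure (0.18 = 18% grey) -> 0..1 LogC signal."""
        return C * math.log10(A * x + B) + D if x > CUT else E * x + F

    CLIP = 0.18 * 2 ** 7.8                      # ~40.11, i.e. ~4011% reflectance
    GREY16 = round(0.18 / CLIP * 65535)         # 16-bit linear code for 18% grey
    print("16-bit linear code at 18% grey:", GREY16)              # -> 294

    grey_log = logc_encode(0.18)
    print("LogC signal at 18% grey: %.3f (12-bit code %d)"
          % (grey_log, round(grey_log * 4095)))                   # -> 0.391, 1601

    # Map every 16-bit linear code below grey through the curve and count
    # how many *distinct* 12-bit LogC codes actually get populated.
    populated = {round(logc_encode(n / 65535 * CLIP) * 4095) for n in range(GREY16 + 1)}
    span = round(grey_log * 4095) - round(logc_encode(0.0) * 4095)
    print("12-bit LogC codes spanned below grey:", span)          # -> ~1221
    print("...of which are actually populated:", len(populated))  # -> 295

Running this gives roughly 294 for the 16-bit linear code at 18% grey, a LogC signal of about 0.391 (12-bit code ~1601) for that same grey, and only ~295 distinct 12-bit log codes actually populated below grey out of the ~1221 that the encoding spans: the log curve redistributes code values, but it cannot add tonal information the linear ADC never captured.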
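
And regarding the data-levels experiment mentioned in post 8: for reference, the usual 8-bit "video" (limited, 16-235) vs "full" (0-255) range conversion for luma is just a linear rescale. A minimal sketch follows; it is not tied to any particular NLE or to how YouTube actually handles range flags.

    # Minimal sketch of the standard 8-bit limited ("video", 16-235) vs
    # full (0-255) range luma mapping; actual behaviour depends on the NLE,
    # the encoder's range flag, and the player.
    def limited_to_full(y):
        """Expand an 8-bit limited-range luma code (16-235) to full range (0-255)."""
        return max(0, min(255, round((y - 16) * 255 / 219)))

    def full_to_limited(y):
        """Compress an 8-bit full-range luma code (0-255) to limited range (16-235)."""
        return round(y * 219 / 255 + 16)

    print(limited_to_full(16), limited_to_full(235))   # -> 0 255
    print(full_to_limited(0), full_to_limited(255))    # -> 16 235

If a file flagged one way is interpreted the other way, this scale and offset effectively gets applied (or undone) an extra time, which is the typical cause of washed-out or crushed-looking uploads.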