Showing results for tags 'linear'.

Found 3 results

  1. This question has already been asked once, but that thread was seven years ago, and since then much more information has become available to the public, so better explanations should be possible by now.

Take ARRI's image processing pipeline as the example. At the native ISO of 800 there are 7.8 stops available above 18% diffuse reflectance, so 18% x 2^7.8 ≈ 4011% diffuse reflectance is the clipping point. As far as I understand, all cinema camera sensors (but let's stick with ARRI) record light linearly and linearly convert the analog signal to 16-bit digital values (linear ADC), the digital value being proportional to the voltage generated (i.e. gamma = 1), and only then does the conversion from 16-bit linear values to 12-bit ARRI LogC take place. To keep things simple, let's also assume all of the 16-bit data is usable, ignoring noise and other limiting factors.

Now say I want to expose an 18% gray card so that it ends up as middle gray in 12-bit LogC (I can't remember exactly what IRE ARRI recommends, but let's assume 40% IRE). This is where I start getting confused. On a linear ADC, assuming we expose correctly so that the 18% gray card sits at about 0.449% of the linear scale (because 18 / 4011 ≈ 0.449%):

18% / 4011% x 65535 ≈ 294

That means there are only 294 tones from pitch black to the 18% gray card in the original data the linear sensor/ADC recorded. Yet once the 18% gray card is converted from 0.449% of the linear scale to 40% IRE in 12-bit log, there are 1638 tones from pitch black to the gray card. Where are all these extra tones coming from? Is there interpolation happening here? The whole point of log recording is to allocate more code values to the darker areas, up to 18% gray, where we are more sensitive. Yet if I understand correctly, the logarithmic values are just a redistribution of the linear 16-bit values, so there were never more than 294 tones of true information recorded between pitch black and 18% gray (see the first sketch after this list).

The only other thing I can think of is that, when we expose the 18% gray card to land at 40% IRE in 12-bit LogC, we are actually overexposing the linear readings, so that the 18% gray card is read as 1638 / 65535 x 4011% ≈ 100% diffuse reflectance (about 2.5% of the linear scale). That would give 1638 tones of true information, because 100% / 4011% x 65535 ≈ 1638 on the 16-bit linear scale.

I hope I explained my conundrum clearly. If not, feel free to ask and I'll try to explain again. Thanks.

PS: I know this doesn't help in any way with being a great cinematographer. I just got curious, haha.
  2. Hello everybody. Reading some manuals, I see that they talk about lenses with a linear focus mechanism and lenses with a non-linear focus mechanism. What is the difference between them? Thanks in advance for the help.
  3. Digital cameras can do some amazing things nowadays, considering where they were even five years ago. One thing I sometimes struggle to understand is how these newer cameras with 13+ stops of dynamic range actually quantize that information in the camera body.

One thing we know from linear A-to-D quantization is that dynamic range is a function of the number of bits of the converter chip: a 14-bit ADC can store, at best (and ignoring noise for the moment), 14 stops of dynamic range. However, once we introduce noise (sensor, transfer charge, ADC, etc.) and linearity errors, there really aren't 14 meaningful stops. I did a lot of research on pipeline ADCs (which I believe are the type used), and the best one I could find, as measured by ENOB (effective number of bits), was the 16-bit ADS5560 from Texas Instruments; it measured an impressive 13.5 bits. If most modern cameras, the Alexa especially, use 14-bit ADCs, how are they deriving 14 stops of dynamic range? I read that the Alexa has a dual-gain architecture, but how do you simultaneously apply different gain settings to an incoming voltage without distorting the signal? (A conceptual sketch of one way the two gain paths could be merged appears after this list.) A pretty good read on this technology is the Andor Technology Learning Academy article on the subject. Call me a little skeptical if you will. Not to pick on RED, but for the longest time they advertised the Mysterium-X sensor as having 13.5 stops (by their own testing). Of course, many of the first sensors went into RED One bodies, which only have 12-bit ADCs. Given that, how were they measuring 13.5 stops in the real world?

Now, with respect to linear-to-log coding: some cameras opt for this conversion before storing the data on memory cards; the Alexa and cameras that use Cineform RAW come to mind. If logarithmic coding is understood to mean that each stop gets an equal number of code values, aren't the camera processors (FPGA/ASIC) merely interpolating like crazy in the low end? Let's compare a 14-stop camera that stores data linearly with one that stores it logarithmically (a per-stop sketch of this comparison also appears after this list). In the 14-bit linear camera, the brightest stop is represented by 8192 code values (16383 down to 8192), the next brightest by 4096 code values (8191 down to 4096), and so on; the darkest stop (-13) is represented by only a value or two (1 or 0). That's not a lot of information to work with. Meanwhile, on the other camera, each of the 14 stops would get about 73 code values (2^10 = 1024, divided equally by 14) if we assume a 14-bit to 10-bit linear-to-log transform. As you can see, the brighter stops are coded more efficiently, because we don't need ~8000 values to see a difference, but the low end gets an excess of code values when there weren't many to begin with.

So I guess my question is: is it better to do straight linear A-to-D coding off the sensor and apply logarithmic operations later, or is it better to do the logarithmic conversion in camera to save bandwidth when recording to memory cards? Panavision's solution, Panalog, illustrates the relationship between linear light values and the logarithmic values after conversion.

On a slightly related note, why do digital camera ADCs have a linear response in the first place? Why can't someone engineer one with a logarithmic response to light, like film? The closest thing I've read about is the hybrid LINLOG Technology at Photon Focus, which seems like a rather hackneyed approach.

If any engineers want to hop in here, I'd be much obliged; likewise anyone with a history of technical knowledge on display here, whether your name is Alan Lasky, Phil Rhodes, or John Sprung. Thanks.
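On the first result's tone-count question, here is a minimal sketch (Python) of the poster's own model: a 16-bit linear encoding that clips 7.8 stops above mid grey, pushed through the LogC encoding curve and quantized to 12 bits. The pipeline model (ideal 16-bit quantization from black to clip, no noise) is the poster's simplification, not ARRI's actual signal chain; the curve parameters are the commonly published LogC V3 values for EI 800.

```python
# Sketch of the poster's model, not ARRI's actual pipeline: map every 16-bit
# linear code at or below mid grey through the LogC curve and count how many
# distinct 12-bit codes come out. Assumes the sensor clips at 0.18 * 2**7.8
# (relative scene linear, with 0.18 = 18% grey) and that the 16-bit range
# covers black..clip linearly, as in the post.

import math

# Commonly published LogC V3 parameters for EI 800 (encoding direction).
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc_encode(x):
    """Relative scene linear (0.18 = mid grey) -> LogC signal in 0..1."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

clip = 0.18 * 2 ** 7.8                      # ~40.11, the post's clipping point
grey_code_16 = round(0.18 / clip * 65535)   # ~294: highest linear code at mid grey

codes_12 = {round(logc_encode(code / 65535 * clip) * 4095)
            for code in range(grey_code_16 + 1)}

print("16-bit linear codes up to mid grey :", grey_code_16 + 1)                  # ~295
print("12-bit LogC code for mid grey      :", round(logc_encode(0.18) * 4095))   # ~1601
print("distinct 12-bit codes actually hit :", len(codes_12))                     # ~295
```

Run as written, this reports that only about 295 of the roughly 1600 twelve-bit codes nominally available below mid grey are ever produced, which is the "redistribution, not new information" conclusion reached in the post; as a side note, mid grey encodes near 39% of full scale (code ~1601) rather than exactly 40% IRE.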
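For the third result's stop-by-stop comparison, a quick sketch of the arithmetic the poster walks through: code values per stop for a 14-bit linear encoding versus an idealised 10-bit log encoding that gives each of 14 stops an equal share. No real camera curve (LogC, Panalog, ...) is modelled; the bit depths and the 14-stop figure are simply the numbers used in the post.

```python
# Code values available per stop: 14-bit linear vs an idealised 10-bit log
# encoding spreading 14 stops evenly. Illustrative arithmetic only.

LINEAR_BITS, LOG_BITS, STOPS = 14, 10, 14

linear_max = 2 ** LINEAR_BITS               # 16384 linear code values
log_codes_per_stop = 2 ** LOG_BITS / STOPS  # ~73 codes for every stop

print(f"{'stop':>6} {'14-bit linear codes':>20} {'10-bit log codes':>17}")
for stop in range(STOPS):                   # stop 0 = brightest, 13 = darkest
    hi = linear_max // 2 ** stop            # top of this stop in linear codes
    lo = linear_max // 2 ** (stop + 1)      # bottom of this stop (half the signal)
    print(f"{-stop:>6} {hi - lo:>20} {log_codes_per_stop:>17.0f}")
```

The linear column halves every stop (8192, 4096, ... down to a code or two in the deepest shadows), while the log column stays flat at about 73, which is the trade-off the post describes: the log encoding spends fewer codes on the highlights and hands out more shadow codes than the linear data ever distinguished.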
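And on the dual-gain question in the same post, a conceptual sketch only, assuming (rather than knowing) how a dual-gain readout might be merged: the same pixel charge is read through a high-gain path that resolves the shadows and a low-gain path that holds the highlights, each path gets its own A-to-D conversion, and the two digital codes are combined afterwards, so no single analogue signal has two gains applied to it. The gain ratio, bit depth, and crossover point below are made-up illustration values, not ARRI's.

```python
# Conceptual dual-gain merge (illustration only, not ARRI's implementation):
# use the high-gain code while it is clean, cross-fade to the rescaled
# low-gain code as the high-gain path approaches clipping.

HIGH_GAIN = 16.0          # assumed gain ratio between the two readout paths
ADC_MAX = 2 ** 14 - 1     # assume both paths are digitised by 14-bit ADCs
KNEE = 0.85 * ADC_MAX     # start trusting the low-gain path near clipping

def merge(high_code: int, low_code: int) -> float:
    """Combine two readouts of one pixel into a single linear value,
    expressed in high-gain code units so the shadows keep their precision."""
    if high_code < KNEE:                      # high-gain path nowhere near clipping
        return float(high_code)
    scaled_low = low_code * HIGH_GAIN         # rescale low-gain path to match
    t = min((high_code - KNEE) / (ADC_MAX - KNEE), 1.0)
    return (1.0 - t) * high_code + t * scaled_low   # blend across the knee

print(merge(high_code=37, low_code=2))          # deep shadow: 37.0, fine quantisation kept
print(merge(high_code=ADC_MAX, low_code=9000))  # clipped high path: 144000.0, highlight recovered
```

The merged value keeps the high-gain path's fine shadow quantisation while the rescaled low-gain path extends the top end, which is broadly how a dual-gain scheme could deliver more usable stops than a single 14-bit conversion on its own.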