Showing results for tags 'log'.

Found 7 results

  1. This question has already been asked once, but that thread was 7 years ago, and since then much more information has become available to the public, so better explanations should also be possible. I'd like to take the example of ARRI's image processing pipeline. At a native ISO of 800 there are 7.8 stops above 18% diffuse reflectance available, so 18% x 2^7.8 = 4011% diffuse reflectance is the clipping point.

As far as I understand, all cinema camera sensors (but let's stick with ARRI) record light linearly and linearly convert the analog signal to 16-bit digital values (linear ADC), with the generated voltage proportional to the digital value (i.e. gamma = 1). Only then does the conversion from 16-bit linear values to 12-bit ARRI LogC take place. To make life simpler, let's also assume all of the 16-bit data is available for use, excluding noise and other limiting factors.

Now let's say I want to expose an 18% gray card so that it ends up as middle gray in 12-bit LogC (I can't remember exactly what IRE ARRI recommends, but let's assume it was 40% IRE). This is where I start getting confused. On a linear ADC, assuming we expose correctly so that the 18% gray card sits at 0.4486% linear IRE (because 18/4011 = 0.4486%):

18% / 4011% x 65535 = 294

This means there are only 294 tones from pitch black to the 18% gray card in the original data the linear sensor/ADC recorded. Yet once the 18% gray card is converted from 0.4486% linear IRE to 40% IRE in 12-bit log, there are 0.40 x 4095 = 1638 tones from pitch black to the gray card. Where are all these extra tones coming from? Is there interpolation happening here? The whole point of log recording is to allocate more bit values to the darker areas up to 18% gray, where we are more sensitive. Yet the logarithmic values are all just redistributions of the linear 16-bit values, if I understand correctly. Thus, there were never more than 294 tones of true information recorded between pitch black and 18% gray.

The only other thing I can think of is that when we expose the 18% gray card to sit at 40% IRE in 12-bit LogC, we are actually OVEREXPOSING in the linear readings, so that the 18% gray card is read as 1638/65535 x 4011% = 100% diffuse reflectance (or 2.5% linear IRE). That would give us 1638 tones of true information, because 100%/4011% x 65535 = 1638 in the 16-bit linear values.

I hope I explained my conundrum clearly. If not, feel free to ask and I'll try to explain again. Thanks.

PS: I know this doesn't help in any way with being a great cinematographer. I just got curious, haha.
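The arithmetic in the post above can be reproduced in a few lines. This is a minimal back-of-the-envelope sketch, not ARRI's actual pipeline: the 7.8-stop headroom and the 40% IRE mid-gray target are the figures assumed in the post, not published curve parameters.

```python
# Sketch: how many code values sit below an 18% gray card in
# 16-bit linear encoding vs. a 12-bit log encoding that places
# mid-gray at 40% of full scale (the post's assumed numbers).

stops_above_gray = 7.8
clip = 0.18 * 2**stops_above_gray           # ~4011% diffuse reflectance at clip

# Linear ADC: code value proportional to light
linear_codes_below_gray = round(0.18 / clip * (2**16 - 1))
print(linear_codes_below_gray)              # 294 codes from black to mid-gray

# 12-bit log: mid-gray placed at 40% of the code range
log_codes_below_gray = round(0.40 * (2**12 - 1))
print(log_codes_below_gray)                 # 1638 codes from black to mid-gray
```

This makes the mismatch explicit: below mid-gray, the 12-bit log signal has roughly 5.6x as many code values as the 16-bit linear source actually supplies.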
  2. Hello, my first post here. I shoot video as an enthusiast. I own a Fuji X-T30 that I use for stills, and I am learning how to use it the best way for video. I am trying to understand the path of image processing within the camera and in the editing software. My question is about comparing these two methods:

- use a built-in film simulation, like “Eterna” (low contrast, low saturation, aimed at giving a cinematic look out of the box)
- use the “log” profile, then apply the “log to Eterna” LUT provided by Fuji in an editing software.

This video compares those two paths: The author of the video, as well as most people in the comments, seems to agree that the “log+LUT” path is way better, with 2 stops more dynamic range and so on. I don't. I prefer the “built-in Eterna” path. I prefer the color (less greenish), and I find it retains more detail in the skin. Above all, if the “log+LUT” path seems to retain a little more detail in the highlights, it loses ten times that gain in the shadows.

OK, that might sound 100% subjective. But with my limited understanding of how all this works, I do not see how a “log+LUT” path could be better than “built-in” if one uses the same target color profile. My understanding is that either way, some kind of LUT is applied. When using the built-in film simulation, that LUT is applied to a high-quality set of data: full depth (14 bit or so), no lossy compression. When recording with a log profile, one starts from the exact same data, but first reduces it, then applies the LUT in the editing software. The reduction with a log profile is better optimized for dynamic range than a linear reduction, but it should still be inferior to using the full set of data.

This is a different story for stills, as “raw” really is RAW: there is no reduction in quality before applying image processing in an external software. For video, the overall image processing chain is the same; it is simply split at a different point between “built-in” and “external software” corrections. The trade-off is not quality, it is “amount of data” vs. “ability to correct later”. But video with a log profile is not “raw”. It seems that even Blackmagic or ARRI “raw” are not raw. My understanding is that “log+LUT” has no benefit if one uses a LUT similar to the film simulation built into the camera. It is only useful if one plans to use custom LUTs that have no built-in equivalent. So… what am I missing?
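The argument above (the same curve applied before vs. after a bit-depth reduction) can be sketched with toy numbers. The tone curve below is a simple gamma stand-in, not Fuji's Eterna simulation, and a plain linear 10-bit reduction stands in for the log intermediate; a log curve changes where the loss lands, not whether some occurs.

```python
# Toy comparison: apply one tone curve to full-depth data vs. to data
# already reduced to 10 bits, then count distinct 10-bit output codes.

def film_sim(x):              # stand-in tone curve, NOT Fuji's actual Eterna
    return x ** (1 / 2.2)

N14, N10 = 2**14, 2**10

# "Built-in" path: curve applied to 14-bit data, output stored in 10 bits
built_in = {round(film_sim(i / (N14 - 1)) * (N10 - 1)) for i in range(N14)}

# "Log+LUT" path (simplified): data reduced to 10 bits first, then the curve
reduced_first = {round(film_sim(i / (N10 - 1)) * (N10 - 1)) for i in range(N10)}

# The reduced-first path always ends with fewer distinct output codes
print(len(built_in), len(reduced_first))
```

In this toy model the early reduction costs output codes at both ends of the range, which is the shape of the poster's objection; whether it is visible in practice depends on the real curves and noise.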
  3. I hope this is the correct forum for my question. I'm using a Panasonic GH5 with V-Log L, and I'm a bit confused about using it. I was told that the reason for using log in general is to save the highlights (especially the sky in outdoor shoots). But in most forums, people advise overexposing the shot to get rid of the noise in the shadows, and doing that blows out the skies. So I think there is no sense in shooting log: just shoot with a standard profile and light the scene correctly, since V-Log can't save both highlights and shadows at the same time. The problem is, that's not easy when shooting run-and-gun outdoors in the morning with the sky in the shot. Am I thinking correctly, or am I missing something? Thanks! God bless!
  4. Hi, I had an opportunity to shoot a bodybuilder for a short documentary/story. This was guerrilla-style and I was only packing two 1x1 LEDs. I wanted to underexpose the shots to get that cut light to shape the muscles, but the shots picked up a lot of grain in the shadows, losing detail. This was shot in HFR mode with Canon 16-35mm and 24-70mm lenses at ISO 200-400, with these picture profile settings:

- gamma: Cine4
- black gamma: range wide, level +4
- black level: +2
- color mode: Pro

The editor said these shots cannot be used.

https://ibb.co/dTb5v9 https://ibb.co/h5ab2p https://ibb.co/fRpfTU https://ibb.co/h7C0TU https://ibb.co/cbBuoU https://ibb.co/jeAXa9 https://ibb.co/fvo2a9 https://ibb.co/iV5ha9

Do I need to expose the image normally and then bring it down in post? What am I doing wrong? What do I do if I have to underexpose properly in the future?
  5. Looking for some exposure help when shooting moody scenes with the FS5. I have seen a lot of people recommend setting the zebras to 70 percent (which makes things pretty bright on the monitor for a darker scene; is this something people only do in log?). Are there any other usable on-camera exposure tools besides zebras? I don't use Sonys much and am more used to using a Rec. 709 LUT or false color in my ALEXA eyepiece to expose. Also, should I be switching from S-Log3 to a cine gamma for lower-light situations? And any general tips for exposing low-light scenes without crushing the blacks?
  6. Digital cameras can do some amazing things nowadays considering where they were even five years ago. One thing I sometimes struggle to understand is how these newer cameras with 13+ stops of dynamic range actually quantize that information in the camera body.

One thing we know from linear A-to-D quantization is that dynamic range is a function of the number of bits in the converter chip. A 14-bit ADC can store, at best (ignoring noise for the moment), 14 stops of dynamic range. However, once we introduce noise (sensor, transfer charge, ADC, etc.) and linearity errors, there really aren't 14 meaningful stops. I did a lot of research on pipeline ADCs (which I believe are the type used), and the best one I could find, as measured by ENOB (effective number of bits), was the 16-bit ADS5560 from Texas Instruments; it measured an impressive 13.5 bits. If most modern cameras, the Alexa especially, are using 14-bit ADCs, how are they deriving 14 stops of dynamic range? I read that the Alexa has a dual-gain architecture, but how do you simultaneously apply different gain settings to an incoming voltage without distorting the signal? A pretty good read on this technology can be found in this Andor Technology Learning Academy article. Call me a little skeptical if you will. Not to pick on RED, but for the longest time they advertised the Mysterium-X sensor as having 13.5 stops (by their own testing). Many of the first sensors went into RED One bodies, which only have 12-bit ADCs. Given that, how were they measuring 13.5 in the real world?

Now, with respect to linear-to-log coding, some cameras opt for this conversion before storing the data on memory cards; the Alexa and cameras that use CineForm RAW come to mind. If logarithmic coding is understood to mean that each stop gets an equal number of values, aren't the camera processors (FPGA/ASIC) merely interpolating data like crazy in the low end? Let's compare a 14-stop camera that stores data linearly with one that stores it logarithmically. In a 14-bit ADC camera, the brightest stop is represented by 8192 code values (16383-8192), the next brightest by 4096 code values (8191-4096), and so on. The darkest stop (-13 below) is represented by only 2 values (1 or 0). That's not a lot of information to work with. Meanwhile, on our other camera, each of the 14 stops would get ~73 code values (2^10 = 1024, divided equally by 14) if we assume a 14-bit to 10-bit linear-to-log transform. As you can see, the brighter stops are more efficiently coded, because we don't need ~8000 values to see a difference, but the low end gets an excess of code values when there weren't many to begin with.

So I guess my question is: is it better to do straight linear A-to-D coding off the sensor and do logarithmic operations later, or is it better to do the logarithmic conversion in camera to save bandwidth when recording to memory cards? Panavision's solution, Panalog, shows the relationship between linear light values and logarithmic values after conversion in this graph:

On a slightly related note, why do digital camera ADCs have a linear response in the first place? Why can't someone engineer one with a logarithmic response to light, like film? The closest thing I've read about is the hybrid LINLOG technology at Photon Focus, which seems like a rather hackneyed approach. If any engineers want to hop in here, I'd be much obliged; or if your name is Alan Lasky, Phil Rhodes, or John Sprung; that is, anyone with a history of technical knowledge on display here. Thanks.
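The per-stop arithmetic in the post above can be tabulated in a few lines. A quick sketch, taking the post's simplification that an idealized log curve gives every stop an equal share of the code range:

```python
# Codes per stop: 14-bit linear encoding vs. an idealized 10-bit
# "equal codes per stop" log encoding over 14 stops.

BITS_LINEAR, BITS_LOG, STOPS = 14, 10, 14

# Linear: each stop down from clip halves the available code values
linear_per_stop = [2**BITS_LINEAR // 2**(s + 1) for s in range(STOPS)]
print(linear_per_stop[0])          # 8192 codes in the brightest stop
print(linear_per_stop[-1])         # 1 code in the darkest (2 counting zero, as in the post)

# Idealized log: every stop gets the same slice of the 10-bit range
print(round(2**BITS_LOG / STOPS))  # ~73 codes per stop
```

The halving sequence is why the top few stops dominate a linear encoding: the brightest two stops alone consume three quarters of all 14-bit code values.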
  7. I have been having an issue when color grading lately with the C300. After the grade, most blacks below 40 IRE seem to artifact like crazy on YouTube, whereas I don't see this in other people's work. I've tried these encode settings in Adobe Media Encoder:

- QuickTime MOV, H.264, 55,000 kbps, 1920 x 818
- QuickTime MOV, H.264, 35,000 kbps, 1920 x 818
- QuickTime MOV, H.264, 25,000 kbps, 1920 x 818
- QuickTime MOV, ProRes 444, 1920 x 818

But none of them are cutting it, and I don't know what I'm missing. My blacks are generally not brought down to 0 because the director wants a low-contrast look, and some scenes were shot very dimly with only one LED to light the scene. On the other hand, we lit darker scenes with several 1Ks and Litepanel LEDs, and while the highlights and midtones held up, the blacks still artifacted hard. Here is an unlisted link to a test video I made. Please don't share it outside the community. https://www.youtube.com/watch?v=4d3WQT4hcN4 Any and all info would be much appreciated!
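One way to quantify why low-contrast blacks are fragile: in 8-bit video-range encoding, everything under 40 IRE shares a small slice of the code range, so any requantization during YouTube's re-encode shows up as visible steps. A rough sketch, assuming the standard 16-235 "legal" range; the exact IRE-to-code mapping depends on the pipeline.

```python
# How many 8-bit video-range codes sit below 40 IRE?
# Assumes the standard legal range: 0 IRE -> code 16, 100 IRE -> code 235.

def ire_to_code(ire, black=16, white=235):
    return round(black + ire / 100 * (white - black))

codes_below_40 = ire_to_code(40) - ire_to_code(0)
print(codes_below_40)   # 88 codes carry the entire region under 40 IRE
```

With a lifted, low-contrast grade occupying only part of those ~88 codes, even mild compression quantization is a sizable fraction of the signal, which is consistent with the artifacting described above.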