Search the Community

Showing results for tags 'log'.

Found 5 results

  1. I hope this is the correct forum for my question. I'm using a Panasonic GH5 with V-Log L, and I'm a bit confused about how to use it. I was told that the reason for using log in general is to save the highlights (especially the sky in outdoor shoots), but most forums advise overexposing the shot to get rid of the noise in the shadows. Doing this blows out the skies, though. So I'm starting to think there's no sense in shooting log: just shoot a standard profile and light the scene correctly, since V-Log L can't save both highlights and shadows at the same time. The problem is that this isn't easy when shooting run-and-gun outdoors in the morning with the sky in the shot. Am I thinking correctly, or am I missing something? Thanks! God bless!
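     A way to picture the trade-off the post describes, as a minimal sketch: the headroom and shadow-depth figures below are placeholder assumptions, not measured GH5 V-Log L numbers, so substitute your own tests. The only point is that every stop of overexposure buys cleaner shadows at the cost of exactly one stop of highlight room for the sky.

     # Minimal sketch of the trade-off; ASSUMPTION: the stop counts below are
     # placeholders, not measured GH5 V-Log L figures.
     HEADROOM_STOPS = 5.0   # assumed stops above middle grey before clipping
     SHADOW_STOPS = 7.0     # assumed usable stops below middle grey before noise

     def exposure_window(push_stops):
         """Range the scene can cover above/below metered middle grey, in stops."""
         return HEADROOM_STOPS - push_stops, SHADOW_STOPS + push_stops

     for push in (0, 1, 2):
         up, down = exposure_window(push)
         print(f"expose +{push} stop(s): {up:.0f} stops left for the sky, "
               f"clean shadows down to about -{down:.0f} stops")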
  2. Hi, I had an opportunity to shoot a bodybuilder for a short documentary/story. This was a guerrilla shoot and I was only packing two 1x1 LED panels. I wanted to underexpose the shots to get that cut light shaping the muscles, but the shots picked up a lot of grain in the shadows and lost detail. The editor says the shots can't be used.

     This was shot on HFR mode with Canon 16-35mm and 24-70mm lenses at ISO 200-400, with these picture profile settings:
     • Gamma: Cine4
     • Black gamma: range wide, level +4
     • Black level: +2
     • Color mode: Pro

     https://ibb.co/dTb5v9
     https://ibb.co/h5ab2p
     https://ibb.co/fRpfTU
     https://ibb.co/h7C0TU
     https://ibb.co/cbBuoU
     https://ibb.co/jeAXa9
     https://ibb.co/fvo2a9
     https://ibb.co/iV5ha9

     Do I need to expose the image normally and then bring it down in post? What am I doing wrong, and how should I underexpose properly in the future?
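     On the "expose normally and bring it down in post" question, a minimal sketch (no particular camera assumed; read noise and clipping are ignored) of why that usually beats underexposing in camera: photon shot noise grows only with the square root of the signal, so the brighter exposure keeps a better signal-to-noise ratio even after it is darkened in the grade.

     # Simulate a flat patch two ways that end at the same final brightness:
     # underexposed 2 stops in camera vs. exposed normally and pulled down
     # 2 stops in post. Shot noise only; read noise and clipping are ignored.
     import numpy as np

     rng = np.random.default_rng(0)

     def snr_after_grade(photons, digital_gain):
         samples = rng.poisson(photons, size=100_000) * digital_gain
         return samples.mean() / samples.std()

     print("underexposed 2 stops in camera :",
           round(snr_after_grade(photons=250, digital_gain=1.0), 1))
     print("exposed normally, -2 stops post:",
           round(snr_after_grade(photons=1000, digital_gain=0.25), 1))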
  3. Looking for some exposure help when shooting moody scenes with the FS5. I have seen a lot of people recommending setting the zebras to 70 percent, which makes things pretty bright on the monitor for a darker scene; is this something people only do in log? Are there any other usable on-camera exposure tools besides zebras? I don't use Sonys much and am more used to exposing with a Rec. 709 LUT or false color in my ALEXA eyepiece. Also, should I be switching from S-Log3 to a Cine gamma for lower-light situations? And any general tips for exposing low-light scenes without crushing the blacks?
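     For calibrating zebras on a log image, a minimal sketch using the S-Log3 formula as given in Sony's published technical description (worth double-checking against the official white paper before relying on it): on this formula middle grey lands around 41 IRE and 90% white around 61 IRE, which helps explain why a 70% zebra reads as bright on a log picture and only trips on fairly hot highlights.

     import math

     def slog3(x):
         """Scene-linear reflectance (0.18 = middle grey) -> S-Log3, 0..1 full range."""
         if x >= 0.01125:
             return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
         return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

     def to_ire(code_fraction):
         """Map a full-range 10-bit fraction onto IRE over the legal 64-940 range."""
         return (code_fraction * 1023.0 - 64.0) / (940.0 - 64.0) * 100.0

     for label, reflectance in [("0% black", 0.0), ("18% grey", 0.18), ("90% white", 0.90)]:
         y = slog3(reflectance)
         print(f"{label:10s} code value {y * 1023:4.0f}  ~{to_ire(y):4.1f} IRE")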
  4. I have been having an issue when color grading lately with the C300. After the grade, most blacks below 40 IRE artifact like crazy on YouTube, whereas I don't see this in other people's work. I've tried these encode settings in Adobe Media Encoder:
     • QuickTime MOV, H.264, 55,000 kbps, 1920 x 818
     • QuickTime MOV, H.264, 35,000 kbps, 1920 x 818
     • QuickTime MOV, H.264, 25,000 kbps, 1920 x 818
     • QuickTime MOV, ProRes 4444, 1920 x 818

     None of them are cutting it, and I don't know what I'm missing. My blacks are generally not brought down to 0 because the director wants a low-contrast look, and some scenes were shot very dimly with only one LED to light the scene. On the other hand, we've lit darker scenes with several 1Ks and Litepanel LEDs, and while the highlights and midtones held up, the blacks still artifacted hard.

     Here is an unlisted link to a test video I made. Please don't share it outside of the community.
     https://www.youtube.com/watch?v=4d3WQT4hcN4

     Any and all info would be much appreciated!
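     One factor that may be at play, sketched minimally below (no particular camera or encoder assumed, and not a diagnosis of the clip linked above): a low-contrast shadow region occupies only a couple dozen 8-bit code values, so it quantizes into wide flat bands that a heavy re-encode can turn into blocks, and a touch of fine noise (dither/grain) added before export is a common way to break those bands up.

     import numpy as np

     rng = np.random.default_rng(0)

     # A smooth shadow ramp spanning roughly 0-10% of full scale (the "moody" part),
     # quantized to 8 bits plain vs. with a little dither noise added first.
     ramp = np.linspace(0.0, 0.10, 1920)
     plain = np.round(ramp * 255).astype(np.uint8)
     dithered = np.clip(np.round((ramp + rng.normal(0.0, 0.5 / 255.0, ramp.size)) * 255),
                        0, 255).astype(np.uint8)

     def mean_run_length(row):
         """Average width (in pixels) of runs of identical code values."""
         changes = np.flatnonzero(np.diff(row.astype(int)) != 0)
         return row.size / (changes.size + 1)

     print("distinct code values in the ramp:", np.unique(plain).size)
     print("mean flat-band width, plain    :", round(mean_run_length(plain), 1), "px")
     print("mean flat-band width, dithered :", round(mean_run_length(dithered), 1), "px")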
  5. Digital cameras can do some amazing things nowadays considering where they were even five years ago. One thing I sometimes struggle to understand is how these newer cameras with 13+ stops of dynamic range are actually quantizing that information in the camera body.

     One thing we know from linear A-to-D quantization is that dynamic range is a function of the number of bits of the converter chip. A 14-bit ADC can store, at best (and ignoring noise for the moment), 14 stops of dynamic range. However, once we introduce noise (sensor, transfer charge, ADC, etc.) and linearity errors, there really aren't 14 meaningful stops of dynamic range. I did a lot of research on pipeline ADCs (which I believe are the type used) and the best one I could find, as measured by ENOB (effective number of bits), was the 16-bit ADS5560 from Texas Instruments; it measured an impressive 13.5 bits.

     If most modern cameras, the Alexa especially, are using 14-bit ADCs, how are they deriving 14 stops of dynamic range? I read that the Alexa has a dual-gain architecture, but how do you simultaneously apply different gain settings to an incoming voltage without distorting the signal? A pretty good read-through on this technology can be found in the Andor Technology Learning Academy article on the subject. Call me a little skeptical if you will. Not to pick on RED, but for the longest time they advertised the Mysterium-X sensor as having 13.5 stops (by their own testing). Many of the first of those sensors went into RED One bodies, which only have 12-bit ADCs; given that, how were they measuring 13.5 stops in the real world?

     Now, with respect to linear-to-log coding, some cameras opt for this type of conversion before storing the data on memory cards; the Alexa and cameras that use CineForm RAW come to mind. If logarithmic coding is understood to mean that each stop gets an equal number of values, aren't the camera processors (FPGA/ASIC) merely interpolating data like crazy in the low end?

     Let's compare a 14-stop camera that stores data linearly with one that stores it logarithmically. In the 14-bit linear camera, the brightest stop is represented by 8192 code values (8192-16383), the next brightest by 4096 code values (4096-8191), and so on; the darkest stop, 13 stops down, is represented by only 2 values (0 or 1). That's not a lot of information to work with. Meanwhile, if we assume a 14-bit to 10-bit linear-to-log transform on the other camera, each of the 14 stops would get ~73 code values (2^10 = 1024 divided equally among 14 stops). The brighter stops are coded more efficiently, since we don't need ~8000 values to see a difference there, but the low end gets an excess of code values where there weren't very many to begin with.

     So I guess my question is: is it better to do straight linear A-to-D coding off the sensor and apply logarithmic operations later, or to do the logarithmic conversion in camera to save bandwidth when recording to memory cards? Panavision's solution, Panalog, shows the relationship between linear light values and logarithmic values after conversion in its published transfer curve.

     On a slightly related note, why do digital camera ADCs have a linear response in the first place? Why can't someone engineer one with a logarithmic response to light, like film? The closest thing I've read about is the hybrid LINLOG technology at Photon Focus, which seems like a rather hackneyed approach.
     If any engineers want to hop in here, I'd be much obliged, or if your name is Alan Lasky, Phil Rhodes, or John Sprung; that is, anyone with a history of technical knowledge on display here. Thanks.
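     To put numbers on the comparison in the post, a minimal sketch: the "log" column is the idealized equal-values-per-stop case described above, not any real camera's transfer curve.

     # Compare how many code values describe each stop in a straight 14-bit
     # linear encoding vs. a 10-bit encoding that gives every stop an equal
     # share (the idealized log case from the post).
     LINEAR_BITS, LOG_BITS, STOPS = 14, 10, 14
     per_stop_log = 2 ** LOG_BITS / STOPS                       # ~73 values per stop

     print(f"{'stop':>4} {'linear values':>14} {'log values':>11}")
     for stop in range(STOPS, 0, -1):                           # 14 = brightest, 1 = darkest
         linear_values = 2 ** LINEAR_BITS // 2 ** (STOPS - stop + 1)
         print(f"{stop:>4} {linear_values:>14} {per_stop_log:>11.0f}")
     # The brightest stop hoards half of all 16384 linear codes while the deepest
     # stops are left with one or two, whereas the equal-share 10-bit encoding
     # spends ~73 values on every stop.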