Jon Pais

Basic Member
  • Posts: 29
  • Joined
  • Last visited

Profile Information

  • Occupation: Cinematographer
  • Location: Vietnam


  1. One year ago, almost to the day, we published a video entitled Why Are HDR Shows So Dark?, which turned out to be one of our most successful rants on YouTube. We’ve since identified no fewer than half a dozen contributing factors: shows continue to be lit in an SDR environment; fewer than 1% of productions are even monitored in HDR, and not infrequently the very first time anyone sees the picture in HDR is in the colorist’s suite; from there, it’s practically a foregone conclusion that the HDR version of the show won’t depart radically from the SDR one; colorists have been known to compromise the grade in order to mitigate motion artifacts; and lastly, it’s no secret that more than a few filmmakers and colorists are either ambivalent about or openly hostile toward HDR, preferring the low-con look of traditional film instead.

In and of themselves, apart from the absence of the bright specular highlights that are one of the signature characteristics of high dynamic range video, none of these assaults on HDR would necessarily make the image any darker than SDR – except that SDR’s relative approach to gamma means the picture can always be made brighter in a room with high ambient light, whereas PQ-based ST 2084 HDR is an absolute standard in which neither the EOTF nor the peak brightness can be increased, with the result that the SDR version could very well end up brighter than the HDR one.

Enter Michael Zink, Vice President of Emerging & Creative Technologies at WarnerMedia, who, in the course of a highly illuminating interview on The Display Show, proposes several more reasons why HDR shows might be too dark:

“So, content metadata is this concept of describing technical parameters of the content itself so that that information can be provided to a display and the display can make better choices, especially when it comes to things like tone mapping. Now, you might recall that in the HDR10 format, for instance, most people focus on the fact that it is using SMPTE 2084, the PQ curve, in terms of the encoding, but it also includes SMPTE 2086, which is mastering display metadata. Now, that metadata describes what mastering monitor I used, but that doesn’t really say anything about the content. So, you can have the most sophisticated mastering monitor and maybe I’m creating a piece of content that is in black and white. So describing the monitor, while helpful, doesn’t really tell me the full story.

“So, what we ended up doing was to come up with additional metadata that can go along as part of the HDR10 format, and at the time it was really this notion of let’s at least create some sort of what we call static metadata that at least describes the two terms MaxCLL and MaxFALL. MaxCLL is maximum content light level, which essentially describes the brightest pixel in the entire film, and MaxFALL is maximum frame average light level. You can equate that to an APL essentially: what is the brightest overall frame in the entire movie. Now, the reason we invented that is, as I mentioned earlier, that as part of the discussions inside the Blu-ray Disc Association at the time, there was this distinction between how bright you need content to be for speculars versus for the entire frame, so these two parameters were developed, and Mike Smith, one of my collaborators here at Warner, came up with them at the time to really help describe those content parameters.
“Now, where they become really useful – let me give you a real-world example – is that let’s say you have a display that is capable of 750 nits, yet you’ve mastered on a Pulsar, which is capable of mastering at 4,000 nits. If you, for instance, only have a piece of content that isn’t very bright – maybe the MaxCLL is only 500 nits – the display actually doesn’t need to do any sort of tone mapping. Yet if you don’t use that metadata and you don’t look at the content metadata itself, and instead you just look at what mastering monitor was used, you would take whatever is in the master, assume it’s mastered to 4,000 nits, and map it all the way down to 750 nits, which means the brightest pixel in that piece of content – 500 nits – will now be displayed much lower than that, and the content will end up looking very dark. And I think we’ve seen a lot of complaints early on when it came to HDR from consumers saying HDR looks too dark; and I think a lot of those instances were caused by those types of bad judgments – probably the wrong term – but certainly by using the wrong type of information. So I think it’s always helpful to use as much information as possible, and it would be great for display manufacturers to really pay attention to the different types of metadata that are available, and we wanted to make sure that we are providing information about what content metadata is there. Now, as I said, the thing about static metadata is that it’s static: it just describes one snapshot over the entire feature film. There’s obviously much richer metadata – dynamic metadata – that describes that frame by frame; and for a lot of content, that metadata is available as well; and I think manufacturers should choose to use that one, simply because it gives them more information. I think from a display perspective, more information is typically better if you want to maintain the creative intent.”

Michael Zink then goes on to explain how so-called outlier pixels – very bright pixels that are unintended – can skew MaxCLL metadata, in turn distorting tone mapping; and how some television manufacturers are simply ignoring metadata altogether. (A minimal sketch of this metadata-driven tone-mapping decision appears after the list of posts below.)
  2. It’s recommended to blur the image slightly when adding grain (see the sketch after this list of posts).
  3. That's why his website is called Celluloid Dreaming.
  4. The most esteemed filmmakers and colorists on the planet use film print emulation LUTs. Cullen Kelly has spoken quite eloquently about our rich heritage of film color:

“The final fundamental to grading photographically is to use a good print stock. This concept is largely forgotten today, but for a century or more, print stock played a key role in defining the look of a film, providing a consistent baseline of creative contrast and color imagery and helping visually unify the images.” – Cullen Kelly

“[…] I really do feel, as a hardcore devotee of traditional film print emulation and of borrowing from the incredibly detailed and fine work that’s been done over the course of the last century with color science as regards film negative and film prints and that whole system, that’s the best way of mastering aesthetically pleasing images that we’ve ever come up with as a species, by far. So we have a huge debt that we owe to those things, and we owe a lot of diligence in terms of understanding how that worked and using that as our baseline, instead of saying ‘I’m gonna write my own script and play around with my lift and gamma and gain’ – all the work’s already been done, and until we can do it better, that’s really where we owe our diligence, in my opinion.” – Cullen Kelly
  5. The print options in Dehancer Pro 5.0.0 are beautiful. Here's a short video I just uploaded to YT in HDR using Konica Centuria 100 DNP and Kodak 2383. And here is more information about how I color corrected the picture and settings in Dehancer.
  6. And here's another free download of Sony a7s III ProRes RAW HQ 4.2K 23.976 HDR footage.
  7. The footage we've seen from the DJI 4D looks amazeballs; and that DJI has committed to working with the following DPs on their upcoming works with the 4D is an arrangement unprecedented in our experience: Rodney Charters, ASC, CNSC, NZCS; Takuro Ishizaka, JSC; Rachel Morrison, ASC; XiaoShi Zhao, CNSC; and Academy Award winners for Best Cinematography Erik Messerschmidt, ASC; Claudio Miranda, ASC; and Peter Pau, HKSC. Something else that's unprecedented in the business is for anyone to evaluate a pre-production camera in light of its HDR capabilities. For certain, if there are any weaknesses in a system – whether it's sharpness, noise in the shadows, banding, weak dynamic range or poor handling of highlight roll-off – those problems are unquestionably going to be exacerbated in HDR. So it was with great interest that we listened to DP Erik Messerschmidt – an HDR evangelist – talk about how the DJI footage looked in Dolby Vision during the launch event: "[…] BaseLight, which opens it natively. It seemed to have plenty of dynamic range. It was sharp, we didn't see any aliasing issues, we didn't see any banding or breakup. It's clearly got plenty of bit depth and dynamic range. And we were looking at it in HDR actually, so we looked at it in Dolby PQ in DCI-P3 and it looked fabulous, it was on par with anything else, for sure." – Erik Messerschmidt
  8. Having seen the footage and watched a bunch of reviews, I'm going to have to pass on the Sony a7 IV as a second body to my a7s III. The poor rolling shutter, the 60p crop, worse color, and less underexposure latitude make the latest alpha body one to skip. On the other hand, the a7s III is pretty spectacular once you balance the color; and I've yet to see any footage from the a1 that surpasses it. Download ProRes RAW HQ 4K 23.976 HDR sample footage.
  9. Dehancer OFX 5.0.0 with Kodak Vision 2383 is awesome, Anfisa. I hope you add Sony a7s III S-Log3 S-Gamut3 (not S-Gamut3.cine) to the growing list of supported cameras.
  10. First of all, 'the guy' has a name: Stephen Perera. Secondly, Perera specifically says "the cinematography was great aside from a few needless anamorphic flares that’s pandering to the YouTube generation that equates flares with great camera work haha". Linus Sandgren is not pandering to anybody. I suggest you cool your jets.
  11. DP Linus Sandgren chose the Panavision G Series anamorphic primes for the Panaflex Millennium XL2 because he was inspired by Allen Daviau's work on E.T. the Extra-Terrestrial. Says Sandgren, "I wanted a lens that would be a workhorse but also have beautiful flares that weren't overly dramatic..." Nothing whatsoever to do with pandering to anybody, let alone the YouTube audience.
  12. I'm not confusing anything, Tyler. Getting HDR to display correctly, either on my TV or phone, has never been an issue. Over 30% of North American homes have 4K HDR television sets, over 200 models of smartphones have OLED HDR panels, and more than a quarter of devices connected to streaming giant Netflix are configured for HDR.

The reason most HDR shows are flat is that they continue to be lit in an SDR environment, they're monitored in SDR, and the very first time someone sees the show on an HDR display is in the grading suite. And it doesn't end there. While colorists do have access to the tools to see HDR, that alone is no guarantee that the master file will preserve all the dynamic range, tonality and color envisioned by the director and the DP. For example, the decision in post-production to constrain the levels in the HDR pass to maintain consistency with the SDR version can prevent HDR from taking wing. Not infrequently, a project gets the green light for an HDR master after the fact; both the post-production house and the producer preemptively rule out a version that departs dramatically from the SDR version, with the result that HDR turns out to be little more than a marketing gimmick. Another seldom-discussed issue is that colorists resort to compromising their HDR grades in order to avoid judder artifacts.

Getting back to the topic, several reviewers have reported their extreme joy at seeing the jet blacks and bright highlights made possible by watching No Time to Die in a Dolby Cinema, with projectors using dual 4K Christie lasers that emit 31 foot-lamberts, not to mention the incredible audio, which the director himself was thrilled with.
  13. I don't like SDR. I see everything in HDR. SDR is flat and lifeless. I've read that the HDR version of the Bond film was sensational.
  14. So nothing whatsoever to do with YouTube. I watch YouTube, even occasionally upload to YouTube and the films I watch don't abuse flare. You might as well blame the tooth fairy or Santa Claus.
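To make the tone-mapping discussion in post 1 concrete, here is a minimal sketch in Python of how a display could use HDR10 static metadata (MaxCLL/MaxFALL) rather than the mastering-monitor peak alone. It is my own illustration of the logic Michael Zink describes, not code from WarnerMedia or any TV manufacturer; the function names are invented for the example, and the numbers (750-nit display, 4,000-nit master, 500-nit MaxCLL) are taken from the interview.

```python
import numpy as np

def compute_static_metadata(frames_nits):
    """Derive HDR10-style static metadata from per-frame luminance maps (in nits).

    MaxCLL  (maximum content light level)      = brightest single pixel in the programme.
    MaxFALL (maximum frame average light level) = highest frame-average luminance.
    """
    max_cll = max(float(frame.max()) for frame in frames_nits)
    max_fall = max(float(frame.mean()) for frame in frames_nits)
    return max_cll, max_fall

def needs_tone_mapping(display_peak_nits, mastering_peak_nits, max_cll=None):
    """Decide whether the display has to tone-map.

    Without content metadata, the display can only compare its own peak against
    the mastering monitor (e.g. a 4,000-nit Pulsar) and will compress everything,
    even content that never exceeds 500 nits, which is what makes the picture
    look too dark. With MaxCLL available, tone mapping can be skipped whenever
    the content itself already fits within the display's range.
    """
    content_peak = max_cll if max_cll is not None else mastering_peak_nits
    return content_peak > display_peak_nits

# Two toy "frames" of luminance values in nits.
frames = [np.array([[100.0, 500.0], [80.0, 120.0]]),
          np.array([[60.0, 90.0], [70.0, 110.0]])]
max_cll, max_fall = compute_static_metadata(frames)    # 500.0, 200.0

# The example from the interview: 750-nit display, 4,000-nit master, MaxCLL 500.
print(needs_tone_mapping(750, 4000))                    # True  -> mapped down, looks dark
print(needs_tone_mapping(750, 4000, max_cll=max_cll))   # False -> shown as mastered
```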
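And for the tip in post 2, a minimal sketch, again in Python (NumPy plus SciPy), of softening the image slightly before laying grain over it. This is my own illustration, not Dehancer's pipeline; the blur radius and grain strength are arbitrary example values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_grain_with_pre_blur(image, blur_sigma=0.6, grain_strength=0.03, seed=0):
    """Slightly blur `image` (float array in [0, 1], shape (H, W) or (H, W, 3)),
    then overlay Gaussian grain, so the grain sits 'in' the picture rather than
    on top of crisp digital edges."""
    # Blur spatially only; leave the color channels alone on RGB images.
    sigma = (blur_sigma, blur_sigma, 0) if image.ndim == 3 else blur_sigma
    softened = gaussian_filter(image, sigma=sigma)
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, grain_strength, size=image.shape)
    return np.clip(softened + grain, 0.0, 1.0)

# Example: a flat mid-grey test frame.
frame = np.full((1080, 1920, 3), 0.5)
grained = add_grain_with_pre_blur(frame)
```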