
It's 2021 and digital capture still looks like sh


Karim D. Ghantous


32 minutes ago, M Joel W said:

Thanks, I didn’t key it. It’s just a lum vs sat curve roughly like the image attached.

That’s from Lumetri but Resolve has the same curve.

[Attached: screenshot of a Lum vs Sat curve in Lumetri]

Interesting, I tried that in Resolve too. Perhaps an order of operations issue. 


This might be me overthinking things (as usual). But I think, even if you’re grading film, Resolve doesn’t obey the same rules as film, and you can end up introducing colors that wouldn’t be there naturally, over-saturated highlights in particular. For instance, by shifting lift to blue and gain to orange, the highlights might get too warm in an artificial “digital” way. So even if ARRI is clamping down on over-saturated highlights with the Alexa (not that they always do; the brightest areas can clip a little weird, IMO), Resolve isn’t. So I try to apply lum vs sat last. Anyway, that’s my thinking.

12 hours ago, M Joel W said:

This might be me overthinking things (as usual). But I think, even if you’re grading film, Resolve doesn’t obey the same rules as film, and you can end up introducing colors that wouldn’t be there naturally, over-saturated highlights in particular. For instance, by shifting lift to blue and gain to orange, the highlights might get too warm in an artificial “digital” way. So even if ARRI is clamping down on over-saturated highlights with the Alexa (not that they always do; the brightest areas can clip a little weird, IMO), Resolve isn’t. So I try to apply lum vs sat last. Anyway, that’s my thinking.

Yes, I agree with your assessment. My usual workflow in Resolve is to use serial nodes for primary corrections, with the LUT node (if there is one) last in the chain. Then parallel nodes off of the first node for secondaries, so that they’re working from the full data. I should try the Luma vs Sat curve after the LUT node (or just use a color space transform instead). Thanks Joel!
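
To make the serial vs parallel distinction concrete, here is a toy sketch in Python. It is illustrative only: the averaging mixer is a stand-in, not Resolve's actual node engine.

```python
# Toy model of serial vs parallel nodes (a simplification, not Resolve's engine).
def serial(x, nodes):
    # Each node feeds the next, so node 2 sees node 1's output.
    for f in nodes:
        x = f(x)
    return x

def parallel(x, nodes):
    # Every node reads the SAME source value; the mixer blends the outputs.
    outs = [f(x) for f in nodes]
    return sum(outs) / len(outs)

warm = lambda v: v * 1.2   # stand-in "correction" nodes
cool = lambda v: v * 0.8

print(serial(0.5, [warm, cool]))    # 0.48 -- cool worked on warm's output
print(parallel(0.5, [warm, cool]))  # 0.50 -- both worked from the full source data
```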

7 hours ago, Satsuki Murashige said:

Yes, I agree with your assessment. My usual workflow in Resolve is to use serial nodes for primary corrections with the LUT node

Interesting, do you use custom LUTs or stock ones? I actually create my own LUTs because I was so dissatisfied with the stock ones.

4 hours ago, Tyler Purcell said:

Interesting, do you use custom LUTs or stock ones? I actually create my own LUTs because I was so dissatisfied with the stock ones.

Generally always a custom one that I make per project in prep and also use in camera for monitoring.

I tend to think of it like a custom printer light - using the provided camera LUT is a bit like getting your film dailies printed at 25-25-25. It’s not going to be ideal if you’re going for a specific look. It doesn’t have to be extremely different from stock, but every little bit of specificity helps.

10 hours ago, Satsuki Murashige said:

It doesn’t have to be extremely different from stock, but every little bit of specificity helps.

Ah got ya. So do you do this based on pre-production camera tests?

I've kinda always joined the projects I've colored in post, so I haven't been through the process of making a grade early.

On 3/4/2021 at 5:07 PM, Satsuki Murashige said:

Then parallel nodes off of the first node for secondaries so that they’re working from the full data

I was under the impression that Resolve always works from the full data set, even with serial nodes, and that the reduction of data only happens at final output. Am I wrong?

EDIT: Meaning I could have a first node that changes exposure, clipping half the image, then a second serial node counteracting the overexposure, and I get the highlights back… I have to test this.

2 hours ago, David Sekanina said:

I was under the impression that Resolve always works from the full data set, even with serial nodes, and that the reduction of data only happens at final output. Am I wrong?

EDIT: Meaning I could have a first node that changes exposure, clipping half the image, then a second serial node counteracting the overexposure, and I get the highlights back… I have to test this.

My understanding is that each serial node affects the next one in the chain, which is why it’s often recommended to put LUTs last in the chain instead of first. But I could be wrong; I’m not a colorist, nor that well-versed in Resolve, really. I’ve just found a method that works for me and keep it simple. I’d suggest asking a real colorist on the LiftGammaGain forum!

4 hours ago, Tyler Purcell said:

Ah got ya. So do you do this based on pre-production camera tests?

I've kinda always joined the projects I've colored in post, so I haven't been through the process of making a grade early.

Yep, I shoot tests in prep based on the look we want. For example, this was the first time I’ve shot at 1600EI on the Alexa Mini combined with underexposing to capture more highlight detail (thanks for the tip Miguel Angel!). So I had to test that. Also shot a LowCon filter test, and an eyeball macro shot test I posted in another thread. Since they all have to be rendered out and presented to the director, it’s a good chance to also grade and make a few LUTs.

In this case, we just used one custom ARRILook that was very similar to the Alexa LUT, but with lower highlights, deeper blacks, and a bit less green in the midtones. Once you make an ARRILook in the ARRI Look Creator software, you can load it into the camera and also convert it to 64x64x64 .cube LUTs for Resolve and 33x33x33 LUTs for monitors on set. On other projects, I might have two or three to represent different story locations: cool for rainy London, warm for sunny California, yellow-green for ‘exotic’ Madrid. Of course, that’s in combination with changes to lighting, production design, etc.

I do my best to make sure the LUTs get sent along with the footage to post, though at that point they can disregard them and start working from the Log footage if they so choose. But I think they usually find it helpful to see exactly what we were looking at on set, at least as a starting point.


If I ever got a project where the colorist was hired before shooting, then I would probably run this through them and have them grade the tests and make the LUTs. But usually on the projects I’m on, we don’t know if there will be budget left for a colorist at that point. So I do it partially to protect myself and make sure that even if all they do is have the editor apply the LUT at the end, it will look no worse than it did on set. 

1 hour ago, Satsuki Murashige said:

My understanding is that each serial node affects the next one in the chain

I just did a quick test: adjusted the gain on a 10-bit log DPX film scan in the first serial node so the image clipped massively, then turned the gain back down in the second serial node. All clipped areas recovered. While the first node does affect the second, no data is lost in the process as far as I understand. I'm not a colorist either.

[Attached: screenshot of the Resolve node test]
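
In float, that math is trivially reversible; a minimal numpy sketch of the same test (illustrative, not Resolve's internals):

```python
import numpy as np

scan = np.array([0.20, 0.70, 0.95], dtype=np.float32)  # normalized pixel values

node1 = scan * 4.0   # big gain: 0.8, 2.8, 3.8 -- the viewer shows the top two as clipped
node2 = node1 / 4.0  # second serial node turns the gain back down

print(np.allclose(node2, scan))  # True: the out-of-range values rode through in float
```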

EDIT: it's more complicated:

https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=84615

 

11 hours ago, Satsuki Murashige said:

I do my best to make sure the LUTs get sent along with the footage to post, though at that point they can disregard them and start working from the Log footage if they so choose. But I think they usually find it helpful to see exactly what we were looking at on set, at least as a starting point.

Do you think the LUTs add anything but consistency from production through post? 

Also, how do you upload the LUTs to the camera body? Can you do it on a media card, or does the system with the software have to be on set?


18 hours ago, David Sekanina said:

I just did a quick test: adjusted the gain on a 10-bit log DPX film scan in the first serial node so the image clipped massively, then turned the gain back down in the second serial node. All clipped areas recovered. While the first node does affect the second, no data is lost in the process as far as I understand. I'm not a colorist either.

 

There are a few things going on here. First off, any node-based visual FX software works the same way. Each node is a math expression applied to the footage. In a two-node system, the first is nested inside the second. So whether node order matters depends on whether the order of operations matters for the math. If the first node is gain, then you are simply multiplying each pixel by a number. If the gain is set to 1.3 and x stands for each pixel value, the expression is 1.3 * x. The software stores that output in another variable, say y, and inputs it to the second node. Let's say that's a gamma correction whose expression is the square root of the image, or z^(0.5) (z to the one-half power is the same as the square root), where z represents all the pixel values at the second node. Now input y for z: (1.3*x)^(0.5). The question is, if you do the operations backwards, do you get the same thing? Not really; you get 1.3*(x^(0.5)). If you input the pixel value 0.6, you get 0.883 the first way and 1.007 the opposite way. So for some operations the order matters. It's the same as if you multiply in one node and apply a log curve in another: 2 * log(x) is not the same as log(2*x). I strongly recommend messing around with desmos.com and plugging those two expressions in to see how vastly different the curves are.
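
Plugging those two expressions straight into Python shows the difference:

```python
x = 0.6                               # a pixel value

gain_then_gamma = (1.3 * x) ** 0.5    # node 1 = gain 1.3, node 2 = square-root gamma
gamma_then_gain = 1.3 * (x ** 0.5)    # the same two operations, reversed

print(round(gain_then_gamma, 3))      # 0.883
print(round(gamma_then_gain, 3))      # 1.007 -- past the display's 1.0 ceiling
```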

The second thing is encoding. It's important to know that images are just spreadsheets of numbers; they can be anything. The caps that happen to those numbers are due to the monitor. It has a clipping function where any number past a threshold gets saved as the threshold, usually 1 when image values have been normalized between 0 and 1. BUT, that only happens when you encode, or save, the numbers. If they haven't been saved, you can have pixel values at 99,000 when the monitor can only show 0-1. As long as you don't encode those values, the clipping function won't clip the actual data; the computer still knows the original values, and you can bring them back into range with an inverse function, non-destructively.
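
A small sketch of that clip-at-encode behavior, using a hypothetical 8-bit encode function:

```python
import numpy as np

vals = np.array([0.5, 1.7, 99000.0])  # internal float values; no ceiling applied yet

def encode_8bit(x):
    # The clipping function: anything past the threshold (1.0) is saved AS the
    # threshold, then quantized to the file's integer range.
    return np.round(np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

print(encode_8bit(vals))  # [128 255 255] -- 1.7 and 99000 are now the same number
# Before the encode, the computer still knew 1.7 and 99000, so an inverse function
# (dividing the gain back out, say) could have recovered them non-destructively.
```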

What many people miss is that when you encoded or shot the footage with a log curve, there is already a math expression applied to the footage. So right off the bat, none of the color grading tools really work as advertised: gain will affect the black point, and lift will affect the clipping point of the footage. So in some image processing workflows you have to go back to linear (as long as the manufacturer isn't hiding the expression for how to do that), so there is no math applied to the image before you work on it.
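
A toy example of the decode-grade-re-encode idea. The curve here is a made-up pure log; real camera curves like LogC or S-Log3 add a linear toe, so don't use this on actual footage:

```python
import numpy as np

def lin2log(x): return (np.log2(x) + 8.0) / 8.0  # linear 1/256..1 -> code value 0..1
def log2lin(y): return 2.0 ** (y * 8.0 - 8.0)

grey = 0.18                          # mid-grey in linear light
code = lin2log(grey)                 # what the "camera" recorded

print(log2lin(code * 1.2))           # ~0.39: gain on log code values is not exposure
print(log2lin(lin2log(grey * 1.2)))  # ~0.216: decode, grade in linear, re-encode
```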

Grading before or after the LUT depends on the operation you are trying to do and how complex the LUT is. For example, you don't want to do skin tone correction before the contrast curve and matrix adjustment: if the values are log, they will be hard to key. That might be better after the LUT, but scaling and offsetting is probably not good before it.

11 hours ago, Tyler Purcell said:

Do you think the LUTs add anything but consistency from production through post? 

Also, how do you upload the LUTs to the camera body? Can you do it on a media card, or does the system with the software have to be on set?

It’s consistency but also communication. You’re telling everyone, I want the final image to look like this. You light and expose to the LUT, just as you would to your own dailies printer lights. The director can say whether they think it’s too dark or too green while you’re shooting. The editor sees what we saw on set. I believe it helps them to edit when the footage looks close to how it will be in the end. And of course, you still have the Log image to go back to if any changes need to be made. I find it to be very similar to shooting on film in that sense.

Re: uploading to camera

Really depends on the specific camera in question. Alexas take ARRI Looks, a proprietary format you have to make in their free ARRI Look Creator software, which is not very flexible. The nice thing about these Looks is that they are part of the camera metadata, so they should follow the footage to post. But they can’t be used in monitors or grading software, so you also need to export .cube LUTs for other uses. You put them on an SD card and save them to the camera.

Most Sony cameras (F5/55, Venice, FX9, FX6, FS7) as well as most SmallHD monitors accept standard 33x33x33 .cube LUTs, usually on an SD card. Sony recently added an option called Advanced Rendering Transform in the Venice to apply Looks before video processing, supposedly resulting in fewer image artifacts: https://www.newsshooter.com/2020/04/30/sony-advanced-rendering-transform/
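
For what it’s worth, a .cube file is just text, so they’re easy to generate yourself. A minimal sketch that writes a 33x33x33 identity LUT (swap the placeholder identity function for any RGB look function to bake a look in):

```python
# Writes a 33x33x33 identity .cube LUT, the size most Sony cameras and
# SmallHD monitors accept.
N = 33

def identity(r, g, b):
    return r, g, b

with open("identity_33.cube", "w") as f:
    f.write("LUT_3D_SIZE %d\n" % N)
    for b in range(N):          # .cube convention: red varies fastest, blue slowest
        for g in range(N):
            for r in range(N):
                f.write("%.6f %.6f %.6f\n" % identity(r/(N-1), g/(N-1), b/(N-1)))
```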

Red cameras only accept 17x17x17 .cube LUTs, and you have to take their IPP2 transforms into account since I don’t believe you can apply a LUT to a straight Log3G10 image in-camera yet. You also have to put them on RedMags and the process to import them into the camera is a bit convoluted.

You can also apply LUTs on a Teradek Bolt XT receiver. Not sure about Canon or Blackmagic cameras. 

Of course, you can also get a LUT box and have the DIT apply them from their cart, but at that point it’s a bit convoluted to me. Fine for studio multi-camera shooting, but you’re back to being tethered on location. What’s the point of all this wireless technology if you’re just going to end up tethered to a DIT cart? That’s assuming there will be a DIT on the job, almost never in my case. 


On 3/5/2021 at 12:41 PM, David Sekanina said:

I just did a quick test: adjusted the gain on a 10-bit log DPX film scan in the first serial node so the image clipped massively, then turned the gain back down in the second serial node. All clipped areas recovered. While the first node does affect the second, no data is lost in the process as far as I understand. I'm not a colorist either.

If you apply a LUT to a node and the highlights clip, you cannot recover them in the next serial node.


15 hours ago, Ryan Emanuel said:

The thing is, no LUT should ever do that.

Agreed. But that's how they work, at least in Resolve. That is one of the reasons a lot of people no longer use LUTs. The best approach in Resolve is to use the Color Space Transform effect and select gamut and luminance mapping. You can apply that in an early node and you are guaranteed nothing will be lost afterwards. 
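
The difference can be sketched with a toy soft-clip rolloff standing in for the luminance mapping (not Resolve's actual math):

```python
import numpy as np

def hard_clip(x):
    return np.clip(x, 0.0, 1.0)

def soft_rolloff(x, shoulder=0.8):
    # Toy luminance mapping: linear below the shoulder, asymptotic to 1.0 above it.
    over = np.maximum(x - shoulder, 0.0)
    knee = 1.0 - shoulder
    return np.where(x <= shoulder, x, shoulder + knee * over / (over + knee))

highs = np.array([0.9, 1.2, 2.0, 4.0])
print(hard_clip(highs))     # [0.9 1. 1. 1.] -- everything above 1.0 flattened
print(soft_rolloff(highs))  # ~[0.867 0.933 0.971 0.988] -- highlight detail survives
```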

10 hours ago, Raymond Zananiri said:

Agreed. But that's how they work, at least in Resolve. That is one of the reasons a lot of people no longer use LUTs. The best approach in Resolve is to use the Color Space Transform effect and select gamut and luminance mapping. You can apply that in an early node and you are guaranteed nothing will be lost afterwards. 

I think that’s fine if you’re not relying on communicating a specific look to post. From a DP’s point of view, the problem is that this puts control over the camera’s base look back into the hands of the colorist, rather than the cinematographer.

One other workflow method has been suggested by Michael Cioni of Frame.io: 

https://blog.frame.io/2020/04/23/protecting-your-image-hurlbut-academy/

https://youtu.be/IxmfkcXlnDc/t=51m38s

1. Use a standardized Display LUT for the bulk of the transform work from Log to Rec.709, Rec.2020, P3, etc. This can be changed independently based on the display and output required.

2. Make a Creative Show LUT that gets applied to all clips on top of the Display LUT which broadly communicates the DP’s creative intent. 

3. Make ASC CDLs with the DIT on a per-scene basis that get baked into the metadata and follow the footage all the way through post. The CDLs would be for smaller, more specific corrections that communicate the DP’s intent (see the sketch after this list).
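
The CDL itself is just slope, offset, power, and saturation per the ASC spec, which is why it travels so easily as metadata. A minimal sketch of the standard math (the grade numbers are made up):

```python
# ASC CDL: per-channel slope/offset/power, then a single saturation value.
def asc_cdl(rgb, slope, offset, power, sat):
    # Slope-offset-power per channel (negatives clamped before the power function).
    sop = [max(c * s + o, 0.0) ** p for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation is applied against Rec.709 luma.
    luma = 0.2126 * sop[0] + 0.7152 * sop[1] + 0.0722 * sop[2]
    return [luma + sat * (c - luma) for c in sop]

# A warm, slightly lifted scene correction on mid-grey:
print(asc_cdl([0.5, 0.5, 0.5],
              slope=(1.1, 1.0, 0.9),
              offset=(0.02, 0.0, -0.01),
              power=(1.0, 1.0, 1.05),
              sat=0.9))
```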

I think this makes a lot of sense for larger projects. It’s probably too complicated for the types of things I’m shooting, but I like the intent to give the cinematographer as much creative control as they want over the image while shooting, without causing headaches for post down the line.


On 3/8/2021 at 12:50 AM, Raymond Zananiri said:

The best approach in Resolve is to use the Color Space Transform effect and select gamut and luminance mapping. You can apply that in an early node and you are guaranteed nothing will be lost afterwards. 

I really hope not; they are not the same thing. CST is basically a 1,1,1 LUT for 3D and a 10 LUT for 1D, while 3D LUTs can be any resolution: you can have 128x128x128 with over 2 million individual coordinates in the color space that can all be independently adjusted. 3D LUTs are far more powerful. CST matches color space primaries, but just because pure red, green, and blue are matched (i.e., three edges of the cube are matched) doesn't mean the inside of the cube is matched. The true transform that maps one camera perfectly to another is non-linear, so CST is ruled out because it can only do linear operations. The goal is to get your approximation as close to the true transform as possible. That's literally the definition of a data scientist's job; this issue is statistics, not really color grading. Let's say you take some camera samples to a data scientist and they run a sequential neural network to approximate every non-linearity in the color space transform. Their trained model will take pixel coordinate inputs from the first camera and output where each coordinate should be transformed to match the second. But you don't want to pass images through, because millions of samples would take forever to process. So you pass an identity LUT through instead, like a 33x33x33 with only around 36,000 samples. Also, the neural network can be normalized from 0-1, so no clipping. This is going to be the future of color processing and how to solve that problem. We just have to bring the right people to the table; a colorist won't be able to do this.
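
A sketch of that identity-LUT trick, with a simple stand-in function where the trained model would go:

```python
import numpy as np

# Build the ~36k lattice points of a 33x33x33 identity LUT.
N = 33
axis = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
identity = np.stack([r, g, b], axis=-1).reshape(-1, 3)  # shape (35937, 3)

def camera_match(rgb):
    # Stand-in for a learned camera-A-to-camera-B transform; any nonlinear map works.
    return rgb ** np.array([0.95, 1.0, 1.05]) * np.array([1.02, 1.0, 0.98])

baked = camera_match(identity)  # this output table IS the 3D LUT for the transform
print(baked.shape)              # (35937, 3): one corrected RGB triple per lattice point
```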


On 3/9/2021 at 7:23 AM, Ryan Emanuel said:

I really hope not; they are not the same thing. CST is basically a 1,1,1 LUT for 3D and a 10 LUT for 1D, while 3D LUTs can be any resolution: you can have 128x128x128 with over 2 million individual coordinates in the color space that can all be independently adjusted. 3D LUTs are far more powerful.

That's very interesting, because almost everyone now prefers CST to 3D LUTs in Resolve, I guess due to the convenience of the luminance and gamut mapping features. I'm talking, of course, about the basic camera-specific to Rec.709 transforms. Custom LUTs are a different story.

Are you then asserting that applying a Log C to Rec.709 LUT by right-clicking on the node in Resolve should give better results than using the CST FX function? Perhaps I need to do some tests.


On 3/10/2021 at 5:12 PM, Raymond Zananiri said:

That's very interesting, because almost everyone now prefers CST to 3D LUTs in Resolve, I guess due to the convenience of the luminance and gamut mapping features. I'm talking, of course, about the basic camera-specific to Rec.709 transforms. Custom LUTs are a different story.

Are you then asserting that applying a Log C to Rec.709 LUT by right-clicking on the node in Resolve should give better results than using the CST FX function? Perhaps I need to do some tests.

Luminance and gamut mapping is just normalization: take every value, divide by the max of the old range, and multiply by the max of the new range. You can do this with LUTs; Resolve just doesn't allow you to, to encourage more dependency on the software (my opinion). That's why DaVinci isn't the best software to work with LUTs: it can really only apply them; you can't troubleshoot, qualify, or normalize LUTs in DaVinci. Also, the ARRI Rec.709 LUT is certainly different from a CST Rec.709 gamma and color space conversion. Not necessarily better, just different looks made in completely different ways.
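
That normalization is a one-liner; a trivial sketch:

```python
def remap(values, old_max, new_max):
    # Divide by the old range's max, multiply by the new range's max.
    return [v / old_max * new_max for v in values]

print(remap([0, 512, 1023], old_max=1023, new_max=1.0))  # 10-bit code values -> 0..1
```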

On 3/2/2021 at 6:58 PM, Satsuki Murashige said:

In a nutshell, this is what I like about film capture: 

5203 50D, 40mm Cooke Speed Panchro.

Lovely frames, Sat! I know everyone loves the grain of 500T these days, but it's the slower, fine-grained stocks that have always held a special place in my heart. Vision3 50D is just so pretty.

