
It's 2021 and digital capture still looks like sh


Karim D. Ghantous


1 hour ago, Ryan Emanuel said:

When you say digital can't match film, you are really saying there does not exist a function that can transform a digital camera's RGB color space into film's color space, and that it can't even be approximated. You are saying that there is nothing in computational geometry, supervised machine learning, or vector field interpolation that can solve the problem. That's probably not true; harder problems have been solved or approximated. A data scientist would probably say this problem is a piece of cake; DPs just need to talk to the right people.

It is not about taking a digital image and trying to add a varying amount of "film look" over it. The whole point of the "film look" is having a more or less imperfect image and then appreciating it as is, or even enhancing the effect. The whole idea of digital image capture is to make as real a representation of reality as possible, and adding imperfections over a digital image just makes it a crappy digital image. It is like looking at reality through a dirty, dusty window, trying to see what is going on outside. Most of the audience would rather have the window cleaned so that they could just see better what is going on...

Personally, I only add "film grain" or other imperfections to a digital image if there is VFX or other stuff which needs to be hidden so that the audience is not distracted by it. For that use, the "film emulation" works pretty well and helps sell the VFX much better, so that the audience can concentrate on the story. Otherwise, I would rather appreciate both image capturing methods as they are, trying to take advantage of their differences rather than trying to hide them (trying to hide the differences just leads to an end result that is mediocre and dull, half something and half nothing, like a raw image which has not been graded yet).


46 minutes ago, aapo lettinen said:

It is not about taking a digital image and trying to add a varying amount of "film look" over it. The whole point of the "film look" is having a more or less imperfect image and then appreciating it as is, or even enhancing the effect. The whole idea of digital image capture is to make as real a representation of reality as possible, and adding imperfections over a digital image just makes it a crappy digital image. It is like looking at reality through a dirty, dusty window, trying to see what is going on outside. Most of the audience would rather have the window cleaned so that they could just see better what is going on...

Personally, I only add "film grain" or other imperfections to a digital image if there is VFX or other stuff which needs to be hidden so that the audience is not distracted by it. For that use, the "film emulation" works pretty well and helps sell the VFX much better, so that the audience can concentrate on the story. Otherwise, I would rather appreciate both image capturing methods as they are, trying to take advantage of their differences rather than trying to hide them (trying to hide the differences just leads to an end result that is mediocre and dull, half something and half nothing, like a raw image which has not been graded yet).

A lot of issues get conflated: there is color reproduction, noise structure, halation artifacts, and a few other pertinent classifications for a look match. They all need to be discussed separately, since the challenges are very different; when they get muddied together, it's hard to make sense of anything. It sounds like what you are talking about is noise structure, not color reproduction.

Having one noise structure and layering another on top won't make a match. You need to take one noise structure and transform it to approximate another. Noise is basically errors, and the distribution function that describes the variance of those errors can be transformed just like color spaces. Denoising is mapping the variance close to zero, but the errors can also be mapped to another distribution. If you add grain in software, it's probably just adding normally distributed noise, but film might not have a normal distribution for its errors; it might have a differently shaped distribution function. So you need someone who understands statistics, distribution functions, and signal processing to make the match. Again, a colorist can't do that, and you don't know how FilmConvert approximated the distribution; they could be right, they could be wrong. If you want to know it's right, a data scientist or a signal processing developer can help.

The democratization of all looks is coming whether we like it or not. Any look will be available on any camera that captures enough accurate data, once the right technicians are brought into the conversation.
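To make the distribution idea concrete, here is a minimal sketch of that kind of remapping via the probability integral transform (quantile mapping). This is my own toy illustration, not anything from FilmConvert: the lognormal target is an arbitrary stand-in for a measured film-grain distribution.

```python
# Quantile mapping: push measured "digital" noise through the inverse CDF
# of a hypothetical film-grain distribution. All numbers are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for per-pixel noise measured off a digital sensor: ~Gaussian.
digital_noise = rng.normal(loc=0.0, scale=0.02, size=100_000)

# Fit the source distribution, map each sample to its quantile, then pull
# that quantile through the inverse CDF of the assumed target shape.
src = stats.norm(*stats.norm.fit(digital_noise))
target = stats.lognorm(s=0.5, scale=0.02)          # assumed 'film-like' shape

quantiles = src.cdf(digital_noise)
film_like = target.ppf(quantiles) - target.mean()  # re-centered at zero

# The remapped noise keeps its ordering but takes on the target's skew.
print(stats.skew(digital_noise), stats.skew(film_like))
```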


2 hours ago, aapo lettinen said:

It is not about taking a digital image and trying to add a varying amount of "film look" over it. The whole point of the "film look" is having a more or less imperfect image and then appreciating it as is, or even enhancing the effect. The whole idea of digital image capture is to make as real a representation of reality as possible, and adding imperfections over a digital image just makes it a crappy digital image. It is like looking at reality through a dirty, dusty window, trying to see what is going on outside. Most of the audience would rather have the window cleaned so that they could just see better what is going on...

Personally, I only add "film grain" or other imperfections to a digital image if there is VFX or other stuff which needs to be hidden so that the audience is not distracted by it. For that use, the "film emulation" works pretty well and helps sell the VFX much better, so that the audience can concentrate on the story. Otherwise, I would rather appreciate both image capturing methods as they are, trying to take advantage of their differences rather than trying to hide them (trying to hide the differences just leads to an end result that is mediocre and dull, half something and half nothing, like a raw image which has not been graded yet).

 

They need to make digital film grain organic, like real film. The grain filters apply it all over, like a screen.

Here is an example I did of digital grain. But most of the time I don't like adding that much grain.

Steeplechase : Daniel D. Teoli Jr.. : Free Download, Borrow, and Streaming : Internet Archive

The issue with digital blacks and deep shadows is that digital tries to make sense of them, and it looks muddy. They should have a setting on digital cameras to just drop out the blacks and stop trying to make sense of them, or at least allow adjustment to drop the blacks down in varying degrees.

Digital is much sharper than film when comparing apples to apples. Digital also needs to be dumbed down in sharpness, and then it may look more like film.

I did extensive film vs. digital tests with still cameras: 35mm negative film, flatbed scanned, comes out to about what a 3 or 4 MP P&S camera delivers. Sadly, Tumblr banned me in 2019 and deleted all 48 of my websites, the digital vs. film test website among them.

On the forums you always find the die-hard film people: the 'never digital' ones who say they will give up shooting if they ever stop making film. I am torn between the two mediums myself. My own shooting is all digital now, yet I work with film daily in my archival work.

Film never fails to excite me. I found a 16mm reel last week, The Initiation, an old 'nudie cutie' from 1924. I hope to get it scanned for you within a few weeks. But I guess I could get just as excited looking through some found hard drive. The only difference is that the hard drive probably won't be in good shape if it sat somewhere for 80 years.

Here is one shot at high ISO. If you blow it up, it has some nice-looking grain structure that is native to the sensor.

The Lucky Chops - Little Dicky : Daniel D. Teoli Jr : Free Download, Borrow, and Streaming : Internet Archive

But you won't get this look with moving pictures unless you post-process each frame, or have a computer with AI that can match each frame to the previous one. There was lots of post-processing on this one. I shot wide open at f/1.4. No added grain, though, just contrast grading.

Edited by Daniel D. Teoli Jr.

6 hours ago, Ryan Emanuel said:

technology is allowing further and further transformation of color spaces,

Digital technology is already capable of accurately reproducing film color spaces; if it wasn’t, film scanners would be useless. Steve Yedlin, ASC (among others) has devised and demonstrated his approach for transforming digital primaries to match film. He has created gamma tables to match film. He has written his own randomized adaptive grain algorithms. His work is extremely impressive.

6 hours ago, Ryan Emanuel said:

When you say digital can't match film, you are really saying there does not exist a function that can transform a digital camera's RGB color space into film's color space, and that it can't even be approximated.

No, what they are saying is that they refuse to believe that it’s possible, that you will not convince them, that no matter how mathematically perfect the emulation is, they will somehow still be able to tell the difference.


4 hours ago, Ryan Emanuel said:

It sounds like what you are talking about is noise structure, not color reproduction.

Yes, I am mainly talking about the noise structure and other imperfections like micro dust, slight mechanical instability, and a different MTF response to the finer details. Everything else, like color reproduction, goes unnoticed by 99.9% of the audience, and they will forget about it in 30 seconds, so it is not the main area of interest when making any comparison between the formats.

Additionally, like Stuart said, the color reproduction of current digital cameras is fine. If there is anything fundamentally wrong with the colors, then it has been an intentional choice by the filmmakers, or alternatively they cheaped out on the budget somewhere.


3 minutes ago, aapo lettinen said:

Yes, I am mainly talking about the noise structure and other imperfections like micro dust, slight mechanical instability, and a different MTF response to the finer details. Everything else, like color reproduction, goes unnoticed by 99.9% of the audience, and they will forget about it in 30 seconds, so it is not the main area of interest when making any comparison between the formats.

Additionally, like Stuart said, the color reproduction of current digital cameras is fine. If there is anything fundamentally wrong with the colors, then it has been an intentional choice by the filmmakers, or alternatively they cheaped out on the budget somewhere.

I guess my question is: why are people saying it can't be done? Wouldn't it benefit everybody if it could? Like I was saying, we should bring the problem to the people who can solve it. If you want unique spatial fidelity responses at different levels of detail for different texture patterns, that sounds like a convolutional neural network problem; a deep learning data scientist can probably solve it. The hard part is matching pixel responses for sample images. If you could match the pixels even for a small region of the chip on the two formats, a well-constructed CNN would do the rest (see the sketch just below). People are going to try stuff like that; my hope is that there are cinematographers there.
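For what it's worth, here is a bare-bones sketch of the kind of convolutional model I mean, in PyTorch. Everything in it (layer sizes, the residual design, the L1 loss, the random stand-in patches) is an assumption for illustration; a real grain-transfer model would need spatially aligned digital/film training pairs and far more engineering.

```python
# A tiny residual CNN that would learn a digital -> film patch mapping,
# given aligned training pairs. Shapes and loss are illustrative only.
import torch
import torch.nn as nn

class GrainTransfer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict a residual, so the network only learns the *difference*
        # between the digital patch and the film patch.
        return x + self.net(x)

model = GrainTransfer()
digital_patch = torch.rand(1, 3, 64, 64)   # stand-in for an aligned pair
film_patch = torch.rand(1, 3, 64, 64)
loss = nn.functional.l1_loss(model(digital_patch), film_patch)
loss.backward()                            # one training step's gradients
print(loss.item())
```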

Personally, I don't know that color goes unnoticed; I think that's a pretty big component, and still pretty difficult to transform, really. You either need to automate thousands of color samples on both formats that span the entirety of the space and interpolate, or you work with a sparse data set and extrapolate anything outside the sample hull. That part is tricky: it's one thing to linearly interpolate when you have sample targets near every possible point, but when you don't, the chances of a linear extrapolation getting close to the true values are pretty low. So you need some non-linear extrapolation algorithm that matches the generalized trend of how each color changes along its hue/saturation vector. That's a pretty iterative process; you get a lot wrong before you get anything right.
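As a toy example of the sparse-sample approach, a smooth non-linear interpolant such as a thin-plate-spline RBF can be fit to a handful of matched patch pairs. The sample values below are fabricated placeholders, not real film measurements.

```python
# Fit a smooth non-linear RGB -> RGB map from sparse matched patch pairs.
import numpy as np
from scipy.interpolate import RBFInterpolator

# (N, 3) digital patch readings and corresponding film scan readings
# (fabricated values for illustration).
digital_rgb = np.array([[0.1, 0.1, 0.1], [0.9, 0.1, 0.1],
                        [0.1, 0.9, 0.1], [0.1, 0.1, 0.9],
                        [0.5, 0.5, 0.5], [0.9, 0.9, 0.9]])
film_rgb    = np.array([[0.12, 0.10, 0.09], [0.80, 0.15, 0.12],
                        [0.15, 0.82, 0.14], [0.12, 0.13, 0.78],
                        [0.52, 0.50, 0.47], [0.88, 0.86, 0.82]])

# Thin-plate-spline RBF gives a smooth non-linear interpolant; a little
# smoothing tames wild behavior outside the convex hull of the samples.
transform = RBFInterpolator(digital_rgb, film_rgb,
                            kernel='thin_plate_spline', smoothing=1e-4)

print(transform(np.array([[0.3, 0.6, 0.2]])))  # estimate for an unseen color
```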


15 hours ago, Stuart Brereton said:

It’s funny that we never hear complaints about the problems that film has with underexposure detail.

This is not news, and we've always known that digital RAW files are better at this. Film users would love it if film had a couple more stops of DR. It doesn't matter where those stops are, either. Technically, the Monstro has more DR than film, but it too fails with light sources, IIRC.

7 hours ago, Ryan Emanuel said:

A data scientist would probably say this problem is a piece of cake; DPs just need to talk to the right people.

You don't need a data scientist. You just create a LUT. Fuji has film profiles built into their cameras. So there's no problem as far as colour goes.


2 hours ago, Stuart Brereton said:

Digital technology is already capable of accurately reproducing film color spaces; if it wasn’t, film scanners would be useless. Steve Yedlin, ASC (among others) has devised and demonstrated his approach for transforming digital primaries to match film. He has created gamma tables to match film. He has written his own randomized adaptive grain algorithms. His work is extremely impressive.

 

 

I have always had issues with Yedlin's work. Yes, what he did was impressive, but to what end? He won't share the workflow or his computer programming with others. He proved it is possible to match digital and film captures in his controlled examples. He did not prove it is easy in every circumstance. In the end, I found no point in his work. Perhaps, for his personal style and workflow, it helps him be a better cinematographer.

I fight to shoot on film because I can't do the same. I can't personally make digital look like film. Perhaps if he released his work to the masses, film would die. Who can tell what would happen?

I assume the YouTube crowd would quickly make useless videos with the technology, but professional cinematographers would continue using their own workflows, film or digital. I would.

 


54 minutes ago, Karim D. Ghantous said:

You don't need a data scientist. You just create a LUT. Fuji has film profiles built into their cameras. So there's no problem as far as colour goes.

It depends. If you can't do your taxes in the software, it's not the ideal place to make a lookup table.


On 2/28/2021 at 3:51 PM, Robin R Probyn said:

add a bit of Walter Mitty syndrome

You love that phrase. It's an attack on people who have ACTUAL aspirations outside of money, people who put their heart and soul into projects they create for no financial gain. Not "hired guns" but actual creatives, which is the pure essence of filmmaking.

On 2/28/2021 at 3:51 PM, Robin R Probyn said:

and unfortunately that's what's happening on this once great forum...

Reminiscing about something that really wasn't what you remember it to be.

On 2/28/2021 at 3:51 PM, Robin R Probyn said:

"Eventually those comments will overrun the forum, that will be a great day."    overrun ? yikes ..

I mean, young people are the future, and many more of them are interested in this subject than older people.


2 hours ago, Jay Young said:

He won't share the workflow or his computer programming with others.

I don't think that anyone owes us anything. It's not that hard to make a LUT. We don't need Yedlin to make it for us. Even then, a LUT is not going to solve the light source problem.

I like sharing, although I will keep one or two small secrets to myself for the longer term. If I can come up with something to mitigate the light source problem, I guarantee you I will be rushing back here to share it.

A few years ago, someone on the Red forum said that the LUT is not the problem; it's the characteristics of movement that are the hard thing to match, if that makes any sense. Maybe that can be solved, too - I have an idea about that, but I can't share it unless I try it for myself first. I don't shoot a lot of video, so don't hold your breath!

1 hour ago, Ryan Emanuel said:

It depends. If you can't do your taxes in the software, it's not the ideal place to make a lookup table.

I am not very knowledgeable about cinema software, but Baselight and Resolve will let you create a LUT. You just need a colour chart, plus a few frames of film scanned properly, and boom, you're off. Just search for "how to make a LUT in x/y/z".
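For the curious, the simplest version of the chart-based match is just a least-squares fit of an affine color matrix to the patch pairs, something like the hypothetical sketch below (made-up data standing in for real chart readings). As Ryan points out later in the thread, a linear fit like this can't capture the non-linear parts of a film transform, which is its built-in limitation.

```python
# Affine 3x4 color matrix fit by least squares to matched chart patches.
import numpy as np

rng = np.random.default_rng(1)
digital = rng.random((24, 3))                     # stand-in chart readings
film = np.clip(digital ** 1.1 + 0.02 * rng.random((24, 3)), 0, 1)

A = np.hstack([digital, np.ones((24, 1))])        # add a bias column
M, *_ = np.linalg.lstsq(A, film, rcond=None)      # (4, 3) affine transform

matched = A @ M                                   # apply to the chart itself
print(np.abs(matched - film).mean())              # residual fit error
```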

Again, nobody needs an ASC DP or a film school to tell them how to get what they want. Forget all that. You can do a lot, either by yourself or with a group of casual but dedicated collaborators on web forums.

Not that long ago, most people made their own clothes from patterns. Before that, some people built their own homes from logs. If 1940s Westerners with no TV, air conditioning, or advanced communications could make their own clothes, **(obscenity removed)** hell, we can make our own LUTs.

TELL ME I'M WRONG.


2 hours ago, Karim D. Ghantous said:

TELL ME I'M WRONG.

Anyone can make their own LUTs in Resolve. I'm quite fond of this workflow for previewing customized Rec.709 looks in-camera, and have recommended it before.

But you cannot accurately recreate the look of celluloid capture with just a LUT. LUTs can only change the colors that were captured. They cannot add detail that was not captured (like clipped color channels or color depth in the shadows), nor can they reproduce layered effects (like anti-halation backing failure on high contrast edges or variable film grain). 

This is why Mr. Yedlin’s ‘secret sauce’ includes compositing layers.


6 hours ago, Karim D. Ghantous said:

This is not news, and we've always known that digital RAW files are better at this. Film users would love it if film had a couple more stops of DR.

Clipped highlights with digital are not news either, but you saw fit to start a thread about it. You've just illustrated my point for me: film's shortcomings are ignored, while digital is constantly complained about in order to make unfavorable comparisons.


14 hours ago, Jay Young said:

I have always had issues with Yedlin's work. Yes, what he did was impressive, but to what end? He won't share the workflow or his computer programming with others. In the end, I found no point in his work.

 

The point of his work was to provide himself with a workflow that he felt recreated the look of film. As he has used it professionally on films like Knives Out, I guess he feels he succeeded. Why should the fact that it’s a proprietary process mean that it doesn’t count?



In a nutshell, this is what I like about film capture: 

5203 50D, 40mm Cooke Speed Panchro.

[photos attached]

5219 500T, Atlas Orion 40mm anamorphic.

[photos attached]

5219 500T, Cooke S4 Mini 32mm, 50mm, 75mm.

[photos attached]

Can you get the same look with a digital camera? Near enough, as long as you capture enough data and spend enough time pushing it around in post. Personally, I would rather just shoot on film if I wanted this look, though (well, I guess I did)...

 


1 hour ago, Satsuki Murashige said:

But you cannot accurately recreate the look of celluloid capture with just a LUT. LUTs can only change the colors that were captured. They cannot add detail that was not captured (like clipped color channels or color depth in the shadows), nor can they reproduce layered effects (like anti-halation backing failure on high contrast edges or variable film grain). 

This is why Mr. Yedlin’s ‘secret sauce’ includes compositing layers.

I'm pretty sure you can emulate halation. At least in Baselight you can.

1 hour ago, Stuart Brereton said:

Clipped highlights with digital are not news either, but you saw fit to start a thread about it.

Over 20 years of progress, and we still can't manage to capture light sources properly, even in daylight. Even Sony hasn't done it yet. That's the point of the thread.

29 minutes ago, Satsuki Murashige said:

In a nutshell, this is what I like about film capture: 

5203 50D, 40mm Cooke Speed Panchro.

5219 500T, Atlas Orion 40mm anamorphic.

5219 500T, Cooke S4 Mini 32mm, 50mm, 75mm.

Can you get the same look with a digital camera? Near enough, as long as you capture enough data and spend enough time pushing it around in post. Personally, I would rather just shoot on film if I wanted this look, though (well, I guess I did)...

 

The wedding shots were not to my taste. The second two were okay, but the lens ruined them for me. The final ones were very nice indeed.


OK. Can we get more specific?

I watched this test a long time ago (Alexa Mini vs 5219). It is not really well done, but starting at 2:30 we get to what interests me the most: hard, cross light on the face. Please tell me how I could get the face of the girl to look like it does at 2:53 and 3:25 with any digital camera?

I am not being rhetorical at all. I would really like to get the same results with an Alexa, Venice, Red, or any other modern digital camera. I really don't know how. That sheen and dimensionality and sensuality on the face of the girl ends up flat with digital.

I don't ever want to shoot on film. It is incredibly expensive. I just want to get that specific quality with digital: a very gradual shift from one tonal value to the next, and to the next. Is that something that could be done in post?

Please help!!


3 hours ago, Karim D. Ghantous said:

Over 20 years of progress, and we still can't manage to capture light sources properly, even in daylight. Even Sony hasn't done it yet. That's the point of the thread.

And over 100 years of film, and it still can’t capture shadow detail properly. You have a huge double standard here.


6 hours ago, Raymond Zananiri said:

Please tell me how I could get the face of the girl to look like it does at 2:53 and 3:25 with any digital camera?

Raymond, looking at the film shot at 3:25 and the equivalent Alexa shot at 3:04, I'd say that the Alexa has the flat look of highlights that have clipped and been brought down to compensate. There's a lack of detail in there, and the slightly gray look that highlights take on when they've been pulled back too far. It would have been useful to see the LogC gamma version of this shot, to see if it was clipped there too.

It's common knowledge that digital cameras clip after peak white, rather than rolling off smoothly as film does. Not all digital cameras are created equal, though, and cameras like the GH5 (which prompted this thread) and the Arri Alexa have different capabilities.

The short answer to your question is that you need to expose for your highlights. Shoot with a LOG gamma. Make sure nothing is clipping. If that means your shadows go too dark, add fill. If you can't add fill, then at least you know you have the ability to lift your shadows in post to a far greater degree than you would with film.


13 hours ago, Jay Young said:

 

I have always had issues with Yedlin's work. Yes, what he did was impressive, but to what end? He won't share the workflow or his computer programming with others. He proved it is possible to match digital and film captures in his controlled examples. He did not prove it is easy in every circumstance. In the end, I found no point in his work. Perhaps, for his personal style and workflow, it helps him be a better cinematographer.

 

 

His old algorithms are on GitHub. DaVinci, in its newest iteration, copied some of the ideas.


11 hours ago, Karim D. Ghantous said:

I am not very knowledgeable about cinema software, but Baselight and Resolve will let you create a LUT. You just need a colour chart, plus a few frames of film scanned properly, and boom, you're off. Just search for "how to make a LUT in x/y/z".

Again, nobody needs an ASC DP or a film school to tell them how to get what they want. Forget all that. You can do a lot, either by yourself or with a group of casual but dedicated collaborators on web forums.

I would say there are literally zero places, whether online or in film schools (other than AFI), that teach filmmakers what a lookup table actually is. A LUT stores six columns of tens or hundreds of thousands of rows of coordinate data in a 2D list (hence the taxes joke). There are three input columns hidden in the indices, and three output columns that form the 2D list. Three ins and three outs is basically a vector field, so any algorithm that applies to vector fields can be used on LUTs and color spaces. Most of the tools in DaVinci are linear operations, while color matches are non-linear functions. The color chart matching tools in software are usually projection or least-squares linear regression algorithms; they are still linear. Linear will always have significant error, since the true function for the color space transform is non-linear. If we want to solve some of these problems, we have to choose the right variables to work with.
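To illustrate the "3 ins, 3 outs" structure, here is a minimal sketch of a 3D LUT as a lattice of output RGB triples indexed by input RGB, applied to a pixel by trilinear interpolation. The power-curve "look" is made up purely for the example; it stands in for a real .cube file.

```python
# A toy 3D LUT and its application by trilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 33                                     # standard cube resolution
axis = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(axis, axis, axis, indexing='ij')
lut = np.stack([r, g, b], axis=-1) ** 1.2  # (33, 33, 33, 3): made-up 'look'

# One trilinear interpolator per output channel.
channels = [RegularGridInterpolator((axis, axis, axis), lut[..., c])
            for c in range(3)]

pixel = np.array([[0.25, 0.50, 0.75]])     # one input RGB sample
out = np.stack([f(pixel) for f in channels], axis=-1)
print(out)                                 # the transformed RGB value
```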

Take the halation thing. Highlight roll-off and halation are not the same variable. Halation is the luminance artifact that occurs at very high contrast edges; it does not depend on highlight roll-off in the color space, so LUTs can't help you there. Edges are spatially dependent, so you need to convolve, i.e. filter, the effect spatially. You transform the image into the gradient domain, where each pixel value becomes the difference in brightness between that pixel and its eight surrounding neighbors. On high contrast edges that difference will be close to one; for low contrast pixels it will be close to zero. You have to run some tests to see how large the halation is with respect to a point source, and find a function that approximates the shape and style of the halation: if the point source has a 30 pixel diameter in the shot, is the halation diameter 60 pixels or 120 pixels? Is it a circle or a star? Once you have that, activate the halation approximation at high values in the gradient domain (a rough sketch follows below). You can take this paragraph to a developer and they will make you a halation algorithm, as long as you have the budget for the film samples. You have to have the right variables that the problem depends on, though: if you tell the developer the halation comes from the lens, any algorithm they write is going to be wrong. If a DP doesn't want to dive into these rabbit holes, that's all good; literally zero careers depend on this stuff. But if we want our craft to progress in the right way, we have to be careful about which variables we say an effect depends on.
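Here is that recipe as a rough Python sketch: flag bright pixels on hard edges via the gradient magnitude, spread them with a wide blur, and add the glow back with a reddish tint. Every constant in it (threshold, edge cutoff, radius, strength, tint) is a guess that would have to be tuned against real film samples.

```python
# Gradient-domain halation sketch: bright pixels on hard edges get a wide,
# warm-tinted glow added back over the frame.
import numpy as np
from scipy import ndimage

def add_halation(img, threshold=0.85, edge_cut=0.05, radius=15.0,
                 strength=0.4):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    # Gradient magnitude flags the high-contrast edges the effect clings to.
    gy, gx = np.gradient(luma)
    edges = np.hypot(gx, gy)
    # Halation source: how far above threshold a pixel is, gated by edges.
    source = np.clip(luma - threshold, 0.0, None) * (edges > edge_cut)
    glow = ndimage.gaussian_filter(source, sigma=radius)
    tint = np.array([1.0, 0.35, 0.1])        # assumed reddish-orange color
    return np.clip(img + strength * glow[..., None] * tint, 0.0, 1.0)

frame = np.random.rand(270, 480, 3).astype(np.float32)  # stand-in frame
print(add_halation(frame).shape)
```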


This is what a LUT actually is (sorry for the low res). It's a vector field: each arrow represents the transform for a coordinate in the color space volume. The LUT here is the Arri Rec.709 33x LUT, subsampled down by a factor of 50 so you can actually see the individual vectors. It would be hard to imagine a colorist capturing all those vector directions with color grading tools, even subsampled, for the entire space. I just don't know why there isn't post software that helps you visualize LUTs, to see where the color is "flowing" in different segmentations of the LUT. Visualizations like this (in higher res and rotatable, of course) would make the vector field transforms more intuitive. You can also see from the perspective of the still that some of the trends in how the vectors change through the space are curves, not straight lines, so linear operations won't capture the complexity.

 

[attached image: download (2).png]
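For anyone who wants to draw a plot like this themselves, a rough sketch along the following lines will render a subsampled LUT as a field of arrows. A made-up power-curve LUT stands in here for the real Arri cube.

```python
# Draw a subsampled 3D LUT as arrows from input color to output color.
import numpy as np
import matplotlib.pyplot as plt

n = 7                                        # heavily subsampled lattice
axis = np.linspace(0.0, 1.0, n)
r, g, b = np.meshgrid(axis, axis, axis, indexing='ij')
out = np.stack([r, g, b], axis=-1) ** 1.2    # made-up stand-in 'look' LUT

u = out[..., 0] - r                          # per-entry displacement
v = out[..., 1] - g
w = out[..., 2] - b

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.quiver(r, g, b, u, v, w)
ax.set_xlabel('R in'); ax.set_ylabel('G in'); ax.set_zlabel('B in')
plt.show()
```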

Edited by Ryan Emanuel
