
Film and skin tones


Nicolas Courdouan


I have always heard that as far as skin tones were concerned, film had a noticeable advantage over digital.

 

Unfortunately (for me), I cannot tell the difference. I can see variations in the overall rendition of colours depending on the film stock or camera used, but nothing that makes the skin tones particularly stand out on one or the other.

 

I don't know if this is because my eye is not trained well enough, or because the amount of grading and post-production that images undergo nowadays is enough to nullify that advantage, but I'm still curious:

 

- What does one mean when they say that film has a "better rendition" of skin tones than digital?

 

- What property or properties of film allow this? Can it be explained scientifically?


I'll use 'still 35mm' Film film vs Digital film to suggest some differences. To some observers, there is a 'greenish' emphasis of Fuji film vs Kodak, Kodak having perhaps a 'yellowish' emphasis. I use 'emphasis' rather than 'cast', and perhaps this is an almost subliminal evaluation.

 

To be sure, most color casts can be 'dialed' out, but even when matching one skin area to another, there would be other areas that didn't get the same amount of light, and there would be subtle differences in the 'color'.

 

The same is true for digital sensors, and what criteria the engineers used for determining the sensor's response to color, and the resulting color gamut the camera yielded.

 

The Wife and I shot Kodak color still negative for years, but when we switched to Fuji, it was because the 'color' had a more 'natural/pleasing' effect.

 

When we finally bought into digital still 35mm cameras, we chose the Fuji DSLR S series. The color that camera yielded was 'better' for us than Nikon (which we had used for Film film for years... so Canon was not really an option as we would have had to buy all new lenses...).

 

Even so, at the time Canons produced a 'yellowish' cast and Nikons produced a sort of 'violet'-tinged cast.

 

Even now, having participated in a couple of 'Canon' shoots, I'm still left with the 'yellow' tinge impression... perhaps I'm more sensitive to yellows... but the Wife's Nikon D600 does not have that 'violet' tinge I saw years ago. (I've not been allowed to test it much with a color chart, as I'm too 'careless' with equipment... but I digress...).

 

When Fuji and Kodak were both making Film film stock, one could see differences between the two, and while all the material was graded or printed to various standards, there were still differences.

 

Again, the same is true for high end Digital Film cameras, from one to the other, as well as when compared to Film film.

 

Within broad limits I have no problem with most 'digital' original captured Hollywood films. In other threads it has been mentioned that over the years Film film stocks have changed, and some cinematographers have lamented the 'changes'... but for better or worse change happens, and one needs to understand what has changed and use that to create images that support the story of the Film.

Edited by John E Clark

Thank you for your reply, John... I'm still a little confused. As expected, certain film stocks offer a better rendition than others when it comes to colours, especially the midtones, which is where human skin often falls.

 

I still have a hard time understanding why film stocks - overall - are said to offer better skin tone rendition than digital - overall too. And why people claiming this single out "skin tones" rather than something far more global and generic, i.e. the entire colour space. Is it that human skin has specific properties that lend themselves well to being captured on film but not on digital? Is it that film grain gives better texture to our skin?

 

Does that mean that skin tones captured on film are always more pleasing to the human eye, and if so, what makes them more pleasing? Or that they are always more faithful to the actual person's skin tone? But if so, why can't it be a matter of lighting, or, failing that, grading? Is it that they are always more/less yellow on film? More/less green?

 

What is, objectively, this better rendition of skin tones that everyone is talking about as if it were an obvious and glaring difference?

 

With the amount and variety of film stocks and digital cameras around, and the different results they all produce regarding colours, I just find it extremely curious that one would say that Film (again, in general - all film stocks) makes for a better choice when it comes to skin tones than Digital (all formats, all cameras). This is just mind-boggling to me. Especially, like I said before, with the amount of post-processing that goes into a motion picture nowadays, and the important part that lights, filters, and lenses play in the rendition of colours.

 

I mean, who can look at that still from Terminator Salvation (shot on 35mm) and say that the skin tones look better than they would have, had it been shot on digital? It all seems highly subjective to me, if not completely cryptic.

 

[Attached: still from Terminator Salvation]

Edited by Nicolas Courdouan

The 'differences' and the evaluation of 'better' lead to the philosophical debate on Film film vs. Digital film. Since I know there is a difference, accept that difference, and prefer 'digital' over 'film' for a number of reasons beyond the skin tone consideration, I stand in the digital camp.

 

I also strongly consider using monochrome, aka B&W, for my own narrative work. I may choose to go with color, but with B&W I don't have to worry so much about the ineffable sense of color relative to human skin tones (as well as specific colors of lights, for that matter... although there are effects on the resulting grey tones due to the colors of lights, most people are not so sensitive to such subtleties).

 

As a note, 'skin tone standards' often were of 'white' women, and relatively few 'standards', at least in the US, were developed for Latin/Mediterranean/Black 'skin tones'.


My impression (beyond the more pleasing textural qualities that film offers, which can be kinder to an actor's face, as it has some inherent painterly and diffusion-like qualities built in) has to do with the fact — and I'm assuming this is still a fact today — that film still captures more color information. And it is that "nuanced" color information that can capture the subtle 'color' of skin in a way that is more pleasing to the eye, because frankly, it is made up of more gradations of all the colors in flesh tones. It also feels like film can handle all the subtle mix of reds, blues, and yellows in skin at the same time, versus taking a dramatic shift in one direction or the other when you try to grade it.

When digital sensors first came out, there was certainly a kind of 'band aid' skin tone color. Almost as if beige had a few shades and that was it, with the sensor throwing out or not being able to capture the other subtle colors that make up a flesh tone. I feel digital has gotten much better, but in my opinion it still does not reproduce skin the way film does, and perhaps it never will.

I also feel there is a 'softness' to film, in that it can render a close-up of a face that is both sharp and soft at the same time. Some of that has to do with what was discussed at the beginning of this post, I believe, and it may also have to do with the fact that each frame of film has the particles of grain shifting to a different position, so the grain is not lingering and producing a hard 'edge', like, say, what a pixel in the same position may do.



An analogy might be why linoleum looks different from tile, or plastic imitation leather looks different from the real thing, even if most of the colors are the same. It is texture, nuanced color information, and 'the feel' of organic materials that, again, are hard to quantify.



Does this mean skin tones and talent always look horrible with digital and beautiful with film? Certainly not. And there is of course a subjective nature to this discussion, as there is an unquantifiable emotional response when viewing a frame shot on digital versus film.



I think if you are trying to tell a certain story, and you feel your eye is not trained enough to discern the differences above, shoot a test in both film and digital. Hang both of these tests up on a wall side by side (or next to each other on a monitor, I suppose), and ask yourself — and others whom you want to bring into the creative discussion — which one more accurately evokes the emotional response of the story you are trying to tell. Which one creates that atmosphere better. That is what it comes down to. And that is why an artist working in this medium should have both tools available to choose from, just as a painter can choose between oil or acrylic paint, or pastels or charcoal, etc. And that is why it matters that film isn't pushed out of the equation by accountants, stockholders, or other matters that have nothing to do with art. There should always be that choice. But this of course is just my opinion.



-T



Thank you both for your replies.

 

I was only asking this out of sheer curiosity, since I've been digging up some old threads on this forum and articles elsewhere, and came upon several statements from respected people claiming that film was noticeably better when it came to human skin tones, even though these same people were not particularly pro-film or pro-digital.


Skin tones are considered the guiding talking point because in the realm of human aesthetic interpretation of an image, an "off" skin tone can look jarring. We are constantly looking at the human subject, and therefore that is the benchmark. Because there are billions of extremely fine pieces of color information, there are billions of points of one color rolling into the next. In the case of film, these small particles can capture and reproduce that accurately because it's inherent to the design of celluloid. The smaller those particles, the more color information you can capture. With a digital sensor, you rely on photosites that not only must capture light and color information, they must also convert it into an electronic signal. Part of the reason Arri has stuck by 3K in their Alexa imager is because larger pixels mean more color information can be captured. This is also why the Alexa is the gold standard for digital in 2014. Their larger pixels can interpret that color more organically, and mimic film.

 

I hope that makes sense. It's a matter of physics more than anything else. As digital imaging advances, so will its ability to interpret color information, latitude, etc.

 

One crude example: take a photo of a beautiful sunset with your iPhone and a full-frame digital camera, and then take one on medium format slide film. Get the slide developed and hold it up to a lightbox. You'll notice way more color information, because MF slide film physically contains more particles to capture color and light.

Edited by Kenny N Suleimanagich


Larger photosites are more efficient at gathering light and thus improve sensitivity, but they have nothing to do with improving color -- if that were the case then a camera with only four giant photosites would have the best color in the world.

 

If anything, more photosites, just as with more grains (like with large format film), devoted to the same-sized area of complex color hues such as skin, would be more likely to resolve all of those subtle colors without just blending things into one shade.

 

Color accuracy in the Alexa is more due to the color dyes of the Bayer filter and the OLPF and the color science behind their conversion to RGB. It's not due to being 3K instead of 4K or 5K.
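
To put that 'color science' step in concrete terms, here is a minimal Python sketch of the 3x3 matrix stage that maps camera-native RGB (after demosaicing) into display RGB. The matrix values below are invented purely for illustration -- they are not Arri's actual coefficients -- but they show where color rendition is largely decided, independent of how many photosites the sensor has.

```python
# Minimal sketch of a camera-to-display color matrix (illustrative values only).
import numpy as np

# Hypothetical matrix; real coefficients come from characterizing the Bayer
# dye transmissions against a reference target, not from pixel count.
CAM_TO_RGB = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.25,  1.45, -0.20],
    [-0.05, -0.50,  1.55],
])  # each row sums to 1.0 so neutral greys stay neutral

def camera_to_display(cam_rgb):
    """Map linear camera-native RGB (..., 3) to linear display RGB."""
    out = np.asarray(cam_rgb, dtype=float) @ CAM_TO_RGB.T
    return np.clip(out, 0.0, 1.0)

# An 18% grey stays put; saturated colors get corrected for the overlap
# (crosstalk) between the dye responses.
print(camera_to_display([0.18, 0.18, 0.18]))
print(camera_to_display([0.60, 0.35, 0.25]))   # a rough skin-tone-ish patch
```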


While I don't disagree, my point was that photosites and color information work in tandem with one another. Advancements in digital imaging rely on color science and light-response working to produce a certain favorable effect. Color science, as you mention, has a large part to do with this. As the dynamic range of photosites improves, so too does the color rendition.

Edited by Kenny N Suleimanagich

I could take the position that nothing less than an 8x10 contact print from an 8x10 negative is the 'only true and accurate' representation of any scene, relative to photographic reproduction. All this clamor over a puny 35mm negative being processed through several interneg/interpos stages and then a print projected onto a 30x50 screen (or whatever) is just noise...

 

But each medium has various strengths. To be sure, until digital recording got to the 2K level there were some pretty obvious limitations, but as it has gone beyond that for high end cameras, first for stills and then for moving pictures, and as the 'intensity' resolution/bit depth has gone beyond 8 bits, things have changed dramatically.
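
To give a rough sense of what going 'beyond 8 bits' buys, the arithmetic is just powers of two (nothing camera-specific here):

```python
# Code values per channel grow exponentially with bit depth, so tonal and
# color gradations get correspondingly finer.
for bits in (8, 10, 12, 16):
    levels = 2 ** bits
    print(f"{bits:>2}-bit: {levels:>6} levels per channel, {levels ** 3:,} possible RGB triplets")
```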

 

I would also note that when digitizing for DVDs first started transferring 'old' films from 'original' negatives, all manner of 'defects' started to appear, even more so on Blu-ray transfers: makeup lines, etc. So what people were talking about as 'skin tones' was previously seen through a process that 'hid' some amount of detail.

 

And perhaps because a digital process is now expected even for movies captured on Film film, less makeup seems to be the trend, and so one is seeing 'real' skin tones and not pancake and powder.

Edited by John E Clark


While I don't disagree, my point was that photosites and color information work in tandem with one another. Advancements in digital imaging rely on color science and light-response working to produce a certain favorable effect. Color science, as you mention, has a large part to do with this. As the dynamic range of photosites improves, so too does the color rendition.

 

Not sure if I agree - photosites are monochrome after all. Color accuracy is more the result of the transmission of the dyes in the Bayer filter in front of the sensor, plus any corrections like for IR that the OLPF makes. As for a wider dynamic range creating better color rendition, I would think that would mainly just be true for colors that reach the crush and clip points, not for colors that don't extend much beyond the middle.

 

Art Adams has written about why he thinks people prefer the skin tones on the Alexa -- it's mainly how the Alexa color science deals with saturation in overexposed areas, which is to mimic more closely how film tends to desaturate colors as the subject gets overexposed.
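
To illustrate the idea (and only the idea -- this is emphatically not Arri's actual color science), here is a toy Python sketch that pulls chroma toward neutral as a pixel value approaches clip, so an overexposed patch rolls off toward white instead of holding a hard, saturated hue. The knee threshold and the Rec.709 luma weights are assumptions made just for the example.

```python
# Toy highlight-desaturation sketch, mimicking film's roll-off (illustrative only).
import numpy as np

def desaturate_highlights(rgb, knee=0.75):
    """Blend linear RGB toward its own luma as it rises above `knee` (arbitrary)."""
    rgb = np.asarray(rgb, dtype=float)
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])       # Rec.709 luma weights
    t = np.clip((luma - knee) / (1.0 - knee), 0.0, 1.0)   # 0 below knee, 1 at clip
    t = np.expand_dims(t, axis=-1)
    return rgb * (1.0 - t) + np.expand_dims(luma, axis=-1) * t

# A bright, warm value drifts toward neutral rather than clipping to a flat,
# saturated patch of color.
print(desaturate_highlights([0.95, 0.80, 0.70]))
```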


I've been digging up some old threads on this forum and articles elsewhere, and came upon several statements from respected people claiming that film was noticeably better when it came to human skin tones

I think there's your answer, it's an old thread. Digital cameras, like RED, originally had very odd looking skin tone rendition. It was often referred to as 'plasticky'. Thankfully, most high end cameras these days have addressed this issue.


 

Not sure if I agree - photosites are monochrome after all. Color accuracy is more the result of the transmission of the dyes in the Bayer filter in front of the sensor, plus any corrections like for IR that the OLPF makes. As for a wider dynamic range creating better color rendition, I would think that would mainly just be true for colors that reach the crush and clip points, not for colors that don't extend much beyond the middle.

 

Art Adams has written about why he thinks people prefer the skin tones on the Alexa -- it's mainly how the Alexa color science deals with saturation in overexposed areas, which is to mimic more closely how film tends to desaturate colors as the subject gets overexposed.

When I read what you write, I completely agree, so perhaps it's because I'm not articulating myself well enough. I hope the original poster got some useful information out of this discussion, nevertheless. Always a lot to learn on this forum!


I think there's your answer, it's an old thread. Digital cameras, like RED, originally had very odd looking skin tone rendition. It was often referred to as 'plasticky'. Thankfully, most high end cameras these days have addressed this issue.

 

But what caused the weird skin tones? The most recent Red footage I've seen still looks plasticky.



"Plasticky" is a somewhat separate issue from color -- the lack of grain/noise combined with things like clippier highlights and over smoothness from factors like compression can give skin a plastic quality (you can take footage shot on film and get that "plasticky" quality by overusing digital noise reduction, for example). Of course, skin color reduced to a monotone single shade can add to that effect, but that can be caused by too much make-up just as easily by digital factors.

 

I don't think current Red footage looks like that, particularly not stuff shot on the new Dragon sensor. But the clean grain-free digital look is there of course, and that contributes to some people's negative reaction. The thing is that film grain has the effect of making the eye see sharpness and detail that isn't really there, by adding a layer of sharp grain as a surface texture, which is why some people feel that footage shot on 50 ASA stock can seem softer than footage shot on 500 ASA stock.

 

As for the color response of the older digital cameras -- excluding the cameras which shoot 8-bit 4:2:0, which will always be problematic for color richness -- again, the issues are mainly the spectral response of the color dyes in the Bayer filter (RGB filters that are less narrow in what wavelengths they allow to pass will tend to increase exposure at the expense of crosstalk between colors, somewhat muting the colors and making it hard to capture pure colors) plus the color science behind the conversion of raw to RGB. Some of the earlier Bayer-filtered single-sensor cameras could not get good color saturation in each channel and had a lot of crosstalk, creating somewhat blander skin tones. And many of the 3-sensor RGB cameras before the single-sensor 35mm cameras hit the market, which in theory should have yielded better color, were reducing color information to 4:2:2, or worse, 4:1:1 or 4:2:0. And I'm sure some of them opted for less narrow-cut RGB filters in order to increase the amount of light hitting each sensor.
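
The crosstalk point is easy to demonstrate with made-up numbers. The sketch below models narrow-cut versus wide, overlapping color filters as simple mixing matrices (a gross simplification of real spectral response); a pure saturated red comes out noticeably muted through the overlapping set, which is roughly the 'blander' rendition described above. All values are invented for illustration.

```python
# Crude crosstalk illustration: overlapping filters mix channels and mute color.
import numpy as np

narrow_filters = np.array([   # little overlap between R, G, B responses
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])
wide_filters = np.array([     # heavy overlap: more light gathered, more crosstalk
    [0.60, 0.25, 0.15],
    [0.20, 0.60, 0.20],
    [0.15, 0.25, 0.60],
])

pure_red = np.array([1.0, 0.0, 0.0])
print("narrow:", narrow_filters @ pure_red)   # stays close to pure red
print("wide:  ", wide_filters @ pure_red)     # red leaks into G and B -> duller red
```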

 

Also, I find that "weird skin tones" are partially the result of decisions made in color-correction -- when you shoot film in white light and make a print, it's not hard to quickly achieve a good skin tone, because that's what the film stock was mainly designed to deliver in a print. Whereas when you digitally color-correct digital or film images, it often seems like you have to play with the colors to achieve what looks like good skin tones on the monitor; it's more of a subjective thing done by eye, and the electronic image doesn't just give you good skin tones as automatically as the neg-to-positive photochemical route does. At least, that's just my experience -- I find that in digital color-correction suites, we essentially have to "create" the skin tone color ("I think it has too much magenta now..."). It's an eyeball thing.

