
S Log for Dummies


Mayer Chalom


Hello all. I'm interested in learning more about S-Log, but I'm quite confused by the various information about what it is and its practical use.

 

Forgive me for being blunt, but what is S-Log? Is it a file format? A kind of color space?

 

In addition, how does S-Log relate to the post process with LUTs?

 

Thanks.


  • Premium Member

It's a color space; a very wide one, to try to get as much information off the sensor as possible; akin to "film" mode on a Blackmagic camera (when shooting in ProRes) or Log C on an Alexa.

It records a very washed-out image, flat and boring, to which you can apply a LUT (and you'd not want to monitor on set in S-Log).
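
To make "apply a LUT" concrete, here's a minimal sketch in Python of what a 1D LUT does to log-encoded pixel values; the three-point LUT here is made up purely for illustration (real LUTs have hundreds or thousands of entries, usually per channel):

```python
import numpy as np

# A made-up three-point 1D LUT, purely for illustration.
lut_in  = np.array([0.0, 0.5, 1.0])   # flat, washed-out log code values (normalized)
lut_out = np.array([0.0, 0.18, 1.0])  # contrasty display values (normalized)

def apply_lut(log_pixels):
    """Look each pixel up in the 1D LUT, interpolating between entries."""
    return np.interp(log_pixels, lut_in, lut_out)

flat = np.array([0.2, 0.45, 0.7, 0.9])  # some S-Log-ish pixel values
print(apply_lut(flat))  # shadows pushed down, highlights opened up
```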

 

It's always, in my opinion, a good idea to record in the log formats to keep as much of the image as possible from clipping (high or low).

 

That's a very basic overview.


I wouldn't call it a color space. That term is so overused/misused.

S-Log is just how some Sony cameras encode the linear signals from their sensors. The raw, or matrix-manipulated but still linear, RGB signals are massaged with a mathematical function for efficient encoding. The differences between Sony's S-Log, Arri's Log C, Canon Log, and Kodak's Cineon functions are mathematically puny.

 

To understand what this coding does it is best to think at first in monochrome: black & white photography. Suppose a good sensor captures 13 stops of luminance. How does the eye discriminate within those 13 stops? The very popular view -- but thoroughly discredited by Hecht in 1924 -- is that we see logarithmically, so we discriminate an equal number of steps of luminance, like 50, within each stop. So we would like to encode the sensor output in 13×50=650 values. This can be done in 10-bit if we allot an approximately equal number of values per stop. The simplest log encoding codes the bottom output of the sensor as 0; codes the output from 1/50 stop above bottom as 1; codes the output from 2/50 stop above bottom as 2, etc. Call this first step the perceptual discrimination encoding step.
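
A minimal sketch of that perceptual discrimination encoding step, assuming the 13 stops and 50 steps per stop from above (the sensor floor value is arbitrary):

```python
import math

STEPS_PER_STOP = 50   # discriminable steps per stop, per the text
FLOOR = 1.0           # sensor output at the bottom of its range (arbitrary units)

def encode(linear_value):
    """Code value = number of 1/50-stop steps above the sensor floor."""
    stops_above_floor = math.log2(linear_value / FLOOR)
    return round(stops_above_floor * STEPS_PER_STOP)

# 13 stops * 50 steps = 650 code values, which fits in 10 bits (1024).
print(encode(1.0))         # 0   (bottom of the range)
print(encode(2.0))         # 50  (one stop up)
print(encode(2.0 ** 13))   # 650 (top of a 13-stop range)
```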

 

Then at the other end of the image chain, when the 10-bit coding must be displayed, comes a pictorial aesthetic decoding step. The code values will yield luminances again, but it is extremely unlikely that the display mode will cover 13 stops. A B&W photo print covers less than 7 stops. A projected DCP in a very dark theatre is unlikely to have more than 9 stops of luminance within any frame. Yet there is the wish to hint at more of the original scene's luminance range -- nicely captured by the camera -- within the limited display. This is done by toe and shoulder rolloff. Ansel Adams, among others, wrote about the importance of toe and shoulder in photography. The toe in the final print comes from the print paper (and its processing). The shoulder in the final print comes from a combination of the toe of the original negative and the shoulder of the print paper (and their processings). The photographic artist is in the darkroom for both steps, and is in control. Tone compression toward the high and low extremes has figured in all pictorial art.
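
To illustrate the pictorial aesthetic decoding step, here is one possible toe-and-shoulder curve; the smoothstep shape is my own illustrative choice, not any camera maker's or colorist's standard:

```python
def toe_shoulder(x):
    """Map a normalized scene value (0..1) to a display value (0..1).
    The cubic 'smoothstep' 3x^2 - 2x^3 has zero slope at both ends,
    so shadows ease out of black (toe) and highlights ease into
    white (shoulder) instead of cutting off abruptly."""
    return 3 * x**2 - 2 * x**3

for x in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"{x:.2f} -> {toe_shoulder(x):.3f}")
# midtones keep near-linear contrast; the extremes are compressed
```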

 

With high-class video and digital cinema it is possible to keep the two steps separate, so that the camera maker engineers an aesthetically neutral encoding and the artist applies full control to toe and shoulder, among other things, thereafter. (10-bit offers enough code values to put 78 into each stop, so the artist can contrastify any part of the range and still have the necessary 50 for smooth gradation.) Camera makers unfortunately second-guess what should be aesthetic decisions. Thus the proliferation of hardly different log encodings. Competent post-people can undo the camera makers' encoding intrusions, returning to "scene referred" images, and work freshly.

 

Admittedly it's more complicated in color, but it is still desirable that camera makers keep their noses out of aesthetic decisions. The one decision they can't avoid is the spectral sensitivities of their RGB sensors. These they must begin publishing, as they proudly publish their customized log encoding functions.


  • Premium Member

The crude way of thinking about gamma is to equate it with contrast and to think of log as a very low-contrast image. It's a bit more complex than that, since there is also some non-linear shape to the gamma (the natural tendency of a sensor is to respond to increases in light values in a linear manner).

 

Film's response to light can be plotted as an S-shaped gamma curve: linear in the midtones but flatter in the shadows and highlights, allowing luminance detail to fall to black and burn out to white more gradually (the flatter the gamma, the lower-contrast the image is). Log storage of a digital sensor's output gives it that S-shaped response, similar to film.

 

S-Log just means "Sony" Log, their version that resembles the classic Kodak Cineon Log image you get in a scan of a film negative.


  • Premium Member

Interestingly enough, Sony have just released S-Log 3, which is sufficiently similar to Arri's Log C that similar postproduction settings can often be used for both. I hope wholeheartedly that this represents a convergence that might eventually lead to standardisation.

 

P


I think it's best to think of LOG encoding as a compression scheme, rather than a color space.

 

To explain this, let's assume that our recording format can record only 100 tones in each color: red, green, and blue.

 

If we look at a grey ramp consisting of only 100 steps, we can just barely see the steps upon close examination, but it's smooth enough for us to perceive it as a continuous tone ramp.

 

If our camera can see only 5 stops of dynamic range and we record a linear image of 100 steps, the result will look quite normal for the bulk of the image, but highlights and blacks may well clip, making the image seem electronic. This is basically the way live TV is done. The midtones all look good, but we often lose detail in a white shirt... But it was a good match to our TVs, which can show 5 stops with convincing realism.
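
A sketch of that linear case, under the 100-tone and 5-stop assumptions above: codes are proportional to light, so anything brighter than the top stop simply clips:

```python
def linear_code(light, max_light=2.0 ** 5, codes=100):
    """Linear encoding: the code is proportional to light, hard-clipped
    at the top. max_light = 2**5 covers a 5-stop range above the floor."""
    return min(codes, round(codes * light / max_light))

for stops in (0, 2, 4, 5, 6, 8):   # scene brightness, in stops above black
    print(stops, "stops -> code", linear_code(2.0 ** stops))
# everything from 5 stops up lands on code 100: the white-shirt detail clips
```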

 

But what if our camera's sensor can see 12 stops of dynamic range? If we record all this data, we can roll off the highlights and shadows in a film-like way in post, see the detail at the extremes, and have a "film"-like photograph.

 

In 2005 Panasonic decided to implement this idea in the Varicam. It was called "film rec" mode and recorded all 12 stops into the 100 steps (really 256) of tones that the camera could record. The result was a very low-contrast, dark image that could have a curve applied in post to create a pleasing photographic image.

 

But there was a little problem. The "meat" of the image was recorded to steps 10-50 on the scale; steps 50-100 contained only highlights. When a curve was applied in post to create a normal-looking photograph, steps 10-50 got stretched to 10-90, and this created gaps in the tone ramp that became obvious as banding or posterization.
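
A few lines of arithmetic show those gaps: stretching codes 10-50 to 10-90 doubles their spacing, so roughly half the output codes are never used:

```python
# The "meat" occupies codes 10..50; post stretches it to 10..90.
used = sorted({round(10 + (c - 10) * (90 - 10) / (50 - 10)) for c in range(10, 51)})
print(used[:8])   # [10, 12, 14, 16, 18, 20, 22, 24] -- every other code
print(len(used), "of 81 output codes used")   # 41 of 81 -> visible banding
```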

 

So how to avoid this banding? The simple way is to record 1000 steps of tones instead of 100. But this would require bigger files!

 

LOG to the rescue! By applying a log curve to the image before recording, the "meat" of the image could use steps 10-85 of the recording. The idea is that it is very difficult to see gaps in tones in the near-whites and whites, which now have only 15 tones to represent 3 or 4 stops.

 

In post the LOG curve is reversed, with highlights and shadows gently rolled off, but it all fits in a 100-step, compact (low-data) recording.

 

So the idea behind LOG recording is to fit a very large amount of data into a small bucket, while giving the most detail to the parts of the tone curve where we most easily perceive loss of detail.

 

So LOG is a way to record a large color space into a small recording in an efficient way, so that we don't perceive the loss of data. I.e., a compression scheme.


  • Premium Member

The funny thing is that Log makes sense for 10-bit or 12-bit recordings when you have a lot of dynamic range to store, but for 8-bit recordings, you run the risk of banding, and in 16-bit recordings, you have plenty of steps to store the range in without resorting to log.
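
The back-of-envelope arithmetic, assuming a 12-stop range, roughly equal codes per stop, and the ~50 codes per stop for smooth gradation mentioned earlier in the thread:

```python
for bits in (8, 10, 12, 16):
    codes_per_stop = 2 ** bits / 12   # equal allocation across 12 stops
    print(f"{bits}-bit: ~{codes_per_stop:.0f} codes per stop")
# 8-bit:  ~21   -> well under ~50: banding risk even with log
# 10-bit: ~85   -> comfortable with log
# 12-bit: ~341
# 16-bit: ~5461 -> plenty even without log's careful allocation
```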


The funny thing is that Log makes sense for 10-bit or 12-bit recordings when you have a lot of dynamic range to store, but for 8-bit recordings, you run the risk of banding, and in 16-bit recordings, you have plenty of steps to store the range in without resorting to log.

Yes, of course, but I was trying to simplify and make a clear explanation... if you were referring to my post :)


  • Premium Member

Can I ask a stupid question?

 

It's awesome that my camera records so much great data… why the heck does any editing program I use reduce the perceived bit depth to 8-bit, showing systematic banding?

 

Check out the black-and-white transitions in this video I posted on YouTube. Output came right from DaVinci in 12-bit rendering mode; the final ProRes HQ file looked just like this:

 

 

I'm really struggling to get what comes out of the camera through post production and have it retain the same dynamics.


The funny thing is that Log makes sense for 10-bit or 12-bit recordings when you have a lot of dynamic range to store, but for 8-bit recordings, you run the risk of banding, and in 16-bit recordings, you have plenty of steps to store the range in without resorting to log.

 

All this implies is that even log encodings in 8-bit can't save the image from banding. The problem is with 8-bit itself. It offers 255 steps, when the eye demands about 332 steps for a 1000:1 contrast ratio, 140 nit monitor image. The ideal coding is the one that makes those 332 steps with no waste. The ideal coding is derived from Hecht (1924) in my 2009 paper. Apologies in advance for the paper's using the milliLambert unit and for not mentioning any of the log encodings.


Can I ask a stupid question?

 

It's awesome that my camera records so much great data… why the heck does any editing program I use reduce the perceived bit depth to 8-bit, showing systematic banding?

 

Check out the black-and-white transitions in this video I posted on YouTube. Output came right from DaVinci in 12-bit rendering mode; the final ProRes HQ file looked just like this:

 

I'm really struggling to get what comes out of the camera through post production and have it retain the same dynamics.

 

I use FCP7 and have wondered about the same thing. Why do fadeouts made in FCP7 look jumpy?

I let FCP make a "cross dissolve" on a white slug 955 frames long. This was a fadeout from 100% to 0%. The codec was 10-bit uncompressed, which allowed accurate inspection of frames in the binary file. So far as I checked, in 6 places, it made a perfect fadeout with Y'=1019 in the first frame, Y'=1018 in the second frame, etc., etc., until the last frame, which should have been Y'=64 but was Y'=65 instead. That's a forgivable error. Here it is: 1019-64_dissolve_10bit.mov
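
Incidentally, the frame count may explain that last value: if the dissolve drops exactly one code value per frame from 1019, a 955-frame fadeout ends at 65, and reaching 64 would need 956 frames (a quick check, under that one-step-per-frame assumption):

```python
frames = 955
ramp = [1019 - n for n in range(frames)]  # one Y' code step per frame
print(ramp[0], ramp[-1])   # 1019 65 -- ending at 64 would need 956 frames
```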

 

Yet it looks jumpy when I play it back in either FCP7 or QuickTime. The problem is not in the file, so it must be in the player + operating system + graphics card + monitor. Most players are crap. Most monitors are 8-bit.

 

You are right to be wary of filters used in video editing. Many reduce your 10-bit video to 8-bit, then do their thing, then pad it back to 10-bit. I caught the rather expensive Video Purifier denoiser doing that. I've found many Apple video filters to be amateurish botches.
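
That detour is easy to demonstrate; here's a sketch of what such a filter does to a smooth 10-bit ramp (the truncation is simulated, not taken from any particular product):

```python
def detour(y10):
    """Simulate a filter that drops 10-bit luma to 8-bit and back."""
    y8 = y10 >> 2     # 10-bit -> 8-bit: throw away the two low bits
    return y8 << 2    # 8-bit -> 10-bit: pad the low bits with zeros

ramp = list(range(64, 128))            # a smooth 10-bit ramp segment
out = sorted(set(detour(y) for y in ramp))
print(out)   # [64, 68, 72, ..., 124]: only every 4th code survives,
             # i.e. a 10-bit file with 8-bit banding baked in
```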


  • Premium Member

It's probably a timing error. Bear in mind that the computer display is probably locked to 60Hz updates, but even if you feed it a 60fps video file, small timing variances between the way the video file is decoded and the way the graphics card updates its display memory may mean that some frames are displayed for more than one monitor update, and some may be skipped entirely. The only solution to this is a proper video I/O board (which may just mean a Blackmagic Intensity).

 

Bear in mind also that various settings will create video files with luma in the 64-940 range, as opposed to the full 0-1023 range, and other settings can cause playback software to stretch, crop, or compress this range on playback too. This can get hugely complicated depending on your software.
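
For reference, the stretch a player might apply (or wrongly skip) between the two ranges, sketched under the assumption of simple linear scaling:

```python
def legal_to_full(y):
    """Stretch 10-bit legal-range luma (64-940) to full range (0-1023)."""
    return round((y - 64) * 1023 / (940 - 64))

print(legal_to_full(64), legal_to_full(940))   # 0 1023
# Applying this to a file that is already full range crushes blacks and
# clips whites; failing to apply it when needed leaves the image flat.
```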

 

P


I made the fadeout test just 64 pixels square, hoping this would alleviate processing stress. Being a non-standard size might have had the opposite effect!

 

The point of my reply to Tyler Purcell's "check out the black and white transitions in this video ... the final ProRes HQ file looked just like this" is that his final ProRes probably isn't at fault, but rather his playback. One way to check his final ProRes is to play it better than his "player + operating system + graphics card + monitor" now does. Another way is to determine that his editing system's filters are healthy. That way he knows as he edits that he's not degrading his picture. Those degradations are usually small, like the 10-bit -> 8-bit -> 10-bit detour mentioned, and might not be apparent even in a perfect viewing at that stage, but they have cumulative effects and should be avoided if possible.

 

It's not just "settings" that change how the video Y' range is figured: 64-940 versus ranges having super-whites and/or sub-blacks. Many video filters just do this for you, without any indication. They do it to you. I wrote about some sadly funny examples in another forum.


  • 3 weeks later...

Can I ask a stupid question?

 

It's awesome that my camera records so much great data… why the heck does any editing program I use reduce the perceived bit depth to 8-bit, showing systematic banding?

 

Not a bad question at all. Media Composer, FCP, and Premiere Pro can all work in 10-bit. In fact, some of the Mercury Playback Engine-based processing in Premiere can be done in 32-bit float precision.

 

However, if by "perceived" you mean why the file output from any of these editing programs may look crappy, the answer generally lies in the delivery codec you are using. If using H.264, you are dropping the storage bit depth to 8-bit and severely compressing the image.


...if by "perceived" you mean why the file output from any of these editing programs may look crappy, the answer generally lies in the delivery codec you are using. If using H.264, you are dropping the storage bit depth to 8-bit and severely compressing the image.

 

Tyler Purcell said "the final ProRes HQ file looked just like this", so his is not a problem of the delivery codec.

