
I’ve got something I’ve been wondering about over and over this past weekend!



I read that downscaling from 4K to HD can produce the equivalent of 10-bit luma.


That is to say: 8-bit 4K = 10-bit HD.



Is that in fact true? How is it possible?




The GH4 records 10-bit 4:2:2 externally, while the Sony A7 II with an external recorder records only 8-bit 4:2:0. So I wonder: if I shoot with the Atomos Shogun (not real 4K but UHD, I think) and then downscale, do I get a true 10-bit 4:2:2 HD file, with S-Log3, which is nicer to grade? Would that work because of 8-bit 4K = 10-bit HD, or doesn't it work this way?




thank you :)


How is it possible?

 

Averaging.

 

If we have a 3840x2160 UHD input image and a 1920x1080 HD output image, we have four eight-bit pixels in the input for each ten-bit pixel in the output.

 

Assuming that we scale the UHD input image by averaging four pixels together (which is not quite how it's done, but a near enough approximation for this discussion), we can simply add them together. An 8-bit pixel has 256 possible values; a 10-bit pixel has 1024, which is four times 256. As such the sum of four 8-bit pixels can land on any of roughly 1024 levels, giving true 10-bit information.
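
To make that concrete, here is a minimal Python/numpy sketch of the idea (illustrative only, with made-up pixel data; a real scaler uses proper filtering, as discussed further down the thread):

import numpy as np

# Hypothetical 8-bit UHD luma plane (values 0..255), e.g. decoded from a camera file.
uhd = np.random.randint(0, 256, size=(2160, 3840), dtype=np.uint16)

# Sum each 2x2 block of four 8-bit pixels. Four values of 0..255 sum to 0..1020,
# which (almost) fills the 10-bit range - no division needed if the output is 10-bit.
hd_10bit = (uhd[0::2, 0::2] + uhd[0::2, 1::2] +
            uhd[1::2, 0::2] + uhd[1::2, 1::2])

print(hd_10bit.shape)            # (1080, 1920)
print(hd_10bit.max() <= 1020)    # True - the result fits in 10 bits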

 

Similar things occur with the lower-resolution colour-difference images in 4:2:2 video. You can either consider a 4:2:2 UHD to 4:2:2 HD conversion as giving you 10-bit 4:2:2, or 9-bit 4:4:4 output. A 4:2:2 UHD colour-difference image is only 1920x2160, so you are averaging only two input pixels per output pixel if you want 1920x1080 output. If the input pixels are 8-bit, the sum of two of them spans roughly 512 levels, which is a 9-bit range.
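
The chroma case can be sketched the same way (again just an illustration with made-up data): the 4:2:2 UHD chroma plane is already 1920 wide, so only two vertically adjacent input samples feed each 1920x1080 output sample.

import numpy as np

# Hypothetical 8-bit UHD 4:2:2 chroma plane: half horizontal resolution, full vertical.
cb_uhd = np.random.randint(0, 256, size=(2160, 1920), dtype=np.uint16)

# Sum pairs of vertically adjacent samples: two values of 0..255 give 0..510,
# i.e. a 9-bit range, at 1920x1080 - effectively 4:4:4 chroma at HD.
cb_hd = cb_uhd[0::2, :] + cb_uhd[1::2, :]

print(cb_hd.shape)           # (1080, 1920)
print(cb_hd.max() <= 510)    # True - fits in 9 bits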

 

As I say this is something of an approximation, but there is genuine advantage to be had by converting 8-bit 4:2:2 UHD to 10-bit 4:4:4 HD.

 

P


It's possible because, for every single pixel of the 2K image to be populated, there are four pixels of the source signal feeding that single pixel.

 

The simplest way of doing that is to add the four numbers together and divide by 4.

 

If the source numbers are 8 bit, the division by 4 means one would need 10 bits to properly represent the result of that division.
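
For example (just illustrative numbers): 100 + 100 + 101 + 101 = 402, and 402 / 4 = 100.5, which falls between two 8-bit codes. Keeping that half-step rather than rounding it away needs two extra bits of precision, which is exactly what a 10-bit result provides.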

 

C

 

EDIT: Or what Phil just said ahead of my post.

Edited by Carl Looper

The simplest way of doing that is to add the four numbers together and divide by 4.

 

And of course if we're going from 8 to 10, we don't even need to divide by four!

 

Just for the sake of adding some more information, a command for the free tool ffmpeg would look something like this (and this is fairly well untested, use at your own risk):

c:\bin\ffmpeg -i D:\Project\Footage\Roll1\oneg\input_2160_8.MOV -c:a copy -c:v prores_ks -profile:v 4444 -pix_fmt yuv444p10le -s 1920x1080 D:\Project\Footage\Roll1\oneg\output_1080_10.MOV
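
For a bit more control over how the scaling itself is done, the scale filter could be used in place of -s, e.g. with area averaging (equally untested, use at your own risk):

c:\bin\ffmpeg -i D:\Project\Footage\Roll1\oneg\input_2160_8.MOV -c:a copy -c:v prores_ks -profile:v 4444 -pix_fmt yuv444p10le -vf "scale=1920:1080:flags=area" D:\Project\Footage\Roll1\oneg\output_1080_10.MOV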

And of course if we're going from 8 to 10, we don't even need to divide by four!

 

Oh yes, certainly. The division is purely conceptual. You only need an actual division operation if the input data has been normalised (to a range of 0 to 1), which is often the case when programming a graphics card, for example. Any additions one does then need to be followed by a division, i.e. to get the result back into the 0 to 1 range, ready for output and subsequent re-quantisation.

 

In CPU programming one will often want to avoid this and use integer operations instead, as division is computationally expensive.

 

Either way, it does help to start with the conceptual idea first: treat all pixel values as normalised (regardless of input/output formats), keep the conceptual division there, and then optimise that according to whatever data pipeline one is otherwise using. While not necessary for simple processing tasks, it becomes invaluable when developing more complex pixel processing.
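
A small Python sketch of the two routes described above (made-up sample values, purely illustrative):

import numpy as np

# Four hypothetical 8-bit input samples feeding one output pixel.
block = np.array([200, 201, 201, 202], dtype=np.uint16)

# Integer route (CPU-friendly): just sum - no division, the result is already a 10-bit code.
out_integer = int(block.sum())                 # 804, somewhere in 0..1020

# Normalised route (GPU-style): work in 0..1, so the conceptual division is explicit,
# then re-quantise to whatever bit depth the output format wants.
normalised = block.astype(np.float64) / 255.0
average = normalised.sum() / 4.0               # back in the 0..1 range
out_requantised = round(average * 1023)        # a 10-bit output code

print(out_integer, out_requantised)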

 

C

Edited by Carl Looper

And of course you wouldn't actually be doing that, anyway. You'd be low-pass filtering then decimating, which would have broadly the same effect.

 

But anyway!

 

P

 

Oh for sure. Averaging is just the cheapest way of doing it. It's the easiest to program and the cheapest to compute. But it's only a starting point for further work.

For far better visual results you'll want to be exploring frequency theory and what that has to offer: Fourier transforms, etc. One can end up with intermediate data requiring a lot more than just 10 bits to represent. The final output to some specific bit depth is just to satisfy whatever limitations a file format or display hardware imposes. And with file formats involving compression, the precision isn't even expressed in bit-depth terms.
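
As a very rough sketch of "low-pass filter then decimate" (a simple [1, 2, 1]/4 kernel standing in for the much better filters a real resampler would use; made-up data):

import numpy as np

def lowpass_then_decimate(plane):
    # Separable [1, 2, 1]/4 low-pass along rows and then columns, then drop every other sample.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    filtered = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, plane)
    filtered = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, filtered)
    return filtered[0::2, 0::2]

uhd = np.random.randint(0, 256, size=(2160, 3840)).astype(np.float64)
hd = lowpass_then_decimate(uhd)

print(hd.shape)   # (1080, 1920)
# Note the intermediate values are floats - more precision than 10 bits - and only get
# quantised at the very end, to whatever the output format requires.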

 

C

Edited by Carl Looper

 

Sony A7 II with an external recorder only 8bit 4:2:0

 

 

:( So sad that such a brilliant camera has an end-of-the-line bottleneck like this; the sensor is amazing.

I think it's more for money reasons (selling higher end cameras) than actual design restrictions :wacko:

 

I guess it's still great for work without grading, like documentary.

 

EDIT: Sorry, I was talking about the A7S II.

Edited by Tom Yanowitz

  • 3 weeks later...

Thank you Carl and Phil for the time you took to explain more of this to me!

 

 

Tom, you say it's sad that such a brilliant camera has an end-of-the-line bottleneck like this, since the sensor is amazing. That's why I was wondering: you shoot 4K 8-bit with it and then downscale to get a 10-bit full HD. If you really don't need a 4K/2K final version, when the client is OK with full HD, you can then grade a nicer 10-bit full HD, am I right?


The best program to use currently for downscaling is the free version of Resolve from Blackmagic Design. Several people have used it and have verified that, if done properly, UHD can downscale to 1080p perfectly and retain its bit values, giving you an effective 4:4:4 8-bit image from a 4:2:0 8-bit source. Likewise, C4K can also downscale to 2K perfectly, also ending up as a 4:4:4 8-bit image from a 4:2:0 source.

So, we know that 4:2:0 scales to 4:4:4 near perfectly, but the real debate is whether 8-bit can become 10-bit. Some say it can, some say it can't. Some say it becomes something in between, like 9-bit of sorts. While I'm no scientist, my personal opinion is that while color information can be averaged together nicely, luma (the 8/10-bit part) is a different story. Since you didn't start with more luma, I fail to see how you could end up with more. There might be some perceived luma scaling going on, but I think that is just a result of compressing 8-bit luma issues like banding into a smaller frame, which sort of helps to 'hide' them, much like how downscaling 4K to 2K reduces noise and increases sharpness.


One doesn't end up with more information. One ends up with the same information.

 

When scaling from 4K to 2K, there is only a necessary loss of sharpness (loss of definition). There is no necessary loss of depth information. The depth information of 4 x 8 bit pixels is exactly the same depth information as a 1 x 10 bit pixel. So when scaling from 4K 8 bit to 2K, one should use (at least) 10 bit pixels for the 2K - if one doesn't want to lose any of the depth information.
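
A small numpy illustration of that (hypothetical values: a flat scene sitting between two 8-bit codes, plus a little sensor noise, which is what real footage tends to look like):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical UHD patch: true level 100.5 (between two 8-bit codes) plus mild noise,
# quantised to 8 bits - the pixels flip between a handful of neighbouring codes.
uhd = np.round(100.5 + rng.normal(0.0, 0.5, size=(2160, 3840))).astype(np.int64)

# Sum 2x2 blocks into 10-bit HD codes: the in-between levels are preserved.
hd = (uhd[0::2, 0::2] + uhd[0::2, 1::2] +
      uhd[1::2, 0::2] + uhd[1::2, 1::2])

print(len(np.unique(uhd)))   # only a few distinct 8-bit codes
print(len(np.unique(hd)))    # noticeably more distinct 10-bit codes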

 

C

Edited by Carl Looper

  • 2 weeks later...

I guess the big question is: does the GH4's A/D really convert the signal at 8 bits or 10 bits? And does it then output the digital signal in true 10 bits, or just pad the extra two bits with 0 and declare it a 10-bit output?


I guess the big question is: does the GH4's A/D really convert the signal at 8 bits or 10 bits? And does it then output the digital signal in true 10 bits, or just pad the extra two bits with 0 and declare it a 10-bit output?

 

The GH4 records 8-bit internally. It outputs TRUE 10-bit 4:2:2 through the HDMI port for external recording.

Edited by Landon D. Parks
