Can anybody give an introduction to the concept of bitrate?



Hello everybody, I haven't yet had a chance to go in depth on the notion of bitrate in the digital cinematography manual I've just purchased, but I'd like an introduction to what exactly it is and how it affects the footage, so that in the meantime I don't completely ignore the concept while reading about cameras and their features.

Thank you in advance for the help!


  • Premium Member

In practical photographic terms, bit rate sort of affects the "steppiness" of what would be gradations in the image, because you are breaking down a continuous analog change from bright to dark, or the other way around, into discrete digital blocks. So with a higher bit rate, you assign more steps or blocks or packets or chunks, whatever, to a given range of changing information, giving you smoother transitions. A common artifact of a low bit rate on an image with smooth gradations, like a soft shadow across a smooth wall, or a sky that goes from a bright horizon to darkness above, is called "banding".

 

Now sometimes a low bit rate image looks OK in the original recording, but as soon as you color-correct and try to stretch some exposure or color information up or down, the banding artifacts kick in.
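If it helps to see this as numbers, here's a rough numpy sketch (the gradient, bit values and grading stretch are just made-up illustration figures): quantise a smooth gradient to the steps available at a given depth, then lift part of it the way a grade would, and count how many distinct values survive.

```python
import numpy as np

# A smooth 0-to-1 gradient, like a softly lit wall, sampled across 1920 pixels.
gradient = np.linspace(0.0, 1.0, 1920)

def quantize(signal, bits):
    # Snap a 0-1 signal to the discrete levels available at a given bit depth.
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

for bits in (8, 10):
    coded = quantize(gradient, bits)
    # A made-up aggressive grade: lift one narrow exposure band to full scale.
    stretched = np.clip((coded - 0.4) * 5.0, 0.0, 1.0)
    print(f"{bits}-bit: {len(np.unique(stretched))} distinct values after the stretch")
```

The fewer distinct values left after the stretch, the more visible the bands between them.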

 

This is just one issue and obviously bit rates affect things like audio too.

 

This article explains some of this:

http://daredreamermag.com/2012/08/22/understanding-8bit-vs-10bit/


The Bit Rate of a file simply refers to the amount of data the file contains per unit of time, typically a second. So a standard definition DV file (a highly compressed acquisition format) is 25 Mbps, or megabits per second (note the small 'b' -- that means megabits, not megaBytes). A typical HD Blu-ray AVC encode is also somewhere around 25 Mbps, but it's a different compression scheme (AVC, a variant of H.264).
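As a quick sanity check on the megabits-vs-megabytes point, here's a tiny back-of-the-envelope sketch in Python (the 25 Mbps figure is just the ballpark quoted above):

```python
# What a given bit rate means in storage terms.
mbps = 25                                   # megabits per second (assumed example)
megabytes_per_second = mbps / 8             # 8 bits per byte
gigabytes_per_hour = megabytes_per_second * 3600 / 1000
print(f"{mbps} Mbps ≈ {megabytes_per_second:.2f} MB/s ≈ {gigabytes_per_hour:.1f} GB/hour")
```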

 

Bit Rate is not a universal measure of picture quality. Hell, it really shouldn't be used as a measure of picture quality at all, because it's only relevant when you're talking about the same type of file (for example, two MPEG2 files, or two AVC files), and when you're talking about the same encoding tools. And there are many other factors that affect the final quality of a compressed file besides the bit rate (including the quality of the source footage, the quality of the encoding algorithm, the quality of any scaling algorithms used, whether or not there's low pass filtering happening, whether or not there's artificial sharpening, and on and on).

 

Back in the early days of DVD, one couldn't expect to get a decent looking picture at a bit rate below about 6 Mbps (DVD MPEG2). By 2005 or so, encoders had come out that were capable of encoding at sub-6 Mbps rates without significant quality loss. By 2015, we can encode a pretty good looking MPEG2 file at around 4.5 Mbps, if the source is clean. But this is 100% dependent upon the encoder. You can't do that in Apple Compressor. You can do it in Cinemacraft.

 

With Blu-ray, you're talking about files that have 4x the resolution of standard definition, but with a codec like AVC and a proper encoder to do the compression, you can easily make a really good looking encode at an average bit rate of 12-15 Mbps. That's only double the bit rate of DVD, but for 4x the data. This is because AVC and MPEG2 are totally different, and can't be directly compared. The bit rate of an MPEG2 file has absolutely no relevance when talking about the bit rate of an AVC file, if the intention is to compare the quality. People do this all the time, though.

 

All that being said, a lower bit rate file, compared to a higher bit rate file of the same compression type, typically shows blocking, or quantization artifacts. See this Wikipedia article for an example: https://en.wikipedia.org/wiki/Compression_artifact

 

Some of these artifacts can result in banding, as David Mullen mentioned. But banding is also a result of poor (or no) dithering from 10 bit sources to the typical 8 bit files used for display in formats like DVD.
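Here's a little numpy sketch of that dithering idea (a made-up ramp, not any particular format's actual pipeline): taking a 10-bit ramp down to 8 bits by plain truncation produces flat four-code bands, while a touch of random dither trades those bands for fine noise.

```python
import numpy as np

# A smooth 10-bit ramp, code values 0..1023.
ramp_10bit = np.arange(0, 1024, dtype=np.float64)

# 10-bit -> 8-bit by truncation: every 4 input codes collapse to one flat band.
truncated = np.floor(ramp_10bit / 4).astype(np.uint8)

# 10-bit -> 8-bit with a little random dither: band edges become noise instead.
dithered = np.clip(
    np.floor((ramp_10bit + np.random.uniform(0, 4, ramp_10bit.size)) / 4), 0, 255
).astype(np.uint8)

print("truncated:", truncated[:12])   # [0 0 0 0 1 1 1 1 2 2 2 2] -> visible steps
print("dithered: ", dithered[:12])    # steps broken up by noise
```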

 

Bit rate is one parameter of many within an encoder. In many cases, all you need to do is set this higher to get a better image. But with formats like DVD, Blu-ray, or even web streaming, where you have caps on how much data can be streamed or how much space the files can take up, lower bit rates are required. And this is where good encoders shine, because they can handle it.

 

-perry

 

 


Thank you guys, I have a couple more questions:

- David says he was probably talking about bit depth rather than bit rate, but in the article he posted they talk about bit rate, so I'm feeling kind of confused whether the two terms are interchangeable or not.

- Is one term more connected to post production?

- For example, the Canon C500 when shooting 2K gives the possibility of choosing between RGB444 12BIT or 10BIT. What is the camera referring to? Bit rate or bit depth?


I believe that's "bit depth", i.e. giving you more variations/shades of each of the colour channels (RGB). The other would be Mbps, e.g. 50 Mbps is the minimum for HD broadcast in most countries, going up into the hundreds: the amount of data recorded per second, as Perry says.

 

10 and 12 bit are pretty high. 8 bit is to be avoided for log, as it will fall apart in any grade, and grading is the whole point of shooting log in the first place. E.g. 8 bit gives you a total of about 16.8 million colours; 10 bit, about 1 billion!

Edited by Robin R Probyn

Though the word 'bit' is also used in this term, Bit Depth actually describes something completely different from Mbps. Bit Depth, aka Color Depth, describes the amount of information stored in each pixel of data. As you increase bit depth, you also increase the number of colors that can be represented. In the case of an 8-bit RGB image, each pixel has 8 bits of data per color (RGB), so for each color channel the pixel has 2^8 = 256 possible variations. In the case of a 10-bit RGB image, each color channel would have 2^10 = 1024 variations.

 

So in 8 bit you have 256 x 256 x 256 ≈ 16.8 million colour variations

 

In 10 bit, 1024 x 1024 x 1024 ≈ 1.07 billion colour variations
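The same arithmetic as a short Python sketch, extended to 12 bit as well:

```python
# Bit depth -> levels per channel -> total RGB combinations.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels per channel, {levels ** 3:,} possible RGB combinations")
```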

Edited by Robin R Probyn

  • Premium Member

I'm feeling kind of confused whether the two terms are interchangeable or not.

Nope, Perry did a great job explaining.

 

For example, the Canon C500 when shooting 2K gives the possibility of choosing between RGB444 12BIT or 10BIT. What is the camera referring to? Bit rate or bit depth?

Anything that says "BIT" is bit depth, which is determined by the number of bits of data that represent each channel from full black to full white. There are three channels we work with: Red, Green and Blue.

 

If you have 8 bits per channel, you'd have 256 steps between full black and full white on each channel. The steps become visible when you pull two of the channels away to create a black and white image.

 

If you have 10 bits per channel, you'd have 1024 steps between full black and full white on each channel. This is much harder to see visually. Even if you make it black and white, you may see other issues within the image before noticing those steps. Though with a trained eye, a decent monitor and a high enough resolution image, even 10 bit images can show hard steps.

 

This is why the best digital cinema cameras shoot 12 bit (or greater), which gives 4096 steps between full black and full white. At 12 bit, you're pretty much going above what most of our current presentation devices are capable of achieving.

 

- Is one term more connected to post production?

bit depth (8,10,12) (the steps between full black and full white)

color space (4:2:0, 4:2:2, 4:4:4) (chroma compression)

bit rate (Mbps variable) (Complete package compression)

 

The only reason cameras have 8 bit, 4:2:0, 50 Mbps settings is because they simply can't process or store data fast enough. It's a way to "compress" the image into a nice, tidy, usable package that doesn't take up a lot of space.
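To put rough numbers on why that's necessary, here's a small sketch (1080p at 25 fps is just an assumed example; the samples-per-pixel figures follow from the subsampling scheme):

```python
# Uncompressed data rates vs a 50 Mbps recording budget.
width, height, fps = 1920, 1080, 25        # assumed example frame size and frame rate

def uncompressed_mbps(bits_per_sample, samples_per_pixel):
    # Total uncompressed video data rate in megabits per second.
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

full = uncompressed_mbps(10, 3.0)   # 10-bit 4:4:4 -> three full-resolution samples per pixel
sub = uncompressed_mbps(8, 1.5)     # 8-bit 4:2:0 -> luma plus quarter-resolution chroma
print(f"10-bit 4:4:4: ~{full:.0f} Mbps uncompressed")
print(f"8-bit 4:2:0:  ~{sub:.0f} Mbps uncompressed, still ~{sub / 50:.0f}:1 away from a 50 Mbps codec")
```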

 

These three things are all you need to know when it comes to standard compressed files like ProRes, AVC/XAVC, DV, H.264, etc. So if your camera shoots a compressed format like XAVC, or you're delivering files after finishing, these are the formats you'll need to understand. Some other cameras have "RAW" shooting capabilities, which allow for full bandwidth on bit depth and color space, so those numbers are almost irrelevant at that point. The only number you worry about with RAW cameras is the bit rate, which can usually be adjusted.


They're different.

 

Gamut refers to the range of colors a device is capable of displaying, which is limited by the display technology. If the display, for example, can't properly show deep blacks without crushing them all into the same value, or it can't display particular shades of a given color, that's a limitation of the display and that in turn limits the display's gamut.

 

But the file you're looking at on that display could hold color values well outside of the limits of that screen. Bit depth doesn't really care about the gamut of the display; it's just the number of possible colors each pixel in the file could take.
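If it helps to see the distinction as code, here's a very rough sketch: treat a display's gamut as the triangle formed by its primaries in CIE xy chromaticity space and test whether a given chromaticity lands inside it. The Rec.709 primaries and the Rec.2020 red point are the published values; the simple triangle test stands in for real gamut mapping, which is more involved.

```python
# Rec.709/sRGB primaries as (x, y) chromaticities: R, G, B.
REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

def _sign(p, a, b):
    # Which side of the edge a->b the point p falls on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_gamut(xy, primaries=REC709):
    # Point-in-triangle test: inside if all edge tests agree in sign.
    r, g, b = primaries
    signs = [_sign(xy, r, g), _sign(xy, g, b), _sign(xy, b, r)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(inside_gamut((0.3127, 0.3290)))  # D65 white point -> True, inside Rec.709
print(inside_gamut((0.708, 0.292)))    # Rec.2020 red primary -> False, outside Rec.709
```

A file could store the second value regardless of its bit depth; whether the screen can actually show it is the gamut question.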

 

-perry


So gamut is more related to the physical capabilities of the monitor (or device), while bit depth is more connected to the file's potential, is that correct? And if it is, might it be possible that the bit rate exceeds the capabilities of the display's gamut?

And how is gamut measured? And what is the point of shooting at 12 bit depth if the content might then be screened on a normal TV or PC monitor? Does it have something to do with color correction?


  • Premium Member

You're reading too much into this. Gamut is not a technical word; it simply means a range of something.

 

So in the case of cinema, we can show you a gamut chart which maybe shows the eye's visible color range vs RAW RGB.

 

Or maybe we'd show you a gamut chart of RGB and, inside that, show the smaller gamuts.

 

https://en.wikipedia.org/wiki/Gamut


  • Premium Member

https://en.wikipedia.org/wiki/CIE_1931_color_space

https://en.wikipedia.org/wiki/Rec._709

https://en.wikipedia.org/wiki/Rec._2020

 

Not sure specifically what the difference between color space and color gamut is, though this Technicolor page attempts to define them:

 

http://www.technicolor.com/en/solutions-services/technology/technology-licensing/image-color/color-certification/color-certification-process/color-spaces

 

WHAT ARE COLOR SPACES AND WHAT IS A GAMUT?

Gamut:
A gamut is the range of colors that a particular device can capture or show.
Color Space:
A color space is a predefined specification that delineates a particular group of colors.
A color space is mapped into a device's gamut so the device's colors correspond to real-world colors.

Not sure what any of this has to do with bit depth, though; you can store these colors at various bit depths, but most 8-bit cameras are probably restricted to capturing not much more than Rec. 709.

4:4:4, 4:2:2, 4:2:0, etc. refer to color resolution (chroma subsampling), not color space.

 

Color space is the range of colors that can be recorded as data.

 

Bit depth determines the number of color values that can be recorded as data.

 

All of the above can be mixed depending on the recording format...
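A tiny sketch of the "colour resolution" part, counting the samples kept per 4x2 block of pixels in the usual J:a:b notation (bit depth and any codec compression then act on top of this):

```python
# (luma, chroma-per-channel) samples kept in each 4x2 block of pixels.
schemes = {"4:4:4": (8, 8), "4:2:2": (8, 4), "4:2:0": (8, 2)}

for name, (luma, chroma) in schemes.items():
    total = luma + 2 * chroma          # one luma plane plus two chroma planes
    print(f"{name}: {total} samples per 4x2 block "
          f"({total / 24:.0%} of the 4:4:4 data before bit depth or codec compression)")
```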

