Recommended Posts

Hi everybody, I'm reading about color sampling and I'm getting a little confused about the difference between RGB and YCbCr. From what I've understood they are both color spaces, but YCbCr comes in handy because full-bandwidth RGB signals have a lot of color redundancy and are not efficient to store or transmit. One of the things I don't understand is where in the camera the conversion from RGB to YCbCr happens. For example, if I'm shooting with a high-end camera like an Alexa or an Epic, what is the color space? And if I have the chance to choose between the two, which one should I choose? One more thing: I've seen that high-end monitors have an HD-SDI input but also separate inputs for YCbCr signals, with different connectors for the different color components. Why is that?

Thanks so much in advance for the help,

Davide


  • Premium Member

You're talking about colour subsampling.

 

A colour space determines what colours an imaging system is capable of representing. Simply put, we may know that (some) colour images are represented by red, green and blue channels. The colour space defines which red, which green and which blue. If we use a comparatively pale and unsaturated red, for instance, there may be some deep reds that our camera system cannot show properly; they are out of gamut. The most common example of this is that the Rec. 709 monitor specification defines a fairly pale green. It is difficult to represent the colour of (for instance) a tropical sea on a Rec. 709 display. It just comes out as a cyan.
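
As a rough illustration of what "out of gamut" means in practice: if a scene colour, once expressed in a display's RGB primaries, would need a channel value outside the 0–1 range, the display simply can't reproduce it and the value gets clipped. The numbers below are made up purely to show the mechanics; they don't correspond to any measured colour.

```python
# Illustrative only: a very saturated scene colour, expressed in Rec. 709
# primaries, may need a negative red component no real display can produce.
def clip_to_gamut(rgb):
    """Clip each channel to the displayable 0..1 range."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

tropical_sea = (-0.12, 0.85, 0.78)      # hypothetical out-of-gamut value
print(clip_to_gamut(tropical_sea))      # (0.0, 0.85, 0.78): a plainer cyan
```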

 

Colour subsampling takes advantage of the fact that the human visual system has a lot of light-sensitive elements which detect brightness, and a rather smaller number that detect colour. If we use RGB, we must store all three channels at full resolution. If we separate the brightness (Y) and colour-difference (Cr and Cb) channels, we can choose to store the colour information at lower resolution, which is not visible to the human eye. The mathematics are designed so that conversion between YCrCb and RGB (and back again) just involves basic multiplication, addition and subtraction.
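
To make the "basic multiplication, addition and subtraction" concrete, here is a minimal sketch of the standard Rec. 709 conversion, using normalised values (R', G', B' and Y' in the 0–1 range, Cb and Cr centred on zero). This is just the textbook arithmetic, not any particular camera's internal implementation.

```python
# Rec. 709 luma coefficients
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(r, g, b):
    """Convert normalised R'G'B' (0..1) to Y' (0..1) and Cb, Cr (-0.5..0.5)."""
    y  = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Invert the conversion: only multiplies, adds and subtracts."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# Round trip: the value comes back unchanged (to within floating-point error)
print(ycbcr_to_rgb(*rgb_to_ycbcr(0.70, 0.55, 0.45)))
```

4:2:2 subsampling then simply keeps Y' for every pixel but stores Cb and Cr only for every other pixel horizontally.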

 

These things also apply to analogue signals, which is where the three separate inputs come in.

 

The lower colour resolution may be visible to things other than the human eye, such as software designed to perform green screen compositing. For this reason, some devices allow RGB recordings to be made. The Alexa, for instance, can record YCrCb 4:2:2 or RGB 4:4:4 signals (and raw).

 

P


  • Premium Member

422 is a YCrCb format

444 is an RGB format

 

If you use RGB you will be storing all three channels at full resolution.

 

With 422, you are cutting the resolution of the color-difference (Cb and Cr) channels in half. This may not sound like a big deal, but it's very visible; you see it all the time when watching broadcast television, internet streaming, DVD or BluRay, since they all compress the color channels at least that much. If it doesn't bother you, there is a lot of space to be saved. It bothers me greatly, which is why I try to shoot in 444 or RAW. It's rare to see anyone shoot with Red or Alexa and use a 422 mode. Both cameras have 444 RGB and RAW modes, which involve the least amount of manipulation between the imager and the written file.
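
To put rough numbers on the space saving, here's a quick back-of-the-envelope sketch (uncompressed active picture only, ignoring blanking and audio):

```python
def frame_bits(width, height, bit_depth, samples_per_pixel):
    """Uncompressed bits for one frame of active picture."""
    return width * height * bit_depth * samples_per_pixel

# 1080-line, 10 bit: 4:4:4 carries three full-resolution samples per pixel;
# 4:2:2 carries Y for every pixel but Cb/Cr for every other pixel (two per pixel on average).
full = frame_bits(1920, 1080, 10, 3)
sub  = frame_bits(1920, 1080, 10, 2)
print(full / 1e6, "Mbit per frame at 4:4:4")   # ~62.2
print(sub  / 1e6, "Mbit per frame at 4:2:2")   # ~41.5
print(1 - sub / full)                          # 4:2:2 saves a third
```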

 

The original way to break the signal down was component cable, with each channel on its own cable. That was an analog system, carrying either RGB or Y/Pb/Pr, which is why you see those separate color-component inputs on monitors. As things slowly moved to digital technology, we saw the component cables disappear and in their place came a single cable called SDI (serial digital interface), sent over coax with a locking BNC connector. SDI is a magnificent system because it carries not just video but embedded audio as well, something analog component couldn't do. This allowed broadcasters to remove a lot of clutter from their racks and maintain higher quality without converting between digital and analog in every piece of equipment.

 

The original HD standard in the US was 1080i/60, 10 bit 422. The original HD-SDI spec was 1.5 Gb/s, which is only enough for that original standard. To get full 10 bit 444 RGB, you had to use two 1.5 Gb/s cables in a dual-link mode. That would give you 10 bit 444 from an HDCAM SR deck to a 444 monitor.

 

Today, however, most current standards can carry 4K 10 bit 444 down a single cable, thanks to higher-bandwidth electronics. SDI today goes up to 12 Gb/s and HDMI 2.0 is 18 Gb/s, so there is plenty of bandwidth. HDMI 2.0 can also handle 12 bit material, but the hardware to display that is very expensive at the moment.
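
A quick sanity check on those figures, counting only raw active-picture payload (real links also carry blanking, ancillary data and encoding overhead, so the practical ceilings are a bit lower):

```python
def payload_gbps(width, height, fps, bit_depth, samples_per_pixel):
    """Raw active-picture payload in Gb/s (no blanking, audio or link overhead)."""
    return width * height * fps * bit_depth * samples_per_pixel / 1e9

# 1080i/60 delivers 30 full frames per second
print(payload_gbps(1920, 1080, 30, 10, 2))  # ~1.24  -> fits a 1.5G HD-SDI link
print(payload_gbps(1920, 1080, 30, 10, 3))  # ~1.87  -> hence dual-link for 444
print(payload_gbps(3840, 2160, 30, 10, 3))  # ~7.5   -> fits a single 12G link
print(payload_gbps(3840, 2160, 60, 10, 3))  # ~14.9  -> needs the bigger pipes
```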


  • Premium Member

There is such a thing as YCrCb 444, where luminance and the color channels get equal amounts of information. It's apparently not the same thing as RGB in terms of how the color is stored, but in terms of quality they are probably similar.

Thanks for mentioning that, it's true, but it wasn't a camera or recording format. If I recall, it was more of an equipment-to-equipment format for the purpose of more accurate grading.

