
What's it all mean?! 4:4:4...


Erin Dadeo


Hi everyone, I'm Erin and while I've been lurking as a guest for quite some time, I finally decided to post a question that's been plaguing me. Just an FYI; I'm nearly done with my undergraduate studies in cinematography and have been shooting a great deal of S16, but am trying to learn about video more and more as well.

 

Here goes:

 

What does it mean when I read (in regards to video) 4:4:4, or 4:2:2, or 4:1:1, or so on? What's this in reference to, what do those numbers mean, one versus the other, etc.? What are some of the combinations one is inclined to come across, and their advantages and disadvantages?

 

What's linear versus log; as in, 8-bit linear, 10-bit log, etc.? Again, what are some combinations I'm likely to find?

 

What's DPX stand for?

 

I've been trying to research this on my own, but to no avail, and I see these terms pop up more and more; I feel woefully out of my league when they do. What do these numbers mean for production AND for post? Also, if anyone can point me to a definitive resource to learn from, that would be wonderful.

 

Thanks so much for any help and I'm glad to finally be "official".

 

Erin

Link to comment
Share on other sites

>>What does it mean when I read (in regards to video) 4:4:4, or 4:2:2, or 4:1:1,

 

Color subsampling.

 

I found this post on another forum; I don't take credit for the following explanation, but I find it the perfect answer to this question:

 

4:2:0 and 4:1:1 [and 4:4:4] refer to the sampling rate of the color signal. DV is a component signal consisting of a luminance (Y) component and two color-difference components, (B-Y) and (R-Y). The three numbers refer to the sampling scheme used for each of these components. Each number is a multiplier applied to a base frequency of 3.375 MHz.

 

 

Examples:

 

4:1:1 means that the sampling rates for the Y, R-Y and B-Y signals are 13.5 MHz : 3.375 MHz : 3.375 MHz respectively.

 

4:2:0 means that the sampling rates for the Y, R-Y and B-Y signals are 13.5 MHz : 6.75 MHz : 0 MHz respectively.

 

In professional production studios, another scheme, 4:2:2, is the standard. It gives the best compromise between bandwidth and system cost for professional use.

 

The sampling rate basically defines the maximum technically possible resolution or bandwidth of a digital system. The maximum bandwidth cannot exceed HALF the sampling frequency; this is known as Shannon's sampling theorem. Therefore the bandwidth of a 4:2:0 luminance signal is half of 13.5 MHz = 6.75 MHz. Due to the use of real-world filters and other limitations of the circuits used, the actual bandwidth will be more like 5.7 MHz in this case. The same calculation made for the color signal yields a color bandwidth of around 2.5 MHz in a 4:2:0 system and about 1.3 MHz in a 4:1:1 system.

 

The zero in 4:2:0 actually means that only one color component is recorded for each line of the video signal, with red and blue recorded on alternate lines in a field, i.e. line 1 = only blue, line 3 = only red, etc. The color signals transmitted in each line are derived from the existing (usually 4:2:2) color information by averaging the color signal of two consecutive lines in a field. This means the vertical color resolution of a 4:2:0 signal is only half that of the luminance resolution. If a 4:2:0 signal is copied in the uncompressed domain, i.e. over SDI or analog, the vertical resolution is reduced further with every generation. This loss can be avoided by copying in the compressed domain using IEEE 1394 or SDTI.

 

In a 4:1:1 system the color resolution is only one quarter of the luminance signal's resolution, which is about the same performance as an encoded PAL signal. 4:1:1 can, however, be copied over uncompressed signal paths like SDI or analog several times without any further degradation.

 

DV equipment in the NTSC world uses 4:1:1 sampling for consumer DV, DVCAM (Sony), and DVCPRO (Panasonic) alike.

 

In the PAL world, consumer DV and DVCAM (Sony) use 4:2:0, while DVCPRO (Panasonic) uses 4:1:1.

 

An important point to note is that if the two schemes 4:2:0 and 4:1:1 are cascaded, the result will be 4:1:0, the worst possible combination.

 

In summary, the higher the numbers, the more color information is contained in the signal. With lower numbers, the missing information has to be recreated by interpolation.

4:2:0 means that the sampling rates for the Y, R-Y and B-Y signals are 13.5 MHz : 6.75 MHz : 0 MHz respectively.

This is incorrect. You couldn't use 0 Hz for sampling B-Y. That would be the same thing as discarding the blue record entirely. What really happened here is a kind of notation mutation in the transition from analog to digital.

 

In the analog line scanning world of the 1950s to 1970s, sampling color differences with half the resolution given to luminance gave rise to the 4:2:2 notation. This works because the human visual system needs only about half as much resolution for color as it does for luminance. This was well known during the development of NTSC-2 color, and is what made it possible to do color with just a single subcarrier shoehorned into the old B&W system.

 

In the digital world, you can now reduce chroma resolution both horizontally and vertically to half the luminance resolution. Because in the old line scanning notation, both color differences were always the same, the notation changed to substituting a zero for the B-Y to indicate reducing chroma resolution both horizontally and vertically.

 

Today 4:2:2 means that luminance pixels are grouped in pairs horizontally, and for each pair only the average color information is kept. 4:2:0 means that for groups of four luminance pixels, arranged two high by two wide, only one average set of color values is used.
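That pixel grouping can be sketched in a few lines of Python. This is purely illustrative - the pixel values are made up, and real codecs use filtered resampling rather than plain averaging - but it shows exactly which samples survive:

```python
# Illustrative sketch of 4:2:2 and 4:2:0 chroma grouping.
# Each pixel is a (Y, Cb, Cr) tuple; subsampling keeps every Y sample
# but averages chroma over horizontal pairs (4:2:2) or 2x2 blocks (4:2:0).

def subsample_422(row):
    """Keep all Y samples; average Cb/Cr over each horizontal pair."""
    ys = [p[0] for p in row]
    chroma = []
    for i in range(0, len(row), 2):
        pair = row[i:i + 2]
        cb = sum(p[1] for p in pair) / len(pair)
        cr = sum(p[2] for p in pair) / len(pair)
        chroma.append((cb, cr))
    return ys, chroma

def subsample_420(top, bottom):
    """Keep all Y samples; average Cb/Cr over each 2x2 block of two rows."""
    ys = [p[0] for p in top + bottom]
    chroma = []
    for i in range(0, len(top), 2):
        block = top[i:i + 2] + bottom[i:i + 2]
        cb = sum(p[1] for p in block) / len(block)
        cr = sum(p[2] for p in block) / len(block)
        chroma.append((cb, cr))
    return ys, chroma

# Two rows of four pixels, values made up for illustration:
row1 = [(16, 128, 128), (235, 120, 140), (100, 90, 90), (110, 100, 100)]
row2 = [(20, 130, 126), (230, 118, 142), (105, 92, 88), (115, 98, 102)]

y422, c422 = subsample_422(row1)        # 4 Y samples, 2 chroma pairs
y420, c420 = subsample_420(row1, row2)  # 8 Y samples, 2 chroma pairs
```

Per 2x2 block, 4:2:0 ends up keeping six samples (four Y plus one Cb and one Cr) versus twelve for 4:4:4.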

 

 

-- J.S.


Hi,

 

> What's linear versus log;

 

That's a mathematics question more than a filmmaking one. In an image file with log data, the pixel value is proportional to the logarithm of, broadly speaking, the brightness it's intended to represent. (If logarithms are unfamiliar, it's worth looking them up.) Broadly, this means that progressively brighter regions are represented by ever smaller increases in pixel value. This spreads colour resolution much more evenly across the exposure range than a linear encoding, which lavishes code values on the highlights and starves the shadows, and it maintains the desirable gentle rolloff as the maximum represented brightness is approached.

 

This is particularly important when you're using Cineon-style imaging, which rather arbitrarily defines a code value around 95 as reference black (maximum density, the darkest area) and 685 as effective white. This means a lot of detail sits over the white point - data over white is carried along and can be processed, though it's eventually lost when the image is printed down to display range - and the distribution of data is such that you can get away with a smaller bit depth for similar results. A 10-bit log image is often considered to be of effectively equal colour resolution to a 16-bit linear one.
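To make the "smaller increases per stop" point concrete, here's a toy Python encoder. The 95/685 code points are Cineon's, but the curve itself and the assumed 10-stop range are illustrative only - this is not the real Cineon transfer function, which is defined in terms of printing density:

```python
import math

BLACK, WHITE = 95, 685   # Cineon-style 10-bit code points

def to_log10bit(linear, stops_of_range=10.0):
    """Toy log encode: map linear light (0..1] onto codes BLACK..WHITE,
    giving every photographic stop the same number of code values."""
    if linear <= 0:
        return BLACK
    stops = math.log2(linear)                       # 1.0 -> 0, 0.5 -> -1, ...
    frac = max(0.0, 1.0 + stops / stops_of_range)   # clamp at the bottom
    return round(BLACK + frac * (WHITE - BLACK))

# Each stop gets (685 - 95) / 10 = 59 codes, deep shadows included.
# An 8-bit *linear* encoding would give those same deep stops only a
# code value or two, which is where linear falls apart in the shadows.
codes_in_shadow_stop = to_log10bit(1 / 256) - to_log10bit(1 / 512)
```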

 

> What's DPX stand for?

 

Digital Picture Exchange. It's an extension of the Cineon file format (which was developed by Kodak), standardised by SMPTE, and it supports a range of bit depths and transfer curves. The principal extension it has over Cineon (.cin files) is the inclusion of metadata such as timecode and keykode.
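As a small aside, DPX files are easy to recognise programmatically: the SMPTE spec puts a four-byte magic number at the start of the header. A minimal sniffer in Python (the magic values are from the spec; the function name and return strings are mine):

```python
def sniff_dpx(path):
    """Return 'dpx-be', 'dpx-le', or None based on the 4-byte DPX magic.
    Big-endian DPX files start with b'SDPX'; byte-swapped ones with b'XPDS'."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"SDPX":
        return "dpx-be"
    if magic == b"XPDS":
        return "dpx-le"
    return None
```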

 

Phil


Hm, thanks for the correction, Mr. Sprung. After reading what I quoted, something seemed odd about the 0 MHz figure, as, like you said, that would mean there was no blue information at all. Now it makes more sense. ;)

 

The rest of the quote is correct though, right?

 

Oh and to Mr. Pytlak, once again, great links!


The rest of it isn't so much outright wrong as it's just confused and confusing. It seems to be from someone who almost understands this stuff. It uses terms like "bandwidth" in strange ways. It seems to be talking about the Nyquist Theorem, and credits it to Shannon. Strangely enough, it really was Claude Shannon, not Harry Nyquist, who discovered it. But it's called Nyquist's Theorem. I looked at John P's URL from TV Technology, which is a much better source.

 

John is truly the duke of URL's here. I tried Google to find a simple explanation of 4:2:2 and all that, and all I came up with was a bunch of manure:

 

http://amos.shop.com/amos/cc/pcd/5905470/p...37287/ccsyn/260

 

 

 

-- J.S.



I've been facing a strange problem. After scanning on a Spirit 2K, while attempting to assemble the EDL on Fire/Smoke, the EDL is not assembling the edit - it is "refusing" to link the scanned images (DPX TC exists for these images) to the assembled EDL. Different forms of EDLs have been tried, like taking a film-to-PAL option EDL from an Avid 24fps partition, but to no avail yet. Any ideas?


Hi,

 

> After scanning on a Spirit 2K, while attempting to assemble the EDL on

> Fire/Smoke, the EDL is not assembling the edit - it is "refusing" to link the scanned

> images (DPX TC exists for these images) to the assembled EDL...

 

When you say it's "refusing," is the Smoke giving you some error message or is it assembling a bunch of garbage? If it's saying it can't do the conform for some reason then it's presumably an EDL format issue. Otherwise:

 

Check to see where you're getting your various numbers from: DPX embedded information on timecode or keykode, and where that information came from - keykode reader correctly set up, cuts very close to flashed neg where keykode has accumulated incorrectly, punch index correctly recognised as zero timecode, etc.

 

Need more info to comment further. If in doubt conform it on something else to check?

 

Mr. Rodriguez?

 

Phil

I tried Google to find a simple explanation of 4:2:2 and all that, and all I came up with was a bunch of manure:

 

http://amos.shop.com/amos/cc/pcd/5905470/p...37287/ccsyn/260

 

Some film fans might say you found a good description of subsampled digital images. ;)

It's important that we don't give the impression here that subsampling is always a bad thing. It's just a tool that's appropriate for some tasks, but not others.

 

For instance, suppose we have a channel with limited bandwidth (the analog term) or bit rate (the digital term). Through this channel we can send a 1920 x 1080 progressive picture using 4:2:0. Suppose we want to send 4:4:4 instead, but keeping the same frame rate and bit depth. We'll have to give up some resolution to do it:

 

In 4:2:0, for each two by two block of pixels, we have four luminance samples, and one each for the red and blue color differences. That's a total of six. In 4:4:4, the same block would have four samples for red, four for green, and four for blue, for a total of twelve. So we'd only be able to send half as many pixels.

 

1920 x 1080 = 2,073,600 pixels in 4:2:0. But in 4:4:4, we'd only be able to pass 1,036,800 pixels through the channel. At the same aspect ratio, 1358 x 763 = 1,036,154 is about what we'd be able to get.
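The arithmetic is easy to verify; a quick Python check of the sample budget (the 16:9 back-calculation at the end is approximate by construction):

```python
import math

# Samples per 2x2 pixel block:
samples_420 = 4 + 1 + 1   # four Y, one Cb, one Cr
samples_444 = 4 * 3       # four each of R, G, B

pixels_420 = 1920 * 1080                               # fits the channel at 4:2:0
pixels_444 = pixels_420 * samples_420 // samples_444   # same bits at 4:4:4

# Spend that smaller pixel budget at a 16:9 aspect ratio:
width = math.sqrt(pixels_444 * 16 / 9)    # ~1357.6
height = width * 9 / 16                   # ~763.7  -> about 1358 x 763
```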

 

4:2:0 works fine if you're just going to look at the picture, or edit it. The main problem with it is in doing blue or green screen composites. In that case, you're using chroma information -- which has only half the resolution of the luma information -- to make decisions about where the matte line goes. So you get sloppy, fuzzy mattes with subsampled video. It's kinda like using a pair of pliers where you really should have used a socket wrench.

 

Because green makes the biggest contribution to luminance -- over 70%, depending on what system you're using -- sometimes you can get a sharper matte by using a luminance key on stuff that was shot green screen. It doesn't hurt to test both and see which works best.
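The green figure comes straight from the luma weights. A quick check using the published Rec. 709 (HD) and Rec. 601 (SD) coefficients - the helper function is mine, for illustration:

```python
# Luma is a weighted sum of R, G, B; green dominates in both standards.
REC709 = (0.2126, 0.7152, 0.0722)   # HD: green carries ~71.5% of luma
REC601 = (0.299, 0.587, 0.114)      # SD: green carries ~58.7%

def luma(rgb, weights=REC709):
    r, g, b = rgb
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# A saturated green-screen green contributes most of the luma signal,
# which is why a luminance key can sometimes out-resolve a chroma key
# on 4:2:0 or 4:1:1 material:
green_luma_hd = luma((0.0, 1.0, 0.0))           # 0.7152
green_luma_sd = luma((0.0, 1.0, 0.0), REC601)   # 0.587
```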

 

 

 

-- J.S.


Hi,

 

It's also a case of what you're willing to put up with and how much you're willing to pay for it. You can put the Hubble space telescope's imager behind the lens and transport every pixel if you really want to. Practicality steps in.

 

Similarly, people will tell you that you can't do certain things - you can't shoot out 8 bit images to film, you can't broadcast PD-150 stuff, you can't chromakey with miniDV. Actually you can do all these things - and hey, with sufficient care, they might even look acceptable.

 

Phil

...with sufficient care, they might even look acceptable.

 

But will they look GOOD?

 

The "bar is being lowered" on many fronts.

 

I subscribed to digital cable in our area, mostly for the extra channels it offered. Usually, the quality on the digital tier was much worse than the analog signals being carried. I became so frustrated with the poor image quality (mostly due to compression artifacts) that I eventually dropped the digital tier.

 

Even the analog channels often violate good broadcast practice, with white levels and saturated colors going into clip because no one is really watching what goes out the pipe anymore.

 

The image quality, lighting, composition, and editing of many of the "reality shows" and local newsgathering continue to reach new lows, barely rising above amateur home videos.

 

We've all seen cinema presentation quality suffer when xenon lamps are run until they no longer ignite or they explode, basic projector and sound maintenance is neglected, and projection is relegated to minimum-wage "booth ushers" who barely understand how to thread a projector and push the right buttons.

 

Rant over. :(


Hi,

 

On a related subject - I just watched "I, Robot" on a projector with a clearly out-of-sync shutter. There was pronounced vertical blurring on highlights in the bottom third of the frame, particularly visible with white text on black during the opening titles. I was aghast.

 

Phil

You can put the Hubble space telescope's imager behind the lens and transport every pixel if you really want to.

Strangely enough, it's been done. Lockheed-Martin's camera project exists because they had the imaging technology from the KH-12 spy satellites and Hubble (which is basically a KH-12 shooting the reverse).

 

 

 

-- J.S.


What they need is a loop of SMPTE RP-40 test film. It's an excellent thing, it'll show you exactly what's wrong with any projector.

 

 

 

-- J.S.

What they need is a loop of SMPTE RP-40 test film. It's an excellent thing, it'll show you exactly what's wrong with any projector.

 

They also need a "real" projectionist, who knows how to adjust shutter timing, which is an easy 5-minute adjustment on most projectors. Some projectors (e.g., Century) even have a simple knob to adjust timing. White-on-black closing titles are a fine way to see any travel ghosting, although I agree the SMPTE RP 40 test film (35-PA) is the best all-around tool for optimizing the projector.
