The full story behind interlaced footage


Guest Daniel J. Ashley-Smith
I've got myself very confused here. (Nothing new)

 

My original understanding of interlaced was:

 

Two fields are shot every 1/25th of a second.

Each field spans the whole frame, but every other line of resolution is missing.

 

Something like this:

 

Field 1.

 

------------------------------------ (Image)

????????????????????????????????????? (No image)

------------------------------------ (Image)

????????????????????????????????????? (No image)

------------------------------------ (Image)

 

Field 2.

 

???????????????????????????????????? (No image)

------------------------------------ (Image)

???????????????????????????????????? (No image)

------------------------------------ (Image)

???????????????????????????????????? (No image)

 

 

And these two fields are interlaced together to create a whole frame (each field fills in the gaps of the other).
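As a toy sketch of that weaving (plain Python, with strings standing in for lines of pixels; the function name is my own):

```python
# Weave two fields back into one frame: field 1 supplies the even
# (0-based) lines, field 2 the odd lines.
def weave(field1, field2):
    frame = []
    for line1, line2 in zip(field1, field2):
        frame.append(line1)
        frame.append(line2)
    return frame

f1 = ["A0", "A2", "A4"]   # captured at time t
f2 = ["B1", "B3", "B5"]   # captured 1/50 s later
print(weave(f1, f2))      # ['A0', 'B1', 'A2', 'B3', 'A4', 'B5']
```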

 

Now what's got me confused is why the resolution is so different from a progressive scan, i.e. why is 1080i said to have about the same effective resolution as 720p?

 

I mean, both fields together make a whole frame. The only difference is that with interlaced you're capturing one half of the frame 1/50th of a second before the other half, creating a 50fps look. I'd have thought the resolution would be exactly the same.

 

And what exactly does deinterlacing do to the image? I know it lowers the quality. Does it stretch one field down over the other field, so it halves the resolution but leaves you with a progressive scan?

 

If someone could explain this then that would be great. Tnx.


  • Premium Member

Hi,

 

Your understanding of interlaced scan is correct.

 

The reason interlaced material has lower effective resolution is that the cameras have to filter out very sharp vertical detail, or it flickers like crazy. You can see this if you put one of your still photos (which has far higher resolution than the video) into Premiere: see how flickery and ringingly sharp it looks? You have to apply a very small vertical blur (way less than one pixel) to smooth it out, which kills the resolution. My camera has a setting called "super V" which seems, going by what it looks like, to stop it doing this smoothing. That makes the image flicker like crazy, but it does make it more suitable for certain kinds of deinterlacing.
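A rough sketch of that kind of anti-flicker filtering, assuming a simple 3-tap vertical kernel (the function name and kernel weights are my own; real cameras do this optically or in DSP):

```python
# Mild vertical low-pass: mix each line slightly with its neighbours.
# strength=0 means no filtering; larger values tame interline twitter
# at the cost of vertical resolution. Frames are lists of numeric rows.
def vertical_lowpass(frame, strength=0.25):
    n = len(frame)
    out = []
    for i in range(n):
        above = frame[max(i - 1, 0)]          # clamp at the edges
        below = frame[min(i + 1, n - 1)]
        out.append([(1 - 2 * strength) * c + strength * (a + b)
                    for c, a, b in zip(frame[i], above, below)])
    return out

# A one-line-high detail (the worst flicker case) gets knocked down:
print(vertical_lowpass([[0], [100], [0]]))    # [[25.0], [50.0], [25.0]]
```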

 

There are different ways to deinterlace. The most primitive is to simply copy field one into field two - that is, copy the odd lines into the even lines, or vice versa. You get perfect progressive scan, but half the vertical resolution and half the shutter angle.
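That most primitive method, sketched in Python (lists of lines stand in for the image; the naming is mine):

```python
# "Line doubling": throw the second field away and repeat the first
# field's lines in its place. Progressive output, but half the vertical
# resolution, and the second field's moment in time is lost.
def line_double(frame):
    out = list(frame)
    for i in range(1, len(frame), 2):   # every field-2 (odd) line...
        out[i] = frame[i - 1]           # ...becomes a copy of the line above
    return out

print(line_double(["E0", "O1", "E2", "O3"]))   # ['E0', 'E0', 'E2', 'E2']
```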

 

A slightly better way is to interpolate field B from field A - basically, take just the first field (which looks half the height) then do a bicubic or bilinear resample - like Photoshop does - to try and make up the difference. Smoother, but not actually any sharper. There are various kinds of interpolation, some of which look better than others.
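The interpolating version, as a minimal pure-Python sketch (linear rather than bicubic; rows are lists of numbers, and the names are mine):

```python
# Keep field 1 (even lines) and rebuild each odd line as the average of
# its vertical neighbours: smoother than repeating lines, but no sharper.
def interp_deinterlace(frame):
    out = list(frame)
    n = len(frame)
    for i in range(1, n, 2):
        above = frame[i - 1]
        below = frame[i + 1] if i + 1 < n else frame[i - 1]  # clamp bottom
        out[i] = [(a + b) / 2 for a, b in zip(above, below)]
    return out

print(interp_deinterlace([[0], [99], [100], [99]]))  # [[0], [50.0], [100], [100.0]]
```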

 

The best way to do it is to use a smart deinterlacer which will observe the content of the image and intelligently apply interpolation only to those areas with combing. This is an imperfect science, unfortunately, but it means that unmoving areas of the image don't get softened by field blending.
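A very rough sketch of that idea (a per-pixel combing test with an arbitrary threshold; everything here is my own simplification of what real smart deinterlacers do):

```python
# Where a woven line differs sharply from the average of its neighbours
# ("combing" caused by motion), interpolate it away; where the picture
# is static, leave the woven lines untouched so no detail is lost.
def adaptive_deinterlace(frame, threshold=30):
    out = [list(row) for row in frame]
    for i in range(1, len(frame) - 1, 2):   # interior field-2 lines
        for x in range(len(frame[i])):
            neighbours = (frame[i - 1][x] + frame[i + 1][x]) / 2
            if abs(frame[i][x] - neighbours) > threshold:
                out[i][x] = neighbours      # moving area: blend it away
    return out

static = [[100, 100]] * 4
print(adaptive_deinterlace(static) == static)   # True: untouched
combed = [[0, 0], [200, 200], [0, 0], [200, 200]]
print(adaptive_deinterlace(combed)[1])          # [0.0, 0.0]
```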

 

Phil


Think of it this way. The first line of the second field, that is, the second line of the interlaced frame, occurs a whole field period (1/50th of a second) later than the first line of the first field.

 

With non-interlaced, the second line of the frame occurs immediately after the first.

 

It is just as if you have two separate frames, one after the other. A lot of movement can occur during that time.

 

1080i, I don't believe, is equivalent to 720p. I think it would be the same as 540p, or half, because each half of the 1080 frame is displayed one at a time.


  • Premium Member

Hi,

 

> 1080i, I don't believe, is equivalent to 720p. I think it would be the same as

> 540p, or half, because each half of the 1080 frame is displayed one at a time.

 

I think we're talking about antiflicker interpolation and its effect on perceived resolution here. I've heard people say that 1080i has similar perceived resolution to 720p because of this.

 

Phil


  • Premium Member
1080i, I don't believe, is equivalent to 720p.  I think it would be the same as 540p, or half, because each half of the 1080 frame is displayed one at a time.

If you filter the vertical resolution down to exactly half, 0.5 times that of progressive, 1080i would indeed be equal to 540p. That would also completely eliminate line flicker or small area flicker or twitter, whatever you want to call it. If you try to present full 1080p resolution in 1080i, 1.0 times the resolution of progressive, you get absolutely intolerable line flicker.

 

The thing that made interlace a good idea back in the analog broadcasting to CRT's days is that you can use some number between 0.5 and 1.0 and get a little more sharpness at the cost of a little more flicker. The tradeoff is usually around 0.60 to 0.65, and commonly called the "Kell" factor, though Ray Kell didn't invent it, and didn't like having his name attached to it. Subjectively, you gain more in sharpness than you lose in flicker with a Kell factor in that range. BTW, 1080 x 0.65 = 702, just a tad bit less than 720p.
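The arithmetic, spelled out (the 0.65 figure is the rough range quoted above, not an exact standard):

```python
# Effective vertical resolution of 1080i under different amounts of
# anti-flicker filtering (the "Kell"-style factor discussed above).
lines = 1080
for factor in (0.5, 0.65, 1.0):
    print(f"factor {factor}: ~{lines * factor:.0f} effective lines")
# factor 0.5:  ~540  (flicker-free, equal to 540p)
# factor 0.65: ~702  (the usual tradeoff, just under 720p)
# factor 1.0:  ~1080 (intolerable line flicker)
```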

 

In the analog days, interlace got you 65% of the sharpness using 50% of the bandwidth you would have needed for progressive. In the digital era, interlace is a mistake. It really effs up digital compression. Interlace is a very crude form of lossy compression. Digital methods such as Discrete Cosine Transform can do a better job, far more elegant and powerful than interlace.

 

 

 

-- J.S.


  • 3 weeks later...
Digital methods such as Discrete Cosine Transform can do a better job, far more elegant and powerful than interlace.

 

Yes, people say MPEG compression is about 30% more efficient on progressive material.

 

Also, motion estimation (mainly camera or global movement in MPEG2) is easier.

 

The main "problem" of interlace is that the missing information is unrecoverable, because the two fields have different time codes. There is no way (other than interpolation) to recover the odd lines at an even field's moment in time. The odd lines at that particular time code have just been lost forever :(

 

(my mother tongue is not English :)


  • Premium Member

Hi,

 

DCT and interlaced scan are not really comparable as compression methods. DCT actually reduces the amount of information you have to send overall; interlace simply smooths out the flow of data so that it's more constant (less "bursty", in computer terms), but doesn't inherently remove anything you wouldn't also lose doing exactly the same thing progressively.

 

The reason this was done was to make the flow of information better match the rate at which it could be painted onto the screen, so you don't end up with a long dark period, lots of flicker, visible rollerblinding, or other problems. It's not compression.

 

The low-bandwidth UV subcarrier is a form of compression, but that's more akin (directly akin, in fact) to colour subsampling in YUV video.

 

Phil


The reason this was done was to make the flow of information better match the rate at which it could be painted onto the screen, so you don't end up with a long dark period, lots of flicker, visible rollerblinding, or other problems. It's not compression.

 

Yes. Interlace is not compression. Interlace is a kind of "line subsampling".

 

time

------Tfield1------Tfield0------Tfield1------Tfield0------Tfield1------Tfield0--->

 

pixels

----odd lines---even lines---odd lines---even lines---odd lines---even lines

 

What I meant is that the even lines of the image at time Tfield1 are lost forever.

You may have the spatial information of the even lines only at the previous (or the next) Tfield0 time, but that is 20 ms before (or after). Hence the interpolation when we deinterlace.

 

(For NTSC the field order is reversed and the field period is about 16.7 ms.)

 

Interlace was a very clever bandwidth reduction trick, because the eye does not care too much about the missing lines :)

 

Will interlace survive? Is 1080i better than 720p? Maybe marketing has more influence on the subject than our technical opinions...


  • 3 weeks later...
  • Premium Member
If you filter the vertical resolution down to exactly half, 0.5 times that of progressive, 1080i would indeed be equal to 540p.  That would also completely eliminate line flicker or small area flicker or twitter, whatever you want to call it.  If you try to present full 1080p resolution in 1080i, 1.0 times the resolution of progressive, you get absolutely intolerable line flicker. 

 

The thing that made interlace a good idea back in the analog broadcasting to CRT's days is that you can use some number between 0.5 and 1.0 and get a little more sharpness at the cost of a little more flicker.  The tradeoff is usually around 0.60 to 0.65, and commonly called the "Kell" factor, though Ray Kell didn't invent it, and didn't like having his name attached to it.  Subjectively, you gain more in sharpness than you lose in flicker with a Kell factor in that range.  BTW, 1080 x 0.65 = 702, just a tad bit less than 720p.

 

In the analog days, interlace got you 65% of the sharpness using 50% of the bandwidth you would have needed for progressive.  In the digital era, interlace is a mistake.  It really effs up digital compression.  Interlace is a very crude form of lossy compression.  Digital methods such as Discrete Cosine Transform can do a better job, far more elegant and powerful than interlace.

-- J.S.

 

Just because digital can't handle certain kinds of analog doesn't mean that analog is the problem.


Just because digital can't handle certain kinds of analog doesn't mean that analog is the problem.

 

You are right!

But transmission and storage (i.e. compression), capture (high-quality material from film or progressive cameras), and displays are all progressive :rolleyes:

Also, deinterlacing is tricky, especially in a "low cost" TV set.

Practical reasons are pushy!


  • Premium Member
Just because digital can't handle certain kinds of analog doesn't mean that analog is the problem.

OK, I can agree with that. I'm not saying that analog is somehow wrong. I am saying that interlace is appropriate in analog, but does more harm than good in digital.

 

Ride a horse if you like, but don't put a saddle on your car -- no, go ahead, put a saddle on your car if you want to. Just accept that the vast majority will choose not to saddle their cars, or saddle digital images with interlace.

 

 

 

-- J.S.


  • Premium Member
In the analog days, interlace got you 65% of the sharpness using 50% of the bandwidth you would have needed for progressive.  In the digital era, interlace is a mistake.  It really effs up digital compression.  Interlace is a very crude form of lossy compression.  Digital methods such as Discrete Cosine Transform can do a better job, far more elegant and powerful than interlace.

-- J.S.

 

Hi Phil, this is the exact quote I was referring to.

 

Aren't we just a microstep away from saying that stupid sound waves f'up digital compression?


  • Premium Member

Hi,

 

No. Personally I'd disagree with the idea that interlaced images are a problem for DCT compression. The two main ways of dealing with it (either compress as two half-height images, or do what DV does) are simple enough workarounds and work nicely. Only on a couple of occasions have I had issues with DV's difference-estimation setup.

 

Yes, progressive is more straightforward, but interlace is hardly a huge issue.

 

What d'you mean by "stupid sound waves"?

 

Phil


Personally I'd disagree with the idea that interlaced images are a problem for DCT compression. The two main ways of dealing with it (either compress as two half-height images, or do what DV does)...

 

I don't know what DV does (please tell me in a few words), but fields are not as friendly as progressive frames, because they are subsampled with no filtering, and DCT assumes signal continuity.

 

We would get better results by shooting with a "half vertical resolution" progressive camera and up-scaling the image after decompression B) Maybe this is what DV does?

 

Of course these techniques were not available in the old days of interlace, but now we could choose this way instead of interlacing material which is very difficult to deinterlace properly :rolleyes: (scaling is easier than deinterlacing).

 

Sincerely.

David.


  • Premium Member
What d'you mean by "stupid sound waves"?

 

Phil

 

I was joking.

 

I just get the sense that if something doesn't transfer well to digital, it's never digital's fault. Many of the lower-end digital cameras have inferior audio inputs, yet I read people dissing the mikes rather than the camera.


  • Premium Member

Hi,

 

> I just get the sense that if something doesn't transfer well to digital it's

> never digital's fault.

 

That may or may not be the case, but you haven't presented any very good examples.

 

> Many of the lower end digital cameras have inferior audio inputs yet I

> read people dissing the mikes rather than the camera.

 

You won't find anyone who knows what they're doing doing that. You've also read a lot of people say that a PD-150 is "broadcast quality" (whatever the hell that is), and nobody here is making that contention. This is all a straw man.

 

Also:

 

> I don't know what DV (please tell me in a few words) do

 

It looks at the degree of difference between alternate rows of every macroblock, and makes a qualitative decision as to whether to compress it as an 8x8 bitmap or two 4x8 bitmaps. This means that the algorithm is directly applicable to progressive-scan images with no alteration.
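A toy sketch of that decision (the threshold and the names are mine; real DV uses a more careful metric before choosing between one 8x8 block and two 4x8 field blocks):

```python
# For an 8x8 macroblock, measure how different alternate rows are. A
# large difference suggests interlaced motion ("combing"), so the block
# is better compressed as two 4x8 field blocks; otherwise as one 8x8.
def choose_dct_mode(block, threshold=100):
    diff = 0
    for i in range(0, 7, 2):
        for x in range(8):
            diff += abs(block[i][x] - block[i + 1][x])
    return "2x(4x8)" if diff > threshold else "8x8"

static = [[10] * 8 for _ in range(8)]
print(choose_dct_mode(static))    # 8x8 - rows agree, no combing
combed = [[255] * 8 if i % 2 == 0 else [0] * 8 for i in range(8)]
print(choose_dct_mode(combed))    # 2x(4x8) - strong row-to-row difference
```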

 

> but fields are not as friendly as progressive frames, because they are

> subsampled with no filtering, and DCT assumes signal continuity.

 

Yes, you could make an argument that the greater geometric offset between consecutive lines of a field would create larger DCT coefficients than the smaller offset between consecutive lines of a frame. It would be interesting to see how the zigzag scanning pattern used by most DCT implementations is affected by this (theoretically its efficiency should be reduced by 50%), and how other large offsets introduced by DSP, such as edge enhancement, compare to this caveat.

 

Having said that, the main reason people are clamouring for progressive scan is not that they think it will cause the images to compress better!

 

> We would get better results by sooting with a "half vertical resolution"

> progressive camera...and up-scale the image after decompression

> maybe this is what dv does ?

 

No, that's not what DV does, and it wouldn't get you better pictures! If you particularly want to, you can just drop a field and see; that's how primitive deinterlacers do it!

 

Phil


  • Premium Member
Hi,

 

> I just get the sense that if something doesn't transfer well to digital it's

> never digital's fault.

 

That may or may not be the case, but you haven't presented any very good examples.

 

 

 

 

The very next thing I posted had to do with audio inputs and how many people diss the mikes instead of the cameras on digital camcorders. Just because you know the truth does not mean the majority of shooters out there know it; hence the statement "if something doesn't transfer well to digital, it's never digital's fault", because the majority of those who shoot digital believe this.

 

In addition, I was using the interlace comment several posts earlier as another example.

 

Then we can go on to the ridiculous concept that most NLE systems don't offer true EE pathway presets, an absolute precursor to any type of substantive and reliable method of quality control.

 

I'm still waiting to see someone come up with actual individualized keyboard dials for basic video controls such as chroma, brightness, set-up, and hue.


  • Premium Member

Hi,

 

> I'm still waiting to see someone come up with actual individualized keyboard dials for

> basic video controls such as chroma, brightness, set-up, and hue.

 

Every serious NLE has these things. You want keyboard controls for them? To have keyboard controls for every function in the application you'd have a keyboard the size of the desk (See: Da Vinci, Pogle.)

 

Anyway, you're essentially saying "Within this very small subset of things I have identified that don't have feature X, feature X is missing." Well, great!

 

Phil


Having said that, the main reason people are clamouring for progressive scan is not that they think it will cause the images to compress better!

 

People on hdforum argue, citing the Kell factor, that 1080i is better (for Europe) than 720p.

I disagree :P

 

it wouldn't get you better pictures! If you particularly want to you can just drop a field and see, that's how primitive deinterlacers do it!

 

[with PAL numbers]

The test is quite easy with VirtualDub :

-take a progressive video (Vp)

-build an interlaced video (Vi)

-build a resampled progressive video (Vr)

 

Now, compare :

-Vr scaled properly to recover correct aspect ratio

-Vi deinterlaced properly (using DScaler)

 

Vr is better.
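The same experiment can be sketched in pure Python (a toy moving bar instead of real video; `mse` and the scene generator are my own stand-ins for the VirtualDub comparison):

```python
# Compare: Vi = woven interlaced frame of a moving scene, versus
# Vr = half-vertical-resolution progressive frame scaled back up,
# both against the "true" progressive frame Vp.
def mse(a, b):
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def scene(shift, h=8, w=16):
    """A bright 4-pixel bar at horizontal position `shift`."""
    return [[255 if shift <= x < shift + 4 else 0 for x in range(w)]
            for _ in range(h)]

vp = scene(4)                # the true frame at field-1 time
later = scene(8)             # the bar has moved by field-2 time
vi = [vp[i] if i % 2 == 0 else later[i] for i in range(8)]   # weave
vr = [vp[i - i % 2] for i in range(8)]                       # line-repeat

print(mse(vp, vi) > mse(vp, vr))   # True: motion combing hurts Vi
```

This toy scene has no vertical detail, so Vr loses nothing at all here; the point is only that the two fields of Vi describe different instants, which no amount of weaving can undo.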

 

I have been looking closely at slow-motion machines (like EVS), and building full frames from interlaced material is really difficult, even with smart image processing trying to overcome the prehistoric subsampling of interlacing (and even with a broadcast camera, whose DSPs work hard to make the material acceptable).

 

This is only my European developer's opinion.

Sincerely.


  • Premium Member

Look at it another way: If Garcia had kept his mouth shut back in 1928 and not patented interlace, and if nobody else had thought of it before now, is there a ghost of a chance that anybody could sell the idea in a progressive digital world?

 

 

 

-- J.S.


  • Premium Member
Hi,

 

> I'm still waiting to see someone come up with actual individualized keyboard dials for

> basic video controls such as chroma, brightness, set-up, and hue.

 

Every serious NLE has these things. You want keyboard controls for them? To have keyboard controls for every function in the application you'd have a keyboard the size of the desk (See: Da Vinci, Pogle.)

 

Anyway, you're essentially saying "Within this very small subset of things I have identified that don't have feature X, feature X is missing." Well, great!

 

Phil

 

Wow, have you opened up a can of worms! A small subset? The very basic building blocks of all video productions are your four basic video levels, known as brightness, color, set-up, and hue. A cinematographer's primary value to a production is the optimization of those four video values while shooting.

 

It's the NLE systems that then come into play and scream out that EVERY shot must be altered, composited, bounced, floated, warped, digitally zoomed in etc...

 

All I'm asking for are four lousy video knobs on the actual keyboard that would function in real time without having to pull down a dang on-screen menu. And two more knobs for the source audio with bass and treble control. Is that really too much to ask for?

 

As for the desk-size keyboard: as monitors keep increasing in screen size and double monitors are no longer an option but a necessity, we could call that a monitor version of a desk-size keyboard, no?

 

I doubt the keyboard would be the size of a desk, surely it would not have to be any larger than a Panasonic MX-50.


  • Premium Member
People on hdforum argue, citing the Kell factor, that 1080i is better (for Europe) than 720p.

I disagree :P

[with PAL numbers]

The test is quite easy with VirtualDub :

-take a progressive video (Vp)

-build an interlaced video (Vi)

-build a resampled progressive video (Vr)

 

Now, compare :

-Vr scaled properly to recover correct aspect ratio

-Vi deinterlaced properly (using DScaler)

 

Vr is better.

 

I have been looking closely at slow-motion machines (like EVS), and building full frames from interlaced material is really difficult, even with smart image processing trying to overcome the prehistoric subsampling of interlacing (and even with a broadcast camera, whose DSPs work hard to make the material acceptable).

 

This is only my European developer's opinion.

Sincerely.

 

 

But you've left out a separate issue. Film transferred to interlaced analog can be transferred at whatever speed is necessary; the end result is that one doesn't have to change the speed of the analog video at all, one just does it during the film transfer session.

 

FILM RULES! Film treats all video formats equally.

