
Question... on the use of 'Oversampling'?



Hi guys, kind of new here, how are you all doing?

 

I've been meaning to ask this and I don't know if this is the right place to do so (or even the right forum), but I was wondering: what is the purpose of 'Oversampling'?

 

I'm talking in terms of images, which I'm sure relates to film (analogue or digital). I have my theories on why, which I sometimes explain to people: having more information to work with, reducing the amount of aliasing, reducing any artefacts introduced through image processing (please correct me if I am wrong on this!), but I would really like to hear it from you guys.

 

What are the benefits? Any disadvantages? When does it come into play? Is it the same as 'Upsampling', since the two seem so similar? :D

 

I ask this also because I intend to use this technique for my final major project at university, and I would love to know more about it, especially from the pros ;)

 

I really hope you lot don't mind me asking, and I look forward to hearing back from you all! :)


Beyond what you already know, I can only help with your definition of upsampling. Upsampling is really the opposite of oversampling, because you are taking a smaller image and extrapolating the information to end up with a larger image, while oversampling is starting large and ending smaller. A Blu-ray player up-converting a DVD is a good example of upsampling. Depending on the quality of the original image and the processor used to upsample, the results can look great, or not so much.


But taking an image from larger to smaller is actually downsampling. While oversampling involves that same process, how does it differ from upsampling?

 

This is what I am trying to understand, because upsampling directly refers to interpolating and extrapolating, while oversampling still deals with data that is larger than the original target. For example, if one were to say 35mm is 4K but decides to scan at 8K for image processing, no kind of interpolation is being applied there, so how does one describe that?

 

I hope everyone understands what I'm trying to say here! :D


Upsampling is increasing the sampling rate of a signal

Downsampling is reducing the sampling rate of a signal

Oversampling is sampling a signal with a frequency significantly higher than the Nyquist rate

 

I would take it that your example would be upsampling, unless you sampled the original 35mm film at 8k.
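For what it's worth, those three definitions can be sketched numerically. This is a minimal NumPy toy (the rates and lengths are arbitrary choices of mine, not any real imaging pipeline): upsampling interpolates new samples between existing ones, downsampling throws samples away, and oversampling is simply choosing the capture rate higher than you need in the first place.

```python
import numpy as np

fs = 4000                         # capture rate, Hz (arbitrary)
n = 40
t = np.arange(n) / fs             # 40 sample instants
x = np.sin(2 * np.pi * 100 * t)   # a 100 Hz "signal"

# Upsampling: estimate samples between the ones we have.
# No new information is created -- it's interpolation.
t_up = np.arange(2 * n) / (2 * fs)
x_up = np.interp(t_up, t, x)      # 80 samples

# Downsampling: discard samples (a real system would
# low-pass filter first to avoid aliasing).
x_down = x[::2]                   # 20 samples

# "Oversampling" isn't an operation on x at all: it's the choice
# of fs = 4000 for a 100 Hz signal, far above the Nyquist rate of 200 Hz.
print(len(x), len(x_up), len(x_down))   # 40 80 20
```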


Yeah, it's not the clearest terminology - at least to me... I read on Wikipedia that upsampling 'increases the sample rate', but the way I figure it, the signal was sampled already; you are really just interpolating the data and playing it back as if it were sampled at a higher rate (or not, in the case of slow-mo, stretched imagery, etc...).

 

Oversampling I would hesitate to call the 'opposite' of either up- or downsampling - it's more of an idea or concept, as opposed to a process or algorithm that can be implemented directly in code, as up- and downsampling can be.

 

If you take a photo at a resolution much higher than you'll ever need, you're effectively oversampling. Why would you do it? All the reasons you've outlined yourself. Why wouldn't you? Extra storage and processing costs, etc. But the process of making your data adhere to a common standard among other sets of data - be it a frame rate, pixel dimensions and/or bit depth - may well require up- or (hopefully) downsampling. In the case of oversampled data it will require downsampling, since that is what defined the data as oversampled in the first place.

 

As you can imagine, oversampling can happen by accident and be justified in hindsight if you really want :rolleyes:

 

I wouldn't think too much on it... or maybe a better way to think about it is that you can trust that other people will have differing interpretations - quite a few of them inconsistent with each other, and quite a few inconsistent with themselves :lol:


Oversampling I would hesitate to call the 'opposite' of either up- or downsampling - it's more of an idea or concept, as opposed to a process or algorithm that can be implemented directly in code, as up- and downsampling can be.

 

You put it much better than I did. What I meant is that upsampling and downsampling are opposites of each other (obviously), while oversampling is sampling higher than necessary because you plan on downsampling later. This was done for the 2012 Lawrence of Arabia restoration, which was scanned at 8K but mastered in 4K.


This was done for the 2012 Lawrence of Arabia restoration, which was scanned at 8K but mastered in 4K.

 

This is EXACTLY what I am referring to! The thing with oversampling, I guess, is that you are 'generating' more data than is theoretically there. In our visual or audio cases, you are 'capturing' more than is probably there. This 'capturing' term can be very ambiguous, though, and I'm having a hard time putting it into better words... or really understanding it.

 

Like...'Nyquist rate'???


Looking at your average TV programme on a standard-definition television, the productions shot in HD tend to have sharper-looking images than those shot in SD. This is a form of oversampling, although it's not planned as such by the producers.

 

Special effects companies did 4K scans of 35mm film neg (they discovered that 2K scans weren't extracting all the information from the 35mm neg), so 8K sounds reasonable for 65mm for a 4K master - plus it may allow for a possible 8K release in the future.


This is exactly true, and I'm aware of all of it. Similarly, this kind of relates back to optical effects, where everything originated on 70mm and was then printed back down to 35mm for the master.

 

And also (still relevant), I love the fact that some of these old shows shot on (Super) 16mm are being transferred at much higher resolutions only to be released on Blu-ray - the images are stunning; it brings new life to them and really shows the power of film.

 

This whole oversampling business, whenever I try researching it online, always comes back to the audio meaning and definition. Are there any articles or books on the imaging side of this topic? The closest I've come across is the Nokia Lumia 1020: it uses (I don't know if it's true) a 41MP sensor to capture, and the result is then downsampled to the final desired image.
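If it helps, the trick usually described for such cameras - averaging blocks of captured pixels down to the delivery size, often called 'pixel binning' (I can't vouch for Nokia's actual processing; this is a generic sketch) - looks roughly like this in NumPy:

```python
import numpy as np

def bin_down(img, factor=2):
    """Downsample by averaging factor x factor blocks ('pixel binning').
    Averaging also suppresses per-pixel noise -- one practical payoff
    of capturing more pixels than you deliver."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A noisy 8x8 "sensor readout" around a flat grey level of 100.
rng = np.random.default_rng(0)
img = 100 + rng.normal(0, 10, size=(8, 8))

small = bin_down(img, 2)          # 4x4 result
print(img.std(), small.std())     # noise drops, roughly by half
```

Each output pixel averages four captured ones, so random sensor noise falls by about a factor of two while real (low-frequency) detail survives.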


 

This is EXACTLY what I am referring to! The thing with oversampling, I guess, is that you are 'generating' more data than is theoretically there. In our visual or audio cases, you are 'capturing' more than is probably there. This 'capturing' term can be very ambiguous, though, and I'm having a hard time putting it into better words... or really understanding it.

 

Like...'Nyquist rate'???

 

In terms of data, you first need to decide what is meaningful to you - if film grain is meaningful to you, then 8K or higher could make sense, especially in the larger formats - heck, you can scan a single grain at 8K if you really want...

 

[Image: tmax1_resize.jpg (T-Max film grain), source: http://www.optics.rochester.edu/workgroups/cml/opt307/spr04/jidong/]

 

Please, please, let's not start a film vs. digital debate here - I'm just bringing this up to provide some context that we can hopefully all just nod along to, or that maybe prompts some more research/thought, and then we each go our merry way - OK??

 

Anyhoo, film grain for some (and I can certainly play the part when prompted by a digital zealot) is the image... And following from that, it imparts a quality that digital cannot. So for some, 8K when scanning larger formats makes a lot of sense. If you disagree, then fine, good for you - don't do it :)

 

Not quite sure what you mean by 'generating'? You're sampling, at whatever rate - a sample is a sample. Which samples do you define as generated, and which aren't? What is the limiting factor? Is it something subjective, perhaps? ;)

 

Nyquist rate - yeah, the Wikipedia article isn't great; it reads like it's been written by people trying to out-clever the last contributor. I get it myself, but I really can't explain it in words alone - actually, not so much explain it as show the same plots of samples I saw when the penny (properly) dropped for me - I'll try to hunt them down... If I recall, it's actually aliasing - the outcome of too low a sampling frequency - that is more illuminating:

 

[Images: Aliasing.gif, figure1_20091202123113.JPG - sample plots illustrating aliasing from undersampling]




A major advantage of over-sampling is that it allows you to substitute a software-defined electronic low-pass filter for the optical low-pass filter used by all digital cameras.

 

Contrary to what is commonly asserted by a disturbingly large number of so-called "experts" on this and other forums, the maximum number of lines of image resolution that can be recovered from a discrete-photosite sensor (that is, one made up of a large array of photodiodes) is exactly half the number of active pixels.

 

That is, for example, a "full HD" CCD sensor with 1,080 rows of 1,920 pixels can only ever resolve 960 vertical lines horizontally and 540 lines vertically. ("Lines" in this case means white lines on a black background; for example, 960 black lines on a white background = "1080 lines".)

 

A so-called "4K" sensor can only resolve 2,000 lines; the terms 4K and 4,000 lines are NOT interchangeable.

 

It is utterly impossible for any sampling system to meaningfully record any data past half the pixel count. This is known as the "Nyquist Limit" after the engineer Harry Nyquist, who was a pioneer of sampling theory.

 

If you do attempt to sample past the Nyquist limit, the result is meaningless noise, which conveys no useful picture information and forces the viewer's brain to work harder "editing out" the spurious information. It also makes more work for any subsequent video compression system, and loads the resulting file with irrelevant random data that incurs a considerable overhead because it can't be easily compressed.
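A quick numeric illustration of why post-Nyquist content is spurious (the rates here are arbitrary, chosen just for the demo): a tone above the Nyquist limit produces exactly the same samples as a lower-frequency alias, so the sampled data cannot tell the two apart.

```python
import numpy as np

fs = 1000                    # sampling rate, Hz; Nyquist limit = 500 Hz
n = np.arange(1000)          # sample indices

# A 900 Hz tone -- well past the 500 Hz Nyquist limit.
x_high = np.sin(2 * np.pi * 900 * n / fs)

# The alias it folds back to: |900 - 1000| = 100 Hz (with a sign flip).
x_alias = np.sin(2 * np.pi * 100 * n / fs)

# The two sample sequences are numerically indistinguishable:
# sin(2*pi*0.9*n) == -sin(2*pi*0.1*n) at every integer n.
print(np.max(np.abs(x_high + x_alias)))   # ~0
```

So once sampled, the 900 Hz "detail" is simply wrong 100 Hz information, which is exactly why it must be filtered out before sampling rather than after.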

 

With an Analog-to-Digital Converter encoding an analog video signal, the solution is to insert an electronic low-pass filter in the analog video line, which removes any video frequencies past the Nyquist limit. It is quite easy and cheap to build a "Brick Wall" filter, which simply means it has a completely flat response right up to its cutoff frequency and then virtually no response after that. That way the entire video signal up to the Nyquist limit is preserved, and the final output has the theoretical maximum amount of detail that can be carried by the number of pixels used.
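The "brick wall" idea is easy to sketch in the digital domain - note this is only a discrete stand-in for the analog circuit described above (an FFT-domain zeroing of bins, with made-up frequencies):

```python
import numpy as np

def brick_wall_lowpass(x, fs, cutoff_hz):
    """Ideal 'brick wall' low-pass: flat below the cutoff, zero above it.
    Implemented by zeroing FFT bins -- an illustration of the concept,
    not how an analog anti-alias filter is actually built."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    X[freqs > cutoff_hz] = 0
    return np.fft.irfft(X, n=len(x))

fs = 1000
n = np.arange(1000)
# 50 Hz wanted signal plus 400 Hz "detail" past our chosen cutoff.
x = (np.sin(2 * np.pi * 50 * n / fs)
     + 0.5 * np.sin(2 * np.pi * 400 * n / fs))

y = brick_wall_lowpass(x, fs, cutoff_hz=200)
# The 400 Hz component is gone; the 50 Hz component survives untouched.
```

The point of the exercise: everything below the cutoff passes with no attenuation at all, which is exactly the response an optical filter cannot deliver.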

 

When it comes to the optical image sampling system used in a digital video camera, an analogous approach has to be taken, since it is quite likely that the lens will resolve fine detail beyond the Nyquist limit, which will result in ugly "artifacts" in the final image.

 

In this case an "Optical Low-Pass Filter" has to be used. This is different from an ordinary diffusion filter in that (ideally at any rate) the filtering effect should kick in abruptly at the Nyquist point, and below that point it should have no visible effect. So if for example you were zooming back from a test chart made up of fine vertical lines, the lines would be seen to be getting closer and closer together, and then abruptly "snap" to grey.

 

The problem is, there is no known technology that can produce an optical low-pass filter with anything like the performance of an electronic low-pass filter.

You certainly can't get anything like a "Brick Wall" response. The cutoff is always going to be a fairly long and gentle slope, rather than the desired "Cliff" edge cutoff.

 

This presents a severe design problem. If you want to ensure that all post-Nyquist detail is scrubbed from the optical image, the cutoff slope is going to have to start a long way back in the pre-Nyquist region, which results in a softer image. Although you can alleviate this by high-frequency boosting of the sampled signal, that will also bring up the noise, effectively lowering the signal-to-noise ratio.

 

If you move the cutoff point further out, the picture will be sharper, but you then run the risk of adding artifacts to the sampled image. All modern camera designs are a compromise between native image sharpness and artifact generation.

 

However, you can overcome all these problems (and some others) by over-sampling the image.

 

For example, if your final product is going to be 1920 x 1080 HD, you might design a camera sensor with, say, 3,000 x 1,500 pixels. That would necessitate an optical low-pass filter with a cutoff of about 1,500 lines (horizontally), rather than the 960 required for 1920 pixels.

 

It's not hard to design an optical filter which is flat to about 960 lines and cuts off almost completely at about 1,500 lines.

 

As far as being a 3,000 x 1,500 pixel image goes, because of the characteristics of the optical low-pass filter it wouldn't look all that much sharper than "full" 1920 x 1080. The difference from a camera with a native 1920 x 1080 sensor is that the response up to 960 lines would now be dead flat, instead of starting to drop off at around, say, 600 lines.

 

For your final product, you can then apply a razor-sharp electronic low-pass filter to slice off everything above 960 lines horizontally and 540 lines vertically. Your 3,000 x 1,500 signal would then contain all the detail it is possible for a 1920 x 1080 signal to contain. After that it can be resampled down to actual 1920 x 1080 with the same image resolution.
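That capture-high / filter-sharply / resample-down pipeline can be sketched in one dimension - a NumPy toy with made-up rates and a factor-of-2 oversample, not any real camera's processing:

```python
import numpy as np

fs_capture = 4000    # "oversampled" capture rate (arbitrary)
fs_target = 2000     # delivery rate; delivery Nyquist = 1000 Hz
n = np.arange(4000)

# Scene: 300 Hz detail we want to keep, plus 1500 Hz detail past
# the delivery Nyquist that the oversampled capture still resolves.
x = (np.sin(2 * np.pi * 300 * n / fs_capture)
     + 0.3 * np.sin(2 * np.pi * 1500 * n / fs_capture))

# 1. Razor-sharp digital low-pass at the delivery Nyquist
#    (ideal FFT-bin zeroing, standing in for the electronic filter).
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs_capture)
X[freqs > fs_target / 2] = 0
x_filtered = np.fft.irfft(X, n=len(x))

# 2. Decimate by 2 down to the delivery rate -- no aliasing,
#    because everything past the new Nyquist was already removed.
x_out = x_filtered[::2]
```

The output carries every frequency a 2 kHz-rate signal can legitimately carry, with the band-limiting done by software instead of a gentle-sloped optical filter.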

 

Apart from this, most images can be improved with a judicious amount of "Detail Correction" - basically drawing thin black or white lines around regions with an abrupt brightness change, to produce an illusion of greater sharpness. However, this can only ever be a computer's "best guess" at what the original scene actually looked like, and it often guesses wrong.

 

An example is an actor's face with a bit of "five o'clock shadow". On his face there are going to be a number of small black dots from the emerging stubble, but there are also likely to be small skin blemishes such as freckles, which appear as slightly larger and less dark areas.

 

The trouble is, on an inferior imaging system the freckles and bits of stubble are going to wind up looking much the same to the image sensor, and it's entirely possible for the same amount of detail correction to be applied to both - correctly to the stubble, and incorrectly to the freckles.

 

The result is the typically crap picture you get from cheap TV cameras with cheap lenses.

 

However, if you oversample the image, the detail correction software has a much greater chance of correctly working out which parts of the image actually need sharpening and which don't. The final detail correction is still applied at the final output resolution, but it's more like the computer is able to put on a pair of glasses and examine the data more closely.

 


I've been posting a lot of online references, footnotes and bibliographies myself lately... Seems like a farce - no marker is ever going to take a sufficiently long stream of percent signs, ampersands, question marks, colons and random letters and actually type it into a browser to check.

 

I find gleaning information from the internet is like channel surfing - I read some 'fact' on Wikipedia, then I hunt it down to verify it from, say, two other sources. When I reach a possible source I speed-scan it, looking only for text that will verify my fact; once I have done this twice, I take whichever one has the longest, hardest-to-type URL and use it as my reference. Make sure you hand it in on paper.

 

Before the internet, it's not like everything that was written was true... It was all built on, and depended somewhat on, other writings anyway.

 

reference shmeference :P

 

But yeah, sure ;)



Guys...this is all EXTREMELY helpful! I'm really appreciative of that!

 

Is it okay to use you lot as references for my university dissertation (aka thesis)? :D

You can use any of my posts.

Except where I said: for example, 960 black lines on a white background = "1080 lines"

 

which should have been: for example, 960 black lines on a white background = "1920 lines" :D




My niece has just completed a three-year Arts degree, and most of the course notes she was given on multimedia production were complete bunk; I don't know where they get these clueless lecturers from.

 

Anyway, I helped her with a lot of her essays, but I was worried that what I told her might be deemed "wrong", even though I know damned well it was accurate.

 

"No," she said. "All I do is google a couple of sentences, and I invariably find a publication that supports what you said. So I just quote those..."

 

She recently graduated with top marks....

Mind you, none of the stuff she had to write about has much to do with her chosen career path!

