
Question for the experts !!


Yahya Nael


It is easy to keep metaphysicians busy. Ask them:

 

What is the relation of "material embodiments" to their "images"?

What makes material thing A the embodiment of image B?

Two different material things can be the embodiments of the same image. Can one material thing be the embodiment of two images?

Can there be the "zero" embodiment? Can there be a minimal embodiment? A maximal embodiment?

Images have attributes. Of what kinds?

Images have attributes that may or may not be visible in the image's embodiments. "An image is not to be found in its material embodiment". Is it perhaps to be found in the totality of all its material embodiments? Is it to be found some other way, or in no way?

When we "treat the image as an objective reality independent of it's material embodiment" what kind of independence is this?

 

Carl is "proud to be metaphysical". Oh-oh, being metaphysical is not the same as being a metaphysician. Maybe these questions will not keep him busy.

 

Who has slaughtered the word "theory"?

 

And the saddest thing is that it contributes nothing to either the art or science of cinema.


The "material embodiment" is a phrase selected to categorise those aspects of an image described by phrases such as "ink on paper" or "screen pixel" or "silver halide crystal", or "Romain tile", or "data in memory", none of which are specifically indicative of what we otherwise mean by the image proper.

 

The goal here is to make a distinction between the image proper on the one hand, and, on the other, the material components into which the image can be decomposed or from which it can be recomposed.

 

The image refers to that apparition or hallucination (if one prefers such a way of speaking) experienced when looking at what might otherwise be described in terms of: ink on paper, or screen pixels, or silver halide crystals, or Roman tiles, or image data.

 

It's that last example, "image data", which throws into question whether it's really appropriate to describe an image in terms of looking at its material components. How does one look at image data?

 

What one looks at, or rather, what one sees, is an image. An hallucination. An apparition. The experiential side of what we're otherwise discussing.

 

But one does not always see the entire image. Different attributes will be hidden or visible according to how the "material embodiment" is decoded.

 

The image (apparition, hallucination, mental delusion) is that which can be (interestingly) transferred from one material to another. In the process certain attributes of the image can become hidden or suppressed, or re-revealed.

 

Attributes are things like tonal gradation. Sharpness. Colour. All the terms one might use when discussing the qualities of an image. The qualities of that apparition or hallucination, or mental delusion that we otherwise call an image, or a signal.

 

I am proud to be metaphysical. Literally. To be an image, a ghost, a phantom, rather than any particular material embodiment of such.

 

I use the word philosophy with respect to what I'm saying. But then I don't use the word "philosophy" as a term of abuse. I don't abuse myself, even if I might be characterised that way for whatever purpose.

 

Dennis is obviously not content to use the word "philosophy" in the same way. For Dennis it would seem it's a term of abuse.

 

In addition to being a philosopher (good grief - what a sin) I am also slaughtering theory in the process!

 

Even worse is that I am to be characterised as "sad".

 

I don't know how I can turn that last one around. That does hurt. But turn it around one must. There is no reason to accept any of these accusations in terms of their intent.

 

What we're talking about here is simply a difference to be appreciated between the image (mirage, etc.) and the physical materials manipulated in order to make such an image visible, to the extent it can be.

 

The difficulty I can fully appreciate. How to think of the image as independent of the materials through which one mediates it? I don't have any answers for that. Zilch. All I have is the concept of it.

 

As strange as it obviously is.

 

C


But let's return to the question we're trying to address: whether we can use video as a base signal to which grain is subsequently applied.

 

I'll try to avoid any so-called "philosophical" or "metaphysical" concepts here, although I think these are ultimately required - if only as a starting point for more formal elaboration. But let's see if we can skip that step - treating the philosophy/metaphysics as either already done, or as taking us somewhere we don't want to go.

 

Science.

 

I'll reproduce the same image I did before but add a new commentary that might be more palatable to anyone wary of philosophy and metaphysics.

 

QuantisationNoiseEtc.jpg

 

The image labelled "ORIGINAL" in the above is an image that could have begun life as a digital image or a film image. Obviously here, on a computer screen, it's a digital copy of some original image elsewhere. For this discussion it doesn't matter whether it's previous incarnation was a digital image or a film image or a light image (obviously it was a light image as some point in it's history).

 

We treat this "ORIGINAL" image as the base signal. This image is an 8 bit image.

 

The image in the top right is the result of a transfer of the ORIGINAL to a lower bandwidth domain. To be precise the new domain is a 2 bit one. From an 8 bit signal to a 2 bit signal. Each pixel, in this new result, can only be one of four colours. Now this transfer is done by direct quantisation. It is completely analogous to a digital camera that could only capture 2 bits per pixel.

 

The reason for using such an obviously crude 'camera' is simply to make really obvious what is going on here.

 

How do we get around the contouring occurring in the top right image? Well, one way is to increase the number of bits allocated for the top right image. If we increased it to 8 bits we could get a one-to-one transfer of the original.

 

The other way is to dither the image prior to quantisation. The important point here is doing it prior to quantisation. It is too late doing it afterwards. The bottom left image in the above shows the result of dithering the ORIGINAL prior to quantisation.

 

And it is this bottom left image (a dithered version of the original) that is then transferred, to become the bottom right image, using exactly the same "2 bit camera" as that used to create the top right image.

 

Let's compare this pipeline to what happens if we do the dithering after quantisation. If we take the top right image (direct quantisation from the original) and dither that instead, we get the following:

 

DeleuzePost.png

 

As should be evident, the contoured image is still there. The dithering has done nothing!

 

We need to dither a signal, prior to quantisation. We can't achieve what we're after by doing it afterwards.
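For anyone who wants to poke at the numbers rather than the pictures, here is a minimal sketch of the three pipelines just described (direct quantisation, dither-then-quantise, quantise-then-dither). It assumes Python with NumPy and uses a simple gradient and uniform half-step noise as stand-ins for the ORIGINAL image and the dither; it illustrates the principle, not Carl's actual processing.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise(img, bits=2):
    # Round each pixel of a [0, 1] image to one of 2**bits evenly spaced levels.
    levels = 2 ** bits - 1
    return np.round(np.clip(img, 0, 1) * levels) / levels

def dither(img, bits=2):
    # Zero-mean uniform noise of half a quantisation step (one possible dither;
    # film grain is a very different kind of noise, used here only as a stand-in).
    step = 1.0 / (2 ** bits - 1)
    return img + rng.uniform(-0.5, 0.5, img.shape) * step

# A smooth horizontal ramp stands in for the "ORIGINAL" image.
original = np.tile(np.linspace(0.0, 1.0, 512), (64, 1))

direct      = quantise(original)             # "top right": visible contouring
pre_dither  = quantise(dither(original))     # "bottom right": contours broken up
post_dither = np.clip(dither(direct), 0, 1)  # dithering after quantisation

# Compare local (column) averages with the original tone: pre-dithering recovers
# the tone on average, while the other two keep the staircase error.
for name, img in [("direct", direct), ("pre-dither", pre_dither), ("post-dither", post_dither)]:
    err = np.abs(img.mean(axis=0) - original.mean(axis=0)).mean()
    print(f"{name:12s} mean tonal error: {err:.4f}")
```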

 

Now here's the mental leap. Bear with me. Imagine, if you will, the top left as a stand-in for a light signal. Imagine the top right image is a digital camera version of this light signal. Certainly here it is a cruddy digital camera - but that in no way affects what is being demonstrated here. It just makes it more obvious what is otherwise going on. Imagine the bottom left image is a film version of the same original light image. Now obviously it would be completely unfair to compare the bottom left image with the top right image. But if we now transfer the film to digital using the same cruddy camera, they are now both in the same domain - a 2 bit domain - having been mediated by the same cruddy digital camera.

 

And here it is now fair to compare both images: the top right and the bottom right, both of which are the result of employing the same "cruddy camera" (its bandwidth being 2 bits per pixel in both cases) but one of which is obviously doing a better job of transcoding the original light signal.

 

Simply by passing through a pre-dither stage.

 

We can increase the resolution and bandwidth of our digital camera and it wouldn't change anything. The dithering in film is of infinite precision. It can act as pre-dither to whatever resolution you make the digital camera.

 

Hopefully this is a less controversial commentary on what is being expressed. Or at least one that can be negotiated and critiqued without resorting to accusations of unscientificness, truthlessness, etc.

 

cheers

 

C


This is one of the craziest threads I’ve ever read on here.

 

Film technicians have been in constant pursuit of positively positioning the perforated strip.

Video sensors now should begin to wiggle around?

 

I’m out, friends.

 

Computer Graphic processes have had to deal with 'how to simulate' Real Life lighting effects for as long as the concept of computer graphics has been around.

 

As far as I know, the movement of animation from individual 'cels' to computer generated 3-D, near 'real' renderings, has not sent the viewers home lamenting the lost art of cel painting...

 

In fact, since Luxo Jr... 4-quadrant animation stories have been filling the theaters...

 

For me there are certain artifacts of the rasterization process which Digital films have that 'should' be dealt with. The break-up of 'jaggies', which has been a problem for CGI effects for years (and there have been significant solutions for that 'problem'), is probably for me the most significant item.

 

In terms of capture, getting to better dynamic range lessens artifacts on that account. And there may be some number of guidelines that digital capture is required to conform to, to minimize others.

 

Film film capture had a number of guidelines as well to minimize the temporal artifacts of the 'still image' at 24 fps, and of course Digital may have similar problems; in that regard, there's the ever popular 'global shutter' issue... but even if that were solved... somebody would complain about the difference of the shutter blade effect on Film vs. whatever the digital global shutter may produce...

 

At which point, I consider most of those to be 2nd or 3rd order effects, and not something that truly detracts from the image to the point of the material being 'unwatchable'.

Edited by John E Clark

Carl Looper's quartet of pictures deserves our attention.

 

[Quartet of pictures: top left "original"; top right "original to 4 colours"; bottom left "original + 10% noise"; bottom right "original + 10% noise to 4 colours".]

 

He means the example to show that "the grain doesn't do anything other than mediate the transfer of tone between one bandwidth and another."

I was surprised by how he exemplified reduction of bandwidth, by transforming the 8-bit images on the left to 2-bit images on the right. In our video context where images are always 8-bit, 10-bit or 12-bit, I was expecting reduction of bandwidth to be in the spatial domain -- either by softening the image through loss of high spatial frequencies, or by resampling at lower resolution. So our first request of Carl should be for examples where 8-bit remains 8-bit.

 

But allowing that he has reduced bandwidth by reducing bit depth, we should wonder whether the noise in Carl's example is similar enough to film grain noise to sustain the example. If his lower left image had film grain noise the example actually wouldn't work. It works because the noise involves such large modulation that the far-apart levels in the 2-bit images are reached, achieving the dithering. Actual grainy film involves much smaller modulation. Where the 7276 (my favorite stock) data sheet says RMS granularity = 9, this means that the microdensitometer, set up to emulate human visual acuity at 12x magnification, finds the film density, nominally 1, varying by ±0.009 rms. This puny variation cannot even register in an 8-bit image, much less traverse 2-bit levels.
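For readers who want to see where that figure (and the "4% range" mentioned later) comes from, here is a back-of-envelope version of the arithmetic, assuming the usual relation T = 10^-D between transmission and diffuse density. The 8-bit comparison assumes a linear encoding whose full scale is the unattenuated light, so treat it as a rough illustration rather than Dennis's exact calculation.

```python
import math

sigma_D = 0.009        # RMS density fluctuation quoted for the stock (granularity 9)
D = 1.0                # mean density being read

T = 10 ** -D                            # mean transmission = 0.1
rel_sigma = math.log(10) * sigma_D      # small-signal relation: dT/T = -ln(10) * dD
print(f"relative transmission fluctuation: about ±{rel_sigma:.1%} (a ~4% spread)")

# One code step of an 8-bit linear encoding, expressed relative to the signal
# at this transmission level (full scale assumed to be unattenuated light):
step_8bit = (1 / 255) / T
print(f"one 8-bit linear step at this level: about {step_8bit:.1%} of the signal")
```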

 

So reducing bandwidth by reducing bit depth was a poor example.

 

Grand flourishes like "adding noise does not affect the image" don't get us far. There was much brilliant work done on image noise starting in the 1950s by Rose, Fellgett, Zweig, and many others. Did any of these scientists rely on grand flourishes? There's a nice picture of different models of film grain on page 75 of "Image Science" by Dainty & Shaw. (I must have been remembering that picture when I described models, but overlooked the optical softening step.) Kodak printed a small pamphlet, "Understanding Graininess and Granularity", and there's discussion in Campbell's "Photographic Theory for the Motion Picture Cameraman" easily accessible to cinematographers.

We should demand more attention to the particulars of film grain. It's noise, but it's our special noise with its special character revealed in its Wiener spectrum or by how it looks.

Edited by Dennis Couzin

This is one of the craziest threads I’ve ever read on here.

 

Film technicians have been in constant pursuit of positively positioning the perforated strip.

Video sensors now should begin to wiggle around?

 

I’m out, friends.

 

Crazy thread, but Carl understood the point of wiggling the sensor, which Simon didn't. Of course film frames should be perfectly registered. But the grains in the frames are not perfectly registered. They jump like crazy. The reason for wiggling the sensor is not to wiggle the image but only to wiggle the positions of the pixels which capture the image. That is, to make the video pixels somewhat more like film grains.

 

My Jumpy Pixel Experiment of 2009 involved jogging the sensor around by a random 0/10, 1/10, 2/10, ... 9/10 of a pixel horizontally and vertically, frame-by-frame, and then jogging the projector's DLP chip around by a compensatory amount so the image stayed put on the screen while the pixels comprising the image jump about. Look at one corner of each of these six pictures to understand what was done.

2tdbb8o12f1utgf6g.jpg

Actually, observers of this experiment should not know what was done to the images, lest their responses be biased by their beliefs. Non-film people, including children, have been the best observers.
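As a rough way to picture the mechanics of the experiment, here is a sketch of one jittered frame: a coarse sampling grid whose origin is offset by a random number of tenths of a pixel, with the offset returned so a display could shift by the opposite amount. It assumes NumPy, point-samples rather than integrating over each photosite, and is only my reading of the description above, not Dennis Couzin's actual processing.

```python
import numpy as np

rng = np.random.default_rng(0)

def jumpy_frame(light, out_size, subpix=10):
    # Sample a high-resolution "light image" on a coarse grid whose origin is
    # offset by a random multiple of 1/subpix of an output pixel. The offset is
    # returned so the display (the projector's DLP in the experiment) can shift
    # by the opposite amount, keeping the image put while the pixel sites jump.
    h, w = out_size
    scale = light.shape[0] // h                   # high-res samples per output pixel
    dy = rng.integers(0, subpix) * scale // subpix
    dx = rng.integers(0, subpix) * scale // subpix
    ys = np.arange(h) * scale + dy
    xs = np.arange(w) * scale + dx
    frame = light[np.ix_(ys, xs)]                 # point-sample at the jittered sites
    return frame, (dy / scale, dx / scale)        # offset in output-pixel units

# Example: a 640x640 stand-in light image sampled as a 64x64 frame, frame by frame.
light = rng.random((640, 640))
for _ in range(3):
    frame, offset = jumpy_frame(light, (64, 64))
    print("frame", frame.shape, "pixel-grid offset (fractions of a pixel):", offset)
```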

 

Links to the experiment were in post #16, but I've since added an option.

First read the readme here.

Then download either the "None" version or the "ProRes" version.

 


Carl Looper's quartet of pictures deserves our attention.

 

[Quartet of pictures: top left "original"; top right "original to 4 colours"; bottom left "original + 10% noise"; bottom right "original + 10% noise to 4 colours".]

 

He means the example to show that "the grain doesn't do anything other than mediate the transfer of tone between one bandwidth and another."

I was surprised by how he exemplified reduction of bandwidth, by transforming the 8-bit images on the left to 2-bit images on the right. In our video context where images are always 8-bit, 10-bit or 12-bit, I was expecting reduction of bandwidth to be in the spatial domain -- either by softening the image through loss of high spatial frequencies, or by resampling at lower resolution. So our first request of Carl should be for examples where 8-bit remains 8-bit.

 

But allowing that he has reduced bandwidth by reducing bit depth, we should wonder whether the noise in Carl's example is similar enough to film grain noise to sustain the example. If his lower left image had film grain noise the example actually wouldn't work. It works because the noise involves such large modulation that the far-apart levels in the 2-bit images are reached, achieving the dithering. Actual grainy film involves much smaller modulation. Where the 7276 (my favorite stock) data sheet says RMS granularity = 9, this means that the microdensitometer, set up to emulate human visual acuity at 12x magnification, finds the film density, nominally 1, varying by ±0.009 rms. This puny variation cannot even register in an 8-bit image, much less traverse 2-bit levels.

 

So reducing bandwidth by reducing bit depth was a poor example.

 

Grand flourishes like "adding noise does not affect the image" don't get us far. There was much brilliant work done on image noise starting in the 1950s by Rose, Fellgett, Zweig, and many others. Did any of these scientists rely on grand flourishes? There's a nice picture of different models of film grain on page 75 of "Image Science" by Dainty & Shaw. (I must have been remembering that picture when I described models, but overlooked the optical softening step.) Kodak printed a small pamphlet, "Understanding Graininess and Granularity", and there's discussion in Campbell's "Photographic Theory for the Motion Picture Cameraman" easily accessible to cinematographers.

 

We should demand more attention to the particulars of film grain. It's noise, but it's our special noise with its special character revealed in its Wiener spectrum or by how it looks.

 

The purpose of using a cruddy 2 bit 'camera', as mentioned, is to make the effect exceedingly obvious to anyone with eyeballs. One can always throw more bits at it and smooth it all over and claim "nothing to see here".

 

We don't want to do that. We want to see the ugly truth.

 

I am old enough to remember digital cameras that were 2 bit cameras. Nothing in principle has changed between then and now. We can increase or decrease the bandwidth in terms of pixel count and/or bit depth. By keeping pixel count the same it becomes part of what we otherwise call "all else being equal". All else being equal, in terms of tone transfer, dithering prior to quantisation is evidently more effective than doing it afterwards. Indeed, doing it afterwards has no effect at all (in terms of our goal).

 

In a film/digital version of this experiment, one will need to make the appropriate mapping. But the general principle will hold.

 

The appropriate mapping between this experiment, and an equivalent one done between film and digital, will need to avoid conflating noise modulation, in the digital domain, with grain modulation in the film domain. But if the appropriate mappings are made, the principle will remain intact.

 

As an aid to mapping between the two domains (short of actually doing a practical experiment with real film and a real digital camera) we can discuss this here in theory. Or as Dennis seems keen to suggest: in "theory".

 

Whichever way we characterise it, we're here with whatever word skills and image pasting skills we can muster. And it is ultimately shorthand for more detailed work (such as Dennis has provided) and/or actual experimentation with film. But here we are somewhat limited to grand gestures (as Dennis puts it). That is to be expected. But even within such a limited space, where such is inevitable, we can get a feeling for where the truth might subsequently be found.

 

Or if Dennis wants to nitpick we might equally fail to get a sense of where the truth might be found. (I'm Australian - I can't help but make digs every now and then - it's out of love - not hate)

 

Here is a micrograph of some silver halide crystals.

 

tgrainSEM.jpg

 

One thing to note is that the distribution of the crystals is random. There is no robot assembling the crystals into Cartesian order. Indeed the reverse is desirable. The more random the better. Now not only is the position of each crystal random but so too are their size and shape (within certain bounds).

 

It is these properties of film which will modulate the transfer of a signal.

 

The modulation is in terms of position and size. Obviously there will be more attributes, but we're sticking to the important ones. I've actually run numerical simulations of this using computer generated hexagons, randomly distributed in terms of position, orientation and size, and randomly shaped (as the crystals vary between a hexagon and a triangle). I've computed photon distribution according to the laws of quantum mechanics, and illuminated the crystals using such. I've simulated the reduction of those crystals (that have registered a photon hit or two) to 'wooly' silver and removed those crystals that have missed out. In short I've pursued the theoretical physics of this quite deep. But it doesn't matter. The important attributes remained important: position and size.
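A toy version of the kind of simulation being described might look like the following: randomly placed, randomly sized crystals, photons arriving at Poisson-random positions, and any crystal that catches a photon counted as developed. It assumes NumPy, uses circles rather than hexagons, skips the quantum-mechanical photon statistics and the development chemistry, and is my own sketch of the idea, not Carl's code.

```python
import numpy as np

rng = np.random.default_rng(0)

W = 1.0                                            # side of the emulsion patch, arbitrary units
n_crystals = 2000
centres = rng.random((n_crystals, 2)) * W          # random crystal positions
radii = rng.uniform(0.004, 0.010, n_crystals)      # random crystal sizes

def expose(mean_photons):
    # Scatter a Poisson number of photons uniformly over the patch and mark
    # any crystal containing at least one photon strike as "developed".
    n_photons = rng.poisson(mean_photons)
    photons = rng.random((n_photons, 2)) * W
    developed = np.zeros(n_crystals, dtype=bool)
    for p in photons:
        d2 = ((centres - p) ** 2).sum(axis=1)
        developed |= d2 < radii ** 2
    return developed

# The developed fraction plays the role of density and rises with exposure,
# while the positions of the developed crystals stay random at every level.
for exposure in (2000, 10000, 50000):
    print(exposure, expose(exposure).mean())
```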

 

Now in and of itself this is film (as far as we've pursued it).

 

What we're interested in is not so much film in itself (as beautiful and remarkable as it is) but its relationship to the digital domain (equally beautiful and remarkable in its own way). How does film dither a signal, and what are the implications for the transfer of such a dithered signal to the digital domain, versus a direct digital encoding of light?

 

Now Dennis, of all people, should actually appreciate what comes next. He has already done a mind experiment (so to speak) where he's identified the phenomenon in an experiment called the jumping pixels experiment. He has closed in on a very important part of the process.

 

Now I don't want to go into a drawing program and draw this up and hopefully I won't have to. Hopefully we can use our imagination instead.

 

Imagine a single crystal in otherwise clear emulsion. We hit it with a photon or two and process it. We now have some silver atoms aggregated in the general vicinity of the original crystal. Let's call this a blob. Everywhere else is clear (dmin).

 

Now let's imagine a digital scan of this. As Dennis explores in his jumping pixels experiment, the resulting signal will depend on where the sensor cell falls with respect to the signal being resampled. In our mind experiment we can ask of a sensor cell whether it is completely occupied by the silver, completely misses it, or is somewhere in between. The last category is of interest.

 

For what will occur here is that the brightness of the pixels will be a function of the position of the blob.

 

There is a relationship here, we can characterise, between the position of the silver atoms and the brightness of the corresponding pixels.

 

Now an important point here is that the area a blob partly occupies, under a sensor cell, can vary infinitely more (by virtue of random position) than any bit-depth a sensor might be able to capture.

 

In terms of dithering, the film is capable of complete variation, at infinite precision, between density min and density max for a given area of a sensor cell. You can pack in more sensor cells and look as closely at the film as you like - such as in an electron microscope - and the dither will remain just as random.
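The "somewhere in between" case is easy to put numbers on. Below is a one-dimensional toy: an opaque blob of fixed size slid across a unit sensor cell, with the covered fraction (and hence the cell's reading) varying continuously with the blob's sub-pixel position. The sizes and step count are arbitrary; it's my illustration of the point, not a model of real grain.

```python
import numpy as np

def covered_fraction(blob_left, blob_size=0.4, cell=(0.0, 1.0)):
    # Fraction of a 1-D sensor cell covered by an opaque blob whose left edge
    # sits at blob_left. The overlap varies continuously with position, so the
    # cell's brightness is not confined to any fixed set of quantised levels.
    left, right = cell
    overlap = max(0.0, min(right, blob_left + blob_size) - max(left, blob_left))
    return overlap / (right - left)

# Slide the blob across the cell in steps far finer than any sensible bit depth:
for x in np.linspace(-0.4, 1.0, 8):
    print(f"blob at {x:+.3f}: cell darkened by {covered_fraction(x):.3f}")
```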

 

Now let's step back from this for a bit. Have a smoko. Have a coffee. Re-read what we've written. Look for any clumsy wording that Dennis will pick up on (and why not). See if we're inadvertently trying to sell the idea of the anti-christ anywhere. And if not, then make sure we insert such.

 

[ Insert image of the anti-christ here]

 

 

C


So our first request of Carl should be for examples where 8-bit remains 8-bit.

 

 

What is important in the demonstration is not whether the images are any particular bit-depth, only that the outcomes be:

 

a. equal in terms of bit-depth

b. less than the original in terms of bit-depth.

 

Bit-depth determines the precision of the tonal variation at a single location in space. Keeping the bit depths equivalent, in both outcomes, allows us to see that bit-depth isn't the only determinant involved in what is meant by tone. It allows us to see that tone is not just in any single value at any given point in space, but is also in the distribution of such across space. In the variation as a function of space.

 

Now using this same setup we can certainly increase the outcomes to 8 bits or 12 bits (or more), but we will have to increase the source to an even higher bit-depth.

 

The purpose of the experiment is to express what happens in the transfer from a higher bit-depth image (an image encoded in light) to one that is less so (an image encoded in terms of a digital sensor).

 

What is important is not the specific values employed in the demonstration, but the principle being expressed. One can take this principle and apply it to real world systems and the principle will hold.

 

If all the images had the same bit-depth it would be suggesting that the precision of tones encoded in light were no different from the precision of tones encoded in a digital sensor. We are attempting to represent a real world scenario with an appropriate substitute.

 

In doing so, if anything, we should make the difference in bit-depths very much larger. But a small difference is enough to demonstrate that even a small difference has obvious visible implications.
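One way to see "tone in the distribution across space" with numbers: encode a flat tone as a small block of pre-dithered 2-bit pixels and take the block's average. The block size, bit depth and dither are arbitrary choices for illustration (assuming NumPy), not a claim about any particular camera.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_patch(tone, bits=2, patch=16):
    # Encode one flat tone as a patch x patch block of low-bit-depth pixels,
    # dithering with half-step uniform noise *before* quantisation.
    levels = 2 ** bits - 1
    block = np.full((patch, patch), tone) + rng.uniform(-0.5, 0.5, (patch, patch)) / levels
    return np.round(np.clip(block, 0, 1) * levels) / levels

for tone in (0.10, 0.37, 0.62, 0.90):
    block = encode_patch(tone)
    # Each pixel can only take 4 values, yet the spatial average of the block
    # recovers the tone to much finer than 2-bit precision.
    print(f"tone {tone:.2f} -> block mean {block.mean():.3f}")
```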

 

C


 

Crazy thread, but Carl understood the point of wiggling the sensor, which Simon didn't. Of course film frames should be perfectly registered. But the grains in the frames are not perfectly registered. They jump like crazy. The reason for wiggling the sensor is not to wiggle the image but only to wiggle the positions of the pixels which capture the image. That is, to make the video pixels somewhat more like film grains.

 

My Jumpy Pixel Experiment of 2009 involved jogging the sensor around by a random 0/10, 1/10, 2/10, ... 9/10 of a pixel horizontally and vertically, frame-by-frame, and then jogging the projector's DLP chip around by a compensatory amount so the image stayed put on the screen while the pixels comprising the image jump about. Look at one corner of each of these six pictures to understand what was done.

2tdbb8o12f1utgf6g.jpg

Actually, observers of this experiment should not know what was done to the images, lest their responses be biased by their beliefs. Non-film people, including children, have been the best observers.

 

Links to the experiment were in post #16, but I've since added an option.

First read the readme here.

Then download either the "None" version or the "ProRes" version.

 

 

Yes, this is a great experiment. Simon didn't quite get it. He read it as if the goal was to introduce wobble (!), where film technology was working to remove wobble!

 

But the wobble of the sensor is cancelled out during projection (through doing the inverse wobble) - so the image itself is not wobbling. Just the location of the sampling sites.

 

While the experiment tests a way of obtaining something similar to the way film transcodes light it also demonstrates that there is a visible difference between conventional digital transcoding of light (the control movie) and this alternative way of transcoding light (the experiment).

 

And it's very much to do with the location of the sampling sites.

 

C


Now an important point here is that the area a blob partly occupies, under a sensor cell, can vary infinitely more (by virtue of random position) than any bit-depth a sensor might be able to capture.

 

In terms of dithering, the film is capable of complete variation, at infinite precision, between density min and density max for a given area of a sensor cell. You can pack in more sensor cells and look as closely at the film as you like - such as in an electron microscope - and the dither will remain just as random.

 

I fail to see what this example proves. No proof is needed that film is analog. Repositioning a silver blob partly in a pixel area continuously varies the lightfall in the pixel area. Size, as well as position, of the developed film grain is analog, so size can make the continuous variation of lightfall too. These continuous variations say nothing about any kind of precision the film may have. They only say that the pixel sensor needs infinite precision to record the lightfall. But the pixel sensor needed infinite precision to record the original optical lightfall, and on good basis we are satisfied with 10- or 12-bit recording. (See my "Hecht and digital video" (2009).) What reason is there for the pixel sensor to record the film grain's affected lightfall with such precision, since the film grain does not accurately represent the optical lightfall that matters?

 

Measurement or recording can be more or less precise. Noisy instruments are imprecise whether analog -- continuously variable -- or not. The silver blob in the example does not record the optical lightfall with any precision. Actually many blobs are required, and only then can we select an aperture, like Kodak's 48 micron aperture, to move over the collection of grains and through which to measure the circumscribed lightfall. The precision of the grainy field is determined by how little the lightfall changes while moving the aperture. Fine repositioning of individual grains within the ensemble hardly affects this. The precision is something like ±0.009 around density 1. That is a 4% range in transmissions, so an 8-bit linear sensor is sufficient, or less than 8-bits after gamma encoding.
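To make the aperture idea concrete, here is a toy granularity measurement: a random field of opaque "grains" read through an aperture that is large compared with a single grain, with the RMS of the density readings as the output. The grain size, coverage and the use of non-overlapping square apertures are my simplifications (assuming NumPy); the numbers it prints are illustrative, not 7276 data.

```python
import numpy as np

rng = np.random.default_rng(2)

# One simulation cell = 1 micron. A cell is clear with probability 0.1, giving a
# mean transmission of about 0.1 (density ~1) for this crude binary grain field.
field = (rng.random((2016, 2016)) < 0.1).astype(float)   # 1 = clear, 0 = opaque grain

aperture = 48   # 48-micron square aperture standing in for Kodak's 48-micron aperture
h, w = field.shape
readings = field.reshape(h // aperture, aperture, w // aperture, aperture).mean(axis=(1, 3))

density = -np.log10(readings)            # density seen at each aperture position
print(f"mean density {density.mean():.2f}, RMS density fluctuation {density.std():.4f}")
# Finer grain (more, smaller grains per aperture) lowers the RMS figure: moving
# the aperture by a grain-sized step barely changes what it circumscribes.
```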

 

I fail to see how any of Carl's bit-depth or quantisation arguments apply to digital picturing as it is now practiced. There are many special questions like "how to fake the appearance of film grain in video?" and "what are the optical and resolution requirements for preserving film grain quality in scanning" and "how effectively can spatially dithered 8-bit video convey 10-bit video?" that need serious attention.

 


No proof may be needed to prove film is analog. But then no proof was being proposed anyway.

 

The discussion is not in terms of by how much a film is precise in terms of transcoding the optical signal. Or by how much a sensor must be.

 

Rather the discussion is in terms of the randomness. The randomness is precise. In other words, no matter how many pixels you pack in the randomness is not eliminated. There is no threshold beyond which one can get "behind the grain" so to speak.

 

And this is a good thing. At least in terms of transferring tones.

 

It means that the dithering, which the grain performs, is always redistributing this dither energy across pixel boundaries at all scales.

 

And this energy is statistically correlated to the optical signal.

 

What a sensor lacks in terms of its ability to directly encode a tone, the insertion of a film-acquired dithered signal makes up for: the tone is able to transfer anyway, no matter how many bits the sensor has or doesn't have. It will look better in terms of tonal transfer if the signal is pre-dithered.

 

The dithering allows a high bandwidth optical signal to transfer tone to a low bandwidth digital signal better than a directly encoded tone.

 

As the demo shows, without any of this discussion whatsoever.

 

C


The precision of the grainy field is determined by how little the lightfall changes while moving the aperture. Fine repositioning of individual grains within the ensemble hardly affects this. The precision is something like ±0.009 around density 1. That is a 4% range in transmissions, so an 8-bit linear sensor is sufficient, or less than 8-bits after gamma encoding.

 

I fail to see how any of Carl's bit-depth or quantisation arguments apply to digital picturing as it is now practiced. There are many special questions like "how to fake the appearance of film grain in video?" and "what are the optical and resolution requirements for preserving film grain quality in scanning" and "how effectively can spatially dithered 8-bit video convey 10-bit video?" that need serious attention.

 

 

Let's suppose the variation in amplitude for a neutrally exposed film, around density = 1, is the typical figure Dennis has provided: a range of 4%. Dennis is suggesting that an 8 bit linear sensor is sufficient to capture this range.

 

But that is actually beside the point.

 

Whether the sensor is 1 bit or 100 bits, that variation of 4% will provide for a better transfer of tones.

 

Consider the following. Assume this is light exposing film of the specified granularity on the one hand, and a digital sensor on the other.

 

SignalNeutral.png

 

A subsequent transfer of the film to digital, via a one bit sensor, will look like this:

 

Signal_1bit.png

1 bit sensor

 

If the one bit sensor were illuminated by the light directly the result would be either entirely white or entirely black:

 

Signal_1bit_white.png

1 bit sensor

 

Signal_1bit_black.png

1 bit sensor

 

The dithering provides a way for the error to be localised, instead of taking up the entire image space, and for a correction to emerge in terms of space, i.e. where the image proper resides.

 

Instead of the image being just all black, or all white, we get a spatially varying distribution that is statistically and perceptually more indicative of the original signal.

 

Signal_1bit.png

1 bit sensor

 

Signal_4bits.png

4 bit sensor

 

Signal_8bits.png

8 bit sensor

 

It is this localisation of the error, and the statistical nature of the correction, which coincides with the perception of grain locally, and the perception of an image (the correction) globally.
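A numerical stand-in for the 1 bit pictures above, assuming NumPy: a flat tone fed straight to a 1-bit "sensor" comes out all black, while the same tone passed first through a crude random "grain" pattern (a Bernoulli field whose coverage tracks the exposure, a gross simplification of real granularity) survives as a spatial average. This is my sketch, not the images shown above.

```python
import numpy as np

rng = np.random.default_rng(3)

def one_bit(img):
    # 1 bit per pixel: every value is pushed to pure black or pure white.
    return (img >= 0.5).astype(float)

light = np.full((64, 64), 0.35)                  # a flat, darker-than-mid tone

direct = one_bit(light)                          # all black: the tone is lost everywhere at once
grainy = (rng.random(light.shape) < light).astype(float)   # crude stand-in for film's dither
via_film = one_bit(grainy)                       # a black/white speckle

print("direct 1-bit mean:    ", direct.mean())   # 0.0
print("via 'film' 1-bit mean:", via_film.mean()) # ~0.35: the tone survives as a distribution
```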

 

 

Now Dennis wants to pose these questions. How might we address these given what we now know or think we know?

 

1. "how to fake the appearance of film grain in video?"

2. "what are the optical and resolution requirements for preserving film grain quality in scanning"

3. "how effectively can spatially dithered 8-bit video convey 10-bit video?"

 

To the first I'd just say you can't fake film grain in video. Not if you are after a result that looks like film. The graininess in film is intimately entangled with the image it encodes. You can't disentangle the two. All you can do is fool yourself. Not anyone else.

 

To be blunt.

 

C

Edited by Carl Looper

1. "how to fake the appearance of film grain in video?"

2. "what are the optical and resolution requirements for preserving film grain quality in scanning"

3. "how effectively can spatially dithered 8-bit video convey 10-bit video?"

 

To the first I'd just say you can't fake film grain in video. Not if you are after a result that looks like film. The graininess in film is intimately entangled with the image it encodes. You can't disentangle the two. All you can do is fool yourself. Not anyone else.

 

Agreed that the graininess in film is entangled with the image. I described this back in post #42 as the grain noise being convolutional (as opposed to additive or multiplicative) with the image. There is no reason this convolution can't be performed ex post facto on the video image provided the video image has sufficiently higher resolution than the sought film grain simulation (as explained back in post #36). How much higher resolution to start with, or alternatively, how much resolution must be lost, in order to obtain how good simulation of film grain, is the engineering problem that should concern us. It's the usual balancing act in which the "how much" and the "how good" must be quantified.


 

Agreed that the graininess in film is entangled with the image. I described this back in post #42 as the grain noise being convolutional (as opposed to additive or multiplicative) with the image. There is no reason this convolution can't be performed ex post facto on the video image provided the video image has sufficiently higher resolution than the sought film grain simulation (as explained back in post #36). How much higher resolution to start with, or alternatively, how much resolution must be lost, in order to obtain how good simulation of film grain, is the engineering problem that should concern us. It's the usual balancing act in which the "how much" and the "how good" must be quantified.

 

If the film grains were placed 'one max grain size layer' thick, in some sort of precision process, one would just have random size, orientation, and perhaps random gaps, hence a single 'layer' additive process would probably characterize the process.

 

However, since the emulsion is not 'one max grain size layer' thick, but allows for grain to be randomly distributed in 3-d, the path of a photon, and its interaction with the crystal(s) would follow a complex path of scattering in the media.

 

The question is, does one need to model the 3-d situation, to get a sufficiently 'noised' image, such that the human observer could not determine whether it was Real or Memorex...

 

The other issue is who is the reference 'human observer'... a pixel peeper... or Joe or Jane Smith, average viewers from Poughkeepsie...

 

I would imagine if the reference observer were the latter, that a more simplistic additive process, randomized on each frame, would be sufficient for most human observers.

 

The latter is what I've experimented with, starting with my DVX100, an SD 'video' camera, by uprezzing to 4K, then adding 'grain noise', and then subsampling to HD resolution.

 

So what if it took 16 hours to render with the computer I had at the time, for a minute or so of results...
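For what it's worth, the pipeline John describes (work at a higher resolution, add grain noise there, then subsample to the delivery resolution) can be sketched in a few lines. The resolutions here are scaled-down stand-ins for SD/4K/HD, the resamplers are the crudest possible (nearest-neighbour up, box-filter down), and the Gaussian "grain noise" amplitude is arbitrary; it illustrates the shape of the workflow, not John's actual settings.

```python
import numpy as np

rng = np.random.default_rng(4)

def uprez(img, factor):
    # Nearest-neighbour enlargement (a stand-in for whatever resampler is used).
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def downsample(img, factor):
    # Box-filter reduction by block averaging.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

sd = rng.random((72, 96))                              # stand-in for the SD source frame
big = uprez(sd, 4)                                     # work at a higher resolution
big_grainy = np.clip(big + rng.normal(0.0, 0.08, big.shape), 0, 1)  # add 'grain noise' per frame
hd = downsample(big_grainy, 2)                         # subsample to the delivery resolution

print(sd.shape, big.shape, hd.shape)                   # (72, 96) (288, 384) (144, 192)
```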

 

In the case of CGI, one of the big improvements was adding the ability to include 'subsurface' scatter to 'skin' at least, along with models that included sub surface 'muscle/skeletal' 3-d elements.

 

I don't know that going that far with Film film grain simulation would really benefit, other than the academic exercise.

 

On the other hand... if one had the resources to make an ASIC engine that would compute a zillion 'photon scatter paths' a second... perhaps...

 

Here are the MTF pictures I have in my mind when I'm thinking about the difference between Film and Digital, and are applicable to almost any 'sampled system' in general.

 

Digital MTF response:

 

 

digital.gif

 

Film MTF response:

 

film.gif

Edited by John E Clark

We just need to ask ourselves why we would want to add grain in the first place. Or better: what assumptions are we using to entertain the notion that adding grain would make the signal any better than it already is?

 

What are we hoping to achieve?

 

In the examples I've elaborated we have a real problem to solve: how to maintain a signal were we to transfer such to a lower bandwidth domain. The problem is the lower bandwidth domain. But in this situation we have access to the original signal, prior to transfer, so we can manipulate the original signal, prior to transfer, in such a way that the result is an appropriate correction given the problem.

 

But if our video signal is not being transferred to a lower bandwidth domain then there is no longer any problem that grain can solve. In this situation adding grain isn't serving any purpose, which includes any purpose related to film grain.

 

It is just adding errors for no reason at all.

 

C


It is perhaps a remarkable thing, and somewhat difficult to digest, that adding errors could ever solve anything. But as I've elaborated, it certainly can. It's just a question of understanding in what context it can, and in what context it can't.

 

C


Out of all this we can perhaps get a better grounding on what is going on between film and video. If we think video looks "flat" compared to film, and that this has something to do with grain, we could be led to believe that adding grain to video, in some clever exotic way, might make it less so. Less flat.

 

But as I hope should become a dawning awareness, the flatness is baked in during capture. To alleviate this problem requires intervening prior to capture, eg. in the way that Dennis has otherwise been exploring. Not in software, but in hardware. We can use software to simulate what might be achieved in hardware, but it won't be a substitute for the hardware. It's just a way of approximating what the hardware would do far better.

 

A piezo-jittered bio-mimicking sensor with corresponding data processing and/or an appropriate projector.

 

C


 

A piezo-jittered bio-mimicking sensor with corresponding data processing and/or an appropriate projector.

 

C

 

Hey Carl, Dennis,

I wish I had time to read it all, but no. I read little parts.

 

Did anyone look at the Aaton vibrating sensor, the information that they have freely published?

 

There is nothing bio-mimicking about a jittering sensor. This is misleading language or concept.

 

You may feel that the jittering sensor simulates some results from a biological eye? This isn't mimicry. The human eye/brain/awareness may well be able to process incoming data from the eye in a way that may allude to or remind us of dithering, but it is a sidebar, a minor capability.

 

A human entity, seeing a direct impression of some other conscious entity, has an experience of recognition. When they see a fanciful or unnecessarily complex simulation, they may discard or distrust it.

 

So what medium would a great artist pursue, to create either great art, or even a great popular work that might connect with large numbers at once?

 

The death of celluloid will mark a definite incremental degradation of human cultural experience. I can't see an easement of that. Just vain delusions, smokescreens, and techniques to help us to inhabit this lesser world.


 

There is nothing bio-mimicking about a jittering sensor. This is misleading language or concept.

 

You may feel that the jittering sensor simulates some results from a biological eye? This isn't mimicry. The human eye/brain/awareness may well be able to process incoming data from the eye in a way that may allude to or remind us of dithering, but it is a sidebar, a minor capability.

 

 

The jittering aspect is one thing.

 

The bio-mimicking sensor refers to something else: a proposed correction that the sensor can make in order to neutralise aliasing that would otherwise occur in jittering a conventional sensor.

 

The context for this is in earlier posts.

 

The sensor would have cells that mimic the structure of a biological retina.

 

http://biomimicry.org/what-is-biomimicry/

 

sinclair-stammers-light-micrograph-of-mo

 

 

This proposition is not a smokescreen. Adding fake grain to video is a smokescreen.

 

C


With respect to jittering, that too can be regarded as a form of bio-mimicry, or perhaps techno-bio-convergence.

 

The biological correlate would be a subset of saccadic motion in which the eye involuntarily jitters about, even when fixated on a target.

 

Experiments demonstrate that without this form of saccadic motion the image we otherwise perceive with our eyes will actually disappear.

 

http://www.ncbi.nlm.nih.gov/books/NBK11156/box/A1346/?report=objectonly

 

C


 

So biomimicry has some traction as a concept. It's a vain expression of something that has been around for some time.

 

Sorry, I have to make fun of it. I'll try this.....

If there was one thing we might reveal that has been hidden....it's that each human consciousness contains the embedded knowledge (of how the universe is structured) that somehow appears as if lost, and the complete form of this is visible in the world around us, and at all levels of examination within us. Noticing this, the universe might appear brilliant and confident. One might be emboldened to coin a new discipline. Something...mimicry? Oh well, something just as enthusiastic, but less inspired, created this word....

 

The included picture may suggest the retina. It depends on how loose we want to be. It does not mimic it.


.....a subset of saccadic motion in which the eye involuntarily jitters about, even when fixated on a target.

 

Experiments demonstrate that without this form of saccadic motion the image we otherwise perceive with our eyes will actually disappear.

 

So does the eye "jitter" by an angle that covers a fraction of a rod or cone...These are the thoughts that one will have if trying to relate the eye to a sensor.

