
Question for the experts!!


Yahya Nael


 

Hi Carl,

The PDF is included in the big ZIP file. The readme just warns that the experiment requires a monitor with at least 2000×1500 pixels and a system with QuickTime able to display an 1800 Mb/s video stream.

I worked in Apple codec "None" to make the experiment's videos with full control, but I was silly to release them in that codec. I didn't understand, or trust, other codecs back in 2009.

 

 

Thanks Dennis. Very much appreciated. Will respond in due course (and will try to go a bit lighter on the French).

 

cheers

Carl


Like everyone in here, I've struggled to identify the key difference between digital video and film. I don't think it is in the color or tonality which often define the "film look". Provided the video camera's R, G, B spectral sensitivities are like a camera film's, video can closely match the output colors of film prints made from that camera film. Provided the video camera's dynamic range is decent, video can closely match the film's tonality. One film scholar described the difference as "it's the light". That film scholar had studied physics! What did he mean? I doubt he meant that film projectors project each frame twice, with the off time about equal to the on time. This means there's a little bit of residual flicker in the light (and also a slightly modified Phi-Phenomenon) in cinema. This can be simulated with 96 Hz projection. All films jiggle a little and many show small processing flaws.

 

This leaves grain vs. pixelation as the key difference between digital video and film. It must be. Random grain is an eyeful. It makes every color look different, especially black. It uses mental energy. It makes motion look different, makes time flow differently. This helps explain the aesthetic differences between 8 mm, 16 mm and 35 mm cinema, which have greatly differing graininess. Image sharpness is affected by the frame-to-frame grain. Whoever has made a still picture from a film frame knows that it looks much less sharp than the movie looks when running. Publicity stills from movies are seldom taken from the movies. I once did an experiment with making the video camera's pixel array move randomly from frame to frame. You can download the experiment's results here, but I suggest first downloading the readme here. The jumping pixels changed the expression on the character's face.

 

 

The cine image proper, we can suggest, is not in any given frame as such, but in the relationship between frames. We can suggest this might be analogous to (if not quite the same as) considerations we might give to a still image. In the still image, the image (we can argue) is not in any given pixel as such, but in the relationships between pixels: the change in value (or difference in value) between one pixel and the next. The greater the difference in value, the sharper we might call the image in that region. In the opposite direction, towards the least amount of change (identical pixels), will be the smoother components of the image. Now it's not necessary to limit this relationship to adjacent pixels; it can be expressed in terms of groups of pixels as well. The set of all possible relationships between all possible groups of pixels becomes quite immense.
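The "difference in value between one pixel and the next" can be given a toy numeric form. A minimal sketch, assuming squared adjacent-pixel differences as a crude stand-in for regional sharpness (the function name and the 8×8 test patterns are purely illustrative, not from the thread):

```python
import numpy as np

def local_sharpness(img):
    """Mean squared difference between adjacent pixels --
    a crude stand-in for perceived sharpness in a region."""
    dx = np.square(np.diff(img, axis=1)).mean()
    dy = np.square(np.diff(img, axis=0)).mean()
    return (dx + dy) / 2

# A hard edge (one large adjacent difference) vs. a smooth ramp
# (many small ones) spanning the same value range.
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
ramp = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
```

On these patterns the edge region scores higher than the ramp, matching the intuition above: greater difference in value, sharper image in that region.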

 

Now we may not be conscious of all such relationships. The tip of our perceptual iceberg may find us aware of only the net effect of this, but it's not a stretch to suggest that all of the relationships within a given perceptual field, may be participating in varying degrees, below the surface of consciousness, and informing our awareness, in all sorts of ways yet to be reckoned with.

 

Exploring this area in terms of a computational corollary of such, is the stuff of machine vision research. Whether it tells us anything new about natural perception doesn't really matter. If we build a model of the eye (a camera) and use it to make films, we may not be any wiser in terms of how our own eyes work, but it provides a way of getting outside ourselves and pursuing an alternative conception of ourselves. From outside of ourselves. To the mechanical (or electronic) eye we now attach a compatible brain. Machine vision.

 

The stuff of science fiction actualised.

 

Writing software for a Kinect installation, I'm reminded how different this is from the software I was writing in the early 80s. In the 80s it was towards the synthesis of completely imaginary images, even if entirely rooted in theoretical physics. Now it's the complete inverse: algorithms for "making sense" of real-world images. It is hard to know how this might be characterised. Neither science nor art, or half science/half art. The technological. The technical. From the Ancient Greek word for art: techne.

 

When asked by Elizabeth whether victory would be hers, in an upcoming sea battle, her astrologer looks up from his divination charts, and says: "It's hard to say. Astrology is still more an art than a science".

 

An alternative conception of the image is to situate it within a superset of perceptual mechanisms, where it is perception that will be a function of the image, rather than the other way around.

 

In this model, it will be pixels that are a function of the image, rather than the image as a function of the pixels. By way of explanation we can say that there is firstly an image, before it is divided up into pixels (or Roman tiles). Or in natural perception we can say that there is firstly an image, focused on our retina, before it is subdivided amongst retinal cells and redistributed in some organic way amongst brain cells. It is to treat the image as much as an objective reality as a subjective one.

 

Can we pose the same in terms of the cine-image? In the still image we can imagine the difference between pixels as giveable in terms of space. But how are cine-images giveable? A machine vision algorithm has the benefit of working with a sequence of images already encoded. It can traverse such images in any order, computing relationships between them and producing all manner of magic, from motion tracking to deconvolving motion blur, and so on.

 

But in the cine-image (or natural perception for that matter) the relationship between one image and another would seem to require that at least one has passed (into memory) before it can be related to one that is currently in perception. Or that both are in memory.

 

But to use our alternative model: that an image is objectively real as much as it is subjectively so, then we can postulate the cine-image as already constituted before it is divided up into still images. The division, this time, is not into pixels in space, but into images in time. We can treat the relationship between one image and another as an objective relation. That is already there (in time) before any transcoding of such in terms of a subjective perception.

 

Now the more radical component of this model is to suggest that perception operates in time as well. It is not a question of how to reconstruct a sense of time, or movement, given a set of images passing through some notion of the present. We can suggest that a perception which occurred an nth of a second ago (so to speak) is happening at the "same time" as a perception occurring now. We can suggest it is a limitation of language that we must speak in such a clumsy way.

 

In this way the relationship between one frame and the next is giveable. But it requires relaxing the notion of perception limited to a given point in time. It should also be noted that our bodies are as much a part of the universe as anything else. There is no reason to suggest that relationships posable between objective material points in time are not equally posable in terms of thought and perception.

 

Is this too much French Theory?

 

I don't know. All I know is that the questions have yet to be answered and I'm looking for clues as much as the next guy - wherever they might be found.

 

 

C


Back to the question at hand.

 

Regardless of how perception might be theorised, we can pose a relation between one frame and the next. If the frames are identical the relationship between them is one of zero difference. But let's imagine two frames where, instead of there being no difference, there is a statistically neutral difference. By this is meant that each frame is a version of the same image, in which each frame has pixels deviating by some random amount. Statistically it means that for every deviation in one direction there will be (somewhere) a deviation in the other direction, the effect of which is that the sum of all differences cancels out. Not entirely, but significantly.
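A "statistically neutral difference" is easy to sketch numerically. A minimal illustration, assuming zero-mean Gaussian deviations (the frame size and noise level are arbitrary choices, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.full((64, 64), 0.5)                    # the single underlying image
frame_a = base + rng.normal(0, 0.1, base.shape)  # one version, random deviations
frame_b = base + rng.normal(0, 0.1, base.shape)  # another such version

# Pixel by pixel the two frames differ everywhere, but summed over the
# whole frame the deviations cancel each other out -- not entirely,
# but significantly.
per_pixel_difference = np.abs(frame_a - frame_b).mean()
net_difference = abs((frame_a - frame_b).mean())
```

The per-pixel difference stays large while the net difference is close to zero: a substantial local difference that is statistically neutral at the level of the frame.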

 

The perceptual effect is that we can see the singular image through a kind of haze of dancing differences (grain). The image we see is in some sort of interaction between the two frames. It occupies the gap between the frames.

 

The frames "interfere" with each other to produce an image proper. Or reciprocally (to follow the alternative model) the image proper "de-interferes" into two frames.

 

Now in this model we can suggest reasons why film has the ability to produce a richer colour, or tone, than a corresponding digital sensor. The image proper (the light signal) is being sampled by the random arrangement of light sensitive particles. For each deviation from the light signal, in one direction, there is an inverse deviation in the other direction, for all possible deviations. These deviations cancel each other out in relation to the original signal (that light provides) reconstructing, or making visible the original light. Since the deviations vary randomly in terms of both position and light sensitivity, and the magnitude of the deviation, there is an "interference pattern", so to speak, which is able to suggest a stronger sense of the original light, albeit through a mild (and not unpleasant) haze of dancing differences.
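The claim that random deviations in grain sensitivity cancel out against the original light can be given a toy model. A minimal sketch, assuming a uniform random threshold per grain (this development rule is an invented illustration, not real emulsion chemistry):

```python
import numpy as np

rng = np.random.default_rng(1)
light = 0.7                          # original light intensity at one point
n_grains = 100_000
# Each grain has its own random sensitivity threshold (assumed toy model).
thresholds = rng.uniform(0.0, 1.0, n_grains)
developed = (light > thresholds)     # a grain goes black if exposed enough

# Individually every grain deviates from the signal (it is only 0 or 1),
# but the deviations cancel: the developed fraction recovers the light.
recovered = developed.mean()
```

Each grain is maximally "wrong" about the light level, yet the ensemble of randomly deviating grains makes the original level visible again, through the haze of dancing differences.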

 

 

C


Much of the theory around experimental cinema of the '50s onwards takes exception to notions of the image as a reality. Even more so cine-images. There is an attempt to dismantle cinema as a con. However, the experimental cinema is also a reaction to commercial cinema. On limited budgets the experimental cinema sought an alternative to the formulas of the industrial juggernaut. Or alternatives to formulas full stop.

 

But it's also arguable they developed, somewhat unfortunately, a concept of the image as a con. Of narrative as a con. Of movement as a con. Of perspective as a con. And so on. But if they worked to undermine all of these things it was certainly not by refusing to create alternatives. The experimental cinema is not short on masterpieces of cinema, even if none occupy any best film list.

 

But a prevalent philosophy within the industrial juggernaut was to also treat all of these things as a "con" of sorts - but to call such the "magic of cinema" instead. An illusion, but no less worthwhile for being an illusion. A celebration of illusion. The popularity of special effects movies suggested the population at large was not averse to this take.

 

One could almost say the model used by experimental cinema was no different from that used by the industrial machine. The model is the same: that images are illusions. From Godard to advertising gurus, an agreement. An idea as old as the hills. An ancient idea. Platonic. But two different takes on such. One celebratory and the other antagonistic. But both an expression of the same thing.

 

Against this model is posed the alternative concept: that the image is not an illusion (nor a representation), but entirely, in itself: real. Whether we are talking about images made visible through photography, cinematography, painting, graphic art, sculpture, or in any other way. Here it is not a question of whether the image represents reality, but in what way one might conceive the image itself as a reality.

 

From the point of view of industrial cinema a related concept is that called "suspension of disbelief". We put aside our disbelief in the image.

 

The only difference is that by treating the image as real in the first place, there's no disbelief to suspend.

 

In any case, they both point to the same sort of thing - that sense in which one comes out of oneself and enters the domain of the work - the domain of experience. But instead of theorising it in terms such as the "magic of cinema" or "capitalist propaganda" (to name but a few), one negotiates it as an area of further investigation. What sort of world is this world? How does it work? In what other ways might it work?

 

What else can be found around the corner in this world of "suspended disbelief", i.e. other than the image understood as an illusion?

 

C



So cine film, which is generally contact printed, shouldn't be processed per Gigabitfilm's instructions. Then, since their developer is special, how will it function for pushing? What tonality will it yield?

 

Dennis, let me simply pass on my experiences. Chemist Detlef Ludwig, Mr. Gigabitfilm, has composed a formula that allows one to always develop the film to maximum density (in fully exposed areas, of course) at any gamma from about 0.3 to 2. I’ve done it: for example, contact duplicate negatives from 16-mm reversal originals, as well as original camera negatives in 35 and 16. An Arriflex 35 BL II was used, and a Paillard-Bolex H-16.

 

Pushing it is not a clever idea. The peculiarity of the film is that it reacts quite harshly to underexposure; in other words, there is nothing in the deep shadows. Consequently, one exposes for those areas where one wants to show detail. Luckily, overexposure is not an issue. This film-developer combination digests the most intense exposure. You just don’t have to worry about the highlights.

 

Tonality is something I understand as the grey scale rendering of colours. You have a panchro film here.

 

In 35 it is eye-opening. When I first projected a reversed original Ludwig sent me from Germany, projected with high-intensity carbon-arc light in the cinema I had back in 2001, I had to leave the projector and go down. I stood there about ten feet before the screen and looked at a landscape with trees in the fore, shaking in the wind. I recognised that the projection lens was not the best. I would have wished for a little deeper blacks. For the rest it was like staring at a 3¼ by 4 inch glass slide. In the 16 format, projection revealed the necessity of printing on a comparably fine-grained stock, not common positive film. The negative’s grain is invisible. It’s covered by the print’s grain.

 

Another Gigabitfilm experience is the one with G 25 as 4 by 5 inch sheet film. I have a Folmer Graflex and made pictures like this one:

 

post-35633-0-91672900-1424771145_thumb.jpg

 

It takes courage to let the shadows be black. The table plate is made of two pieces of glass, one of them flashed glass. This is a 74.4 KB upload from a compressed 832 KB file from a fine scan of an 18 by 25 cm enlargement on RC photo paper.

 

For me the real difference between analog and digital is human, not technical. One has to do something for a conventional picture. Put a film in a camera and wait. If one doesn’t process the film oneself, one has to trust somebody else. The killing of the cinema film projectionist profession is the most depressing aspect to me. No confidence left. Film prints should remain the anchor for this industry. Without some original dye-transfer prints, nobody would have an idea of Technicolor as it was in history.


Simon,

Your experiments with Gigabitfilm are reminiscent of mine, and others', with Eastman 7360 Direct MP Film. (The film is discontinued; a scan of the 1982 data sheet is here.) 7360 was very different from Gigabitfilm technically. 7360 produced a positive image with negative development, by means of the Sabattier Effect! Also 7360 was orthochromatic. But 7360 resembled Gigabitfilm in being virtually grainless. The pictures were as if shot and printed on holographic film. It was uncanny to watch 7360 original (shot with difficulty as the EI was around 0.25) and see the lens's direct image of the scene, unmediated by silver halide grains. One then recognizes the role of grain in cinema.

 

But 7360 was also different from standard cinema in its warped tonal reproduction. This is evident in the curvier-than-S characteristic curve in the data sheet. I suspect, from your picture of Frau Kopp am Montageplatz, that Gigabitfilm also provides warped tonal reproduction. This can be settled by seeing a diffuse density characteristic curve for Gigabitfilm. (As explained earlier, that is the appropriate density since cine film will be contact printed.)



You are so right about grain in cinema but are presuming falsely about the characteristic curve of Gigabitfilm.

 

I’m taking a screenshot out of this discussion:

 

post-35633-0-10591900-1424794882_thumb.jpg

 

The curve toe is short. Below a certain threshold no silver is reduced; the negative is empty. From that point upwards the film-developer combo works as linearly as no other. I should express it so: the faintest shadow details rise from absolute black practically undistorted, and that is the beauty of Gigabitfilm. Although being a high-key material that hardly burns out, it rewards one with high fidelity at the lowest levels.

 

I hope my English is more or less usable.

 

Here another example, underexposed and lifted:

 

post-35633-0-89583200-1424795554_thumb.jpg

Edited by Simon Wyss

You are so right about grain in cinema but are presuming falsely about the characteristic curve of Gigabitfilm.

 

I’m taking a screenshot out of this discussion:

 

Densitometrie Gigabitfilm 25.jpg

 

The curve toe is short. Below a certain threshold no silver is reduced; the negative is empty. From that point upwards the film-developer combo works as linearly as no other. I should express it so: the faintest shadow details rise from absolute black practically undistorted, and that is the beauty of Gigabitfilm. Although being a high-key material that hardly burns out, it rewards one with high fidelity at the lowest levels.

 

 

What is the difference between this process and, say, Ilford Pan F in very dilute Rodinal, say 1:100, with long development times?



The developer is way more complex than most other soups; Mr. Ludwig states forty ingredients. He found a way to arrest the first fraction of a second of the ion-atom discharge that takes place when the film comes in contact with the bath. Let’s not forget that the factor of density intensification is around 100 million. The protons race forward at incredible speeds. The gelatine is about 0.0004" thick or thin (0.01 mm), so an army of bouncers is needed to slow them down.

 

Common films have thicker photographic layers and different silver grains. With Ilford Pan F and most other films a silver wool grows out of the granules, comparable to cotton flocks. Not so here, the silver precipitation is kept close to the tiny grains. Gigabitfilm is rather densified down on their level and among them. From that a maximum density of log 2.4 is available. Ciné positive yields log 3.1 hands down, sound recording films pull through to log 5 and more.

 

In some sense we have here something like one of the thin layers of a color film. In yet another way we have here something like a wet plate thin collodion layer on PETP base. It’s a mean film.

 

I’ve found a densogram on Gigabitfilm that I did with an old ICA densograph. The wedge goes to log 4.

 

post-35633-0-83877200-1424799324_thumb.jpg

 

Please note that the glass wedge is not the best. I have precision wedges that produce such* runs:

 

post-35633-0-82735300-1424799878_thumb.jpg

 

*Liebling, how much watch? Ten watch. Oh, such much!


... The addition of grain to video/digital can't increase the quality of the original digital signal, but it can help to maintain attributes of the original digital signal if and when it's transferred to a lower bandwidth domain. The grain in film works in the same way. It helps to maintain the original signal (encoded in light) when transferred to a lower bandwidth domain, such as film, or video/digital.

 

This is why you can't get a film 'look' by adding grain to video/digital. But you can get a film 'look' by adding grain to the original light. That is what film does. It dithers the light, and, intentionally or otherwise, enables the signal to better transfer its power through downstream systems. ...

 

This is interesting. I'm sure that grain can be faked in video, yielding an ersatz film which must have the "film look", provided the video is of extremely high resolution. That video is then substituting for the exposing light. But I get the point that this is not possible with lower resolution video that only seems, visually, to match the exposing light, due to the high frequency rolloff of vision.

 

Two questions arise. 1. Is this ability of the genuinely dithered image to hold up through transfers to lower bandwidth -- a Signal Processing fact -- necessary to the film look, or might a faked version look the same? 2. The analysis given, as I understand it, is for grainy photographs generally and not especially for cinema. For if the point about the difficulty of faking film grain applied only to the running cinema, but not to the single frame, then it could be disproved by using whatever method worked for the single frame and jiggling it around again and again, frame after frame.

 

I wrote in post #16 that "the random grain of film ... functions as a dynamic ground for cinema" and then claimed that "you can fake grain and have a dynamic ground for video". The latter might not require well-faked grain at all. It's unlikely the single frame needs to perfectly simulate a grainy still photograph when it will be on view for just 1/24 second.

 

A single grainy image looks not-all-there, but also definitely sketched-out. Carl's post #17 suggests how a Signal Processing analysis can explain that. Cinema requires an analysis of the running grain, what people describe as "boiling", "grinding", etc., notions exclusively dynamic. The 3-D Signal Processing analysis of that would involve correlations to human pattern perception data that is now unavailable.


You are so right about grain in cinema but are presuming falsely about the characteristic curve of Gigabitfilm.

 

I’m taking a screenshot out of this discussion:

 

The curve toe is short. Below a certain threshold no silver is reduced; the negative is empty. From that point upwards the film-developer combo works as linearly as no other. I should express it so: the faintest shadow details rise from absolute black practically undistorted, and that is the beauty of Gigabitfilm. Although being a high-key material that hardly burns out, it rewards one with high fidelity at the lowest levels.

 

Simon, this is what I cautioned about in post #19. The characteristic curves you've posted were not measured using diffuse densitometry, as would be appropriate for cine negatives that will be contact printed. Dr. Wahl uses what he calls "realistic" densitometry, such as a condenser photographic enlarger (or a movie optical printer) uses: "Die Dichten werden unter realen, 'vergrößerungsnahen' Bedingungen..." ("The densities are measured under real, 'enlargement-like' conditions..."). Since Gigabitfilm's data sheets describe the film's enlarger printing densities as being the result of an "asymmetric Callier-Effect", the diffuse density characteristic curve will not have the shape of the characteristic curve measured by Dr. Wahl. And since his details of the asymmetric Callier-Effect seem confused in the data sheet, I have no idea what the diffuse density characteristic curve looks like. Someone with a darkroom and a (typical, diffuse) densitometer should just measure it.

 

Also your praise of the extremely linear, toeless, second curve

I should express it so that the faintest shadow details rise from absolut black practically undistorted and that is the beauty of Gigabitfilm.

 

is humorous coming in this topic searching for differences between film and video. That extremely linear, toeless curve was what characterized the early video, before it tried to look like film.

Edited by Dennis Couzin

 

Hi Carl,

The PDF is included in the big ZIP file. The readme just warns that the experiment requires a monitor with at least 2000×1500 pixels and a system with QuickTime able to display an 1800 Mb/s video stream.

I worked in Apple codec "None" to make the experiment's videos with full control, but I was silly to release them in that codec. I didn't understand, or trust, other codecs back in 2009.

 

 

This experiment is really quite interesting.

 

It's similar in effect to moving one's eye across ribbed glass while focused on an image beyond the glass. Or better: sliding a lenticular sheet across the surface of a photograph.

 

When the jumping-pixel video is compared to the control, a change in the expression of the face is said to be discernible.

 

It firstly demonstrates that varying a sensor cell's position with respect to a signal mediates a slightly different signal, even though the change in sensor cell positions has been cancelled out during re-sampling.

 

This certainly plants the seeds of an understanding of the difference between film and digital.

 

It's important to note that in this experiment, all the sensor cells are displaced in the same direction by the same amount. This means any alteration to the signal (or face) occurring in one location is the same alteration occurring in another location. Now by the expression on someone's face can be understood a co-ordinated expression, wherein areas of the face move in varying degrees of sync with each other. If the sensor cells are also moving in sync with each other, this creates an opportunity for varying correlations to occur between the sampling system and the face. Changes associated with the sampling position can become changes associated with the face.

 

We can propose a slightly different experiment along the same lines, but where each sensor cell is displaced independently of every other, i.e. in random directions by random amounts. While each sensor cell performs a change on the face (signal), each is performing a highly localised change. Globally - at the level of facial expression - these changes become statistically neutral (they cancel each other out). Or to put it another way, insofar as each point on a face does not move independently of every other point (a face is co-ordinated), movements of the face become distinguishable from movements of the sampling pattern.
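The contrast between global (in-sync) jitter and per-cell random jitter can be sketched on a 1-D stand-in for the "face". A minimal illustration, assuming a sine wave as the co-ordinated signal (the signal, jitter magnitude, and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(x)       # stand-in for the co-ordinated "face"
d = 0.05                 # jitter magnitude

# Global jitter: every cell displaced by the same amount -> the sampling
# error is coherent and reads as the subject itself having moved.
global_err = np.sin(x + d) - signal

# Independent jitter: each cell displaced by its own random amount ->
# the errors are localised and statistically neutral at the global level.
indep_err = np.sin(x + rng.uniform(-d, d, x.shape)) - signal

# The global error correlates almost perfectly with the signal's slope
# (it is a rigid shift); the independent error barely correlates at all.
corr_global = np.corrcoef(global_err, np.cos(x))[0, 1]
corr_indep = np.corrcoef(indep_err, np.cos(x))[0, 1]
```

The in-sync displacement produces an error that is correlated with the structure of the signal itself, which is exactly the "opportunity for varying correlations" described above; the independent displacements do not.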

 

C


 

This is interesting. I'm sure that grain can be faked in video, yielding an ersatz film which must have the "film look", provided the video is of extremely high resolution. That video is then substituting for the exposing light. But I get the point that this is not possible with lower resolution video that only seems, visually, to match the exposing light, due to the high frequency rolloff of vision.

 

Two questions arise. 1. Is this ability of the genuinely dithered image to hold up through transfers to lower bandwidth -- a Signal Processing fact -- necessary to the film look, or might a faked version look the same? 2. The analysis given, as I understand it, is for grainy photographs generally and not especially for cinema. For if the point about the difficulty of faking film grain applied only to the running cinema, but not to the single frame, then it could be disproved by using whatever method worked for the single frame and jiggling it around again and again, frame after frame.

 

I wrote in post #16 that "the random grain of film ... functions as a dynamic ground for cinema" and then claimed that "you can fake grain and have a dynamic ground for video". The latter might not require well-faked grain at all. It's unlikely the single frame needs to perfectly simulate a grainy still photograph when it will be on view for just 1/24 second.

 

A single grainy image looks not-all-there, but also definitely sketched-out. Carl's post #17 suggests how a Signal Processing analysis can explain that. Cinema requires an analysis of the running grain, what people describe as "boiling", "grinding", etc., notions exclusively dynamic. The 3-D Signal Processing analysis of that would involve correlations to human pattern perception data that is now unavailable.

 

Would an extremely high resolution video be a good substitute for the light signal?

 

This question can be answered independently of questions regarding grain. If one believes a high enough resolution video signal is a good enough substitute for a light signal, then it doesn't matter how much grain you add or don't: with the difference between a light signal and the video signal excluded, any further analysis is unnecessary.

 

Let me explain.

 

I've been asked this question before. It's of obvious interest to those who would prefer a film look, but without sacrificing the immense ease of use that a video camera provides.

 

Can one get a 'film look' using a video signal as the base signal (rather than a light signal)?

 

My answer would be no.

 

Adding grain to a video image doesn't alter the image. It only alters the visibility of the image. The image visible through the grain will still be a video image. Adding more grain just makes this video image less visible.

 

Film is adding grain to the light signal. And like adding grain to a video signal, it doesn't alter the light signal. The image visible through the grain will still be the light signal. Adding more grain just makes the light signal less visible.

 

So whatever difference one might suggest there is between a light signal and a video signal, or not as the case may be, remains completely unaltered by the addition of grain.
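The dithering point made earlier in the thread ("film dithers the light") can be sketched numerically: grain applied to the light before a low-bandwidth transfer preserves the level, while grain applied after the transfer cannot recover it. A 1-bit threshold stands in for the lower-bandwidth domain, and all numbers are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
light = np.full(50_000, 0.3)     # a flat patch of the light signal

# "Film": random grain dithers the light *before* a crude 1-bit transfer,
# so the level 0.3 survives as the density of developed grains.
dithered = (light + rng.uniform(-0.5, 0.5, light.shape) > 0.5).astype(float)

# Grain added *after* the 1-bit transfer: the level has already been
# destroyed (everything clipped to 0), and no grain brings it back.
clipped = (light > 0.5).astype(float)
grained_after = np.clip(clipped + rng.normal(0, 0.1, light.shape), 0, 1)
```

The dithered version carries the original 0.3 through the 1-bit bottleneck; the version grained after the transfer does not, which is one way of reading the claim that adding grain to video alters only the visibility of the image, not the underlying signal.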

 

C


 

We can propose a slightly different experiment along the same lines, but where each sensor cell is displaced independently of every other, i.e. in random directions by random amounts.

 

That would be a recreation of film. I was aiming at a practical method for enhancing video by incorporating a couple of piezo elements on the camera's imaging sensor and on the projector's light transducer.


 

 

 

That would be a recreation of film. I was aiming at a practical method for enhancing video by incorporating a couple of piezo elements on the camera's imaging sensor and on the projector's light transducer.

 

 

Yes, I've proposed the same thing in the past. Jitter the sensor. And resample (or jitter the projector - great idea). But as your experiment demonstrates, it's still not quite ideal.

 

But it can be improved without ending up back with film.

 

If the sensor cells are randomly arranged in a fashion similar to an organic retina, this should have the effect of neutralising any global correlations that might interfere with image perception.

 

So a sensor that looks like this (a monkey retina), and piezo-jittered. And the projector performing the inverse jitter (great idea), or the data can otherwise be re-sampled for a regular grid projector.

 

[image: light micrograph of a monkey retina]


 

Would an extremely high resolution video be a good substitute for the light signal?

 

This question can be answered independently of questions regarding grain. If one believes a high enough resolution video signal is a good enough substitute for a light signal, then it doesn't matter how much grain you add or don't: having excluded any difference between a light signal and the video signal, any further analysis becomes unnecessary.

 

Let me explain.

 

I've been asked this question before. It's of obvious interest to those who would prefer a film look, but without sacrificing the immense ease of use that a video camera provides.

 

Can one get a 'film look' using a video signal as the base signal (rather than a light signal)?

 

My answer would be no.

 

Adding grain to a video image doesn't alter the image. It only alters the visibility of the image. The image visible through the grain will still be a video image. Adding more grain just makes this video image less visible.

 

Film is adding grain to the light signal. And like adding grain to a video signal, it doesn't alter the light signal. The image visible through the grain will still be the light signal. Adding more grain just makes the light signal less visible.

 

So whatever difference one might suggest there is between a light signal and a video signal, or not as the case may be, remains completely unaltered by the addition of grain.

 

You might be misunderstanding how grain would be "added". This is more nearly multiplicative noise, most nearly convolutional noise.

The fake grain isn't just thrown onto the video image. Remember how dot screens work in the graphic arts: the continuous-tone image is printed as an array of black dots, their sizes dependent on the local brightness of the image. Now imagine dot screens with the convolvers randomly located. Now imagine how easy this is in digital image processing.

 

The dot screen isn't quite the model I'd use to fake grain, since there are different sizes of photographic grain, each with its own probability distribution for brightnesses turning it black. Also we'd need to model the 3-dimensional nature of the emulsion. Again, no biggie for digital image processing.
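
If anyone wants to play with that, here is a toy Monte Carlo sketch of the grain model just described: grains get random sizes, and each has a brightness-dependent probability of being turned black. The names and the size/probability distributions are invented for illustration, not a claim about any real emulsion:

```python
import random

def expose_grains(brightness, n_grains=10_000, rng=None):
    """Toy emulsion: each grain gets a random size, and larger grains are
    more likely to be struck and developed (turned black) at a given
    exposure.  Returns the developed fraction of the grain population."""
    rng = rng or random.Random()
    developed = 0
    for _ in range(n_grains):
        size = rng.uniform(0.5, 1.5)           # crude stand-in size distribution
        p_black = min(1.0, brightness * size)  # bigger grain -> more likely to develop
        if rng.random() < p_black:
            developed += 1
    return developed / n_grains
```

On average the developed fraction rises with exposure, while any small patch of grains shows the random variation that reads as graininess.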

 

The resolution question comes to this: what video resolution is required to achieve as good an image as possible with a particular grain model? The requirement might be higher than we'd like. The fake grain must dominate the underlying pixel array while being rendered in those pixels. Conversely, for a given video resolution, how coarse must the imposed fake grain pattern be? It might be coarser than we'd like.


So a sensor that looks like this (a monkey retina), and piezo-jittered.

 

You picked a nasty (unsharp vision) portion of the retina, where it's more rods than cones, but yeah, it would be nice to make our image sensor imitate our retina. Rectilinear manufacturing, and 2-dimensional addressing and image processing are in strong opposition to this.

 

But if you could have that sensor, why jiggle it (and how much excursion is needed to break the pattern)? Our retina doesn't jiggle, nor do the cones shift about. There are saccadic eye movements, but these are nowhere near 24 per second, and they are of several degrees. The retina, and the whole visual system, has its own kinds of noise, but not of the shifting kind.

 

For the dynamic grain pattern of cinema, that we both think important, what exactly is it doing, visually and aesthetically?

Edited by Dennis Couzin

 

You might be misunderstanding how grain would be "added". This is more nearly multiplicative noise, most nearly convolutional noise.

The fake grain isn't just thrown onto the video image. Remember how dot screens work in the graphic arts: the continuous-tone image is printed as an array of black dots, their sizes dependent on the local brightness of the image. Now imagine dot screens with the convolvers randomly located. Now imagine how easy this is in digital image processing.

 

The dot screen isn't quite the model I'd use to fake grain, since there are different sizes of photographic grain, each with its own probability distribution for brightnesses turning it black. Also we'd need to model the 3-dimensional nature of the emulsion. Again, no biggie for digital image processing.

 

The resolution question comes to this: what video resolution is required to achieve as good an image as possible with a particular grain model? The requirement might be higher than we'd like. The fake grain must dominate the underlying pixel array while being rendered in those pixels. Conversely, for a given video resolution, how coarse must the imposed fake grain pattern be? It might be coarser than we'd like.

 

A dot screen (and a random one is certainly good) facilitates the transformation of an image embodied in continuous tones into the same image embodied in binary ones, i.e. from a varying-ink-density print to an ink/no-ink print.

 

But the print itself is not the image. The print is the material, or 'ground' as painters call it. The image is something else: the image is what is visible "through the ground", so to speak.

 

As discussed with jittering a sensor, the reason this is a good idea is that the operation is being performed on the light signal, i.e. prior to its material embodiment (as data in digital memory). Once it's embodied in material form it's too late to do anything with the signal. Jittering the signal only facilitates its subsequent transfer to a lower-bandwidth ground.

 

If we go back to the dot screen, the operation being performed there is not on the continuous-tone ink as such but on the image it embodies, and it is doing so prior to that image being re-embodied in a lower-bandwidth material (binary ink on paper).

 

We know that without doing so the result is an image that lacks any tonal variability. We perform the dot screening in order to preserve the tonal variability - despite the fact that the material itself has no such variability!!! It is either black or white - nothing in between.

 

The image (what we see) is different from the material.

 

Now the argument one can propose from this is that the look of film has nothing to do with grain at all. The graininess is really just a side effect of performing an operation on an image embodied in light, an operation that transfers that image into material form on film. Perceptually we see "through this grain", and what we see is the image that was otherwise embodied in light - which is so much more intense than seeing video through any grain we might have added to it (or multiplied it by) for whatever misguided reason.

 

C


 

You picked a nasty (unsharp vision) portion of the retina, where it's more rods than cones, but yeah, it would be nice to make our image sensor imitate our retina. Rectilinear manufacturing, and 2-dimensional addressing and image processing are in strong opposition to this.

 

But if you could have that sensor, why jiggle it (and how much excursion is needed to break the pattern)? Our retina doesn't jiggle, nor do the cones shift about. There are saccadic eye movements, but these are nowhere near 24 per second, and they are of several degrees. The retina, and the whole visual system, has its own kinds of noise, but not of the shifting kind.

 

For the dynamic grain pattern of cinema, that we both think important, what exactly is it doing, visually and aesthetically?

 

One day (after I'm dead, no doubt) the manufacturing of sensors in the way proposed will be solved. Or we could work out how to do it today. Start the next revolution in cinema?

 

Jittering the sensor. This should stop any temporal correlations in the pattern from interfering with the temporal components of the cine-image. The wider the jitter the better.
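
A quick simulation makes the point about breaking temporal correlations. In the hypothetical sketch below, a fixed 1-D sensitivity pattern is read either in place or shifted by a random amount each frame; averaged over frames, the jittered pattern washes out while the fixed one persists:

```python
import random

# A fixed spatial pattern baked into a 1-D "sensor" (alternating sensitivity error)
pattern = [0.1 if i % 2 == 0 else -0.1 for i in range(100)]

def read_sensor(signal, jitter, rng):
    """Read a flat signal through the patterned sensor.  With jitter on,
    the whole pattern shifts by a random offset each frame."""
    shift = rng.randrange(len(pattern)) if jitter else 0
    return [signal + pattern[(i + shift) % len(pattern)] for i in range(len(pattern))]

rng = random.Random(0)
frames_fixed = [read_sensor(0.5, False, rng) for _ in range(200)]
frames_jit = [read_sensor(0.5, True, rng) for _ in range(200)]

# Time-average one cell: the fixed pattern survives, the jittered one cancels
avg_fixed = sum(f[0] for f in frames_fixed) / 200
avg_jit = sum(f[0] for f in frames_jit) / 200
```

The fixed read stays biased at 0.6, while the jittered read averages back toward the true 0.5, i.e. the pattern no longer correlates with any fixed scene position over time.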

 

C


The print itself is not the image? The print is what the painter calls the ground? Carl, this is not the place to pursue the metaphysics of depiction. No real problems are solved by philosophy anymore.

 

The grainy photographic image is akin to a half-tone, except at too small a scale to be resolved by photographic lenses or by the viewing eye, so instead of grains one sees a grainy field: variegated grey where the light image was a smooth grey, rough edges instead of smoothly falling ones. Graininess is an operation on the light image that we can model. The dot screen model is admittedly too crude a model.

 

It simply isn't true that "once [the image is] embodied in material form it's too late to do anything with the signal". Material form is one or another lossy encoding. What's lost is lost, but a new signal can be reconstructed from what remains. Some things that can be done with the original signal come out exactly the same when done with the reconstructed signal. Some other things do not.

 

Becoming grainy is a lossy operation in itself. Cine optical printing exemplifies repeated applications of the graininess operation. An optical image of one grain pattern is impressed upon a second grain pattern. Then an optical image of that thing is impressed upon a third grain pattern. The resultant of several steps can still be called a grain pattern.

 

So what happens when a video original undergoes the "becoming grainy" operation? Some grainy image will result. What are the limitations on the qualities of that grainy image? (In the worst case the operation could consist of modeling the out of focus video cast onto photographic film and reimaged in very high resolution video.)

Edited by Dennis Couzin

The print itself is not the image? The print is what the painter calls the ground? Carl, this is not the place to pursue the metaphysics of depiction. No real problems are solved by philosophy anymore.

 

The grainy photographic image is akin to a half-tone, except at too small a scale to be resolved by photographic lenses or by the viewing eye, so instead of grains one sees a grainy field: variegated grey where the light image was a smooth grey, rough edges instead of smoothly falling ones. Graininess is an operation on the light image that we can model. The dot screen model is admittedly too crude a model.

 

It simply isn't true that "once [the image is] embodied in material form it's too late to do anything with the signal". Material form is one or another lossy encoding. What's lost is lost, but a new signal can be reconstructed from what remains. Some things that can be done with the original signal come out exactly the same when done with the reconstructed signal. Some other things do not.

 

Becoming grainy is a lossy operation in itself. Cine optical printing exemplifies repeated applications of the graininess operation. An optical image of one grain pattern is impressed upon a second grain pattern. Then an optical image of that thing is impressed upon a third grain pattern. The resultant of several steps can still be called a grain pattern.

 

So what happens when a video original undergoes the "becoming grainy" operation? Some grainy image will result. What are the limitations on the qualities of that grainy image? (In the worst case the operation could consist of modeling the out of focus video cast onto photographic film and reimaged in very high resolution video.)

 

Philosophy is just a dirty word for theory.

 

If it's meant to offend, I take ownership of the word regardless. It is philosophy I'm doing. And I'm quite proud of it.

 

But for anyone else the term 'theory' should be employed.

 

An image is not to be found in its material embodiment. Look at the following image. Materially there are no tones here. Materially each pixel is either black or white - nothing in between (no shades of grey).

 

Dither.png

 

And yet the image we see is one that exhibits a variation in tone. Or to put it another way, by "image" is meant that aspect of the above which does exhibit such a thing. If you can't see the variation in tone you are either blind or wilfully ignoring the evidence of your eyes.

 

It is in this sense one can say the image is not to be found in the material dots.

 

Now one can consider this as some sort of illusion created in the mind. This is typically how many understand it. But that train of thought doesn't go anywhere. It's a dead end.

 

Alternatively one can consider it as something that has an objective reality. And this has potential for going somewhere.

 

The idea here is to treat the image as an objective reality independent of its material embodiment.

 

However, the material embodiment of this image, in whichever way that occurs, can make some aspects of the image less visible (hidden) than others. Being invisible doesn't mean an aspect can't be exposed. But we can't make the whole original image visible - some aspect of it is locked out, depending on how we encoded it in the first place.

 

We can, however, extract different faces of the image that are locked up in the data. For example, a more explicit sense of tone can be brought to the fore, using no more than the data provided above. We can trade sharpness for tone. There are better ways of doing so, but the following should suffice. Our target bandwidth allows us to express tones, so we can take advantage of that: materially we have more shades to play with, so we can exploit them and refactor the data in order that the tones are made more explicit:

 

Dither2Tone.jpg

 

 

But the important point here is that the image is the same in either case. What is not the same is the visibility of its various attributes, which is purely a function of the material data. Not the image.

 

This may very well be metaphysical, and again, if that is meant to be a dirty word, I take no offence and otherwise take ownership of that word. I am proud to be metaphysical.

 

Anyone for a philosophy/metaphysics Mardi Gras? Let me know.

 

Whether this is science or art doesn't bother me. I'm more than happy for it to be understood in any way one likes.

 

This is just the way I explore it.

 

C



This is one of the craziest threads I’ve ever read on here.

 

Film technicians have been in constant pursuit of positively positioning the perforated strip.

Video sensors now should begin to wiggle around?

 

I’m out, friends.


This is one of the craziest threads I’ve ever read on here.

 

Film technicians have been in constant pursuit of positively positioning the perforated strip.

Video sensors now should begin to wiggle around?

 

I’m out, friends.

 

Before you go, Simon - what sort of prices are Gigabitfilm stocks going for in 16mm? And is the processing overly elaborate, or within DIY capacity? If elaborate, are there established facilities for processing it?

 

I'd love to see Gigabitfilm in Super8.

 

C


The resolution question comes to this: what video resolution is required to achieve as good an image as possible with a particular grain model? The requirement might be higher than we'd like. The fake grain must dominate the underlying pixel array while being rendered in those pixels. Conversely, for a given video resolution, how coarse must the imposed fake grain pattern be? It might be coarser than we'd like.

 

The grain doesn't do anything other than mediate the transfer of tone between one bandwidth and another.

 

QuantisationNoiseEtc.jpg

 

 

The images on the right use exactly the same number of colours (or grey tones), namely 4. This reduction in colours (from the original top left) is called "quantisation".

 

But the bottom-right image has had noise added prior to quantisation. While both of the images on the right have been quantised to a palette of 4 tones, the bottom-right one provides a more overt sense of tonal gradation than the top-right. But how is it that adding noise results in better tonal quality?

 

Well, the reason is that noise is not just noise. Statistically (and this is important) the noise is zero-mean, so adding it is equivalent to adding zero. In other words, the result of adding noise is no change at all - but only statistically (i.e. globally), at the level of image perception. Locally (for any given pixel) the effect is randomness. Globally, though, the noise filter doesn't do anything at all. It doesn't matter whether the noise is pseudo-random or organically random - the result is the same: no change.

 

Now an image (a signal, a face) is not localised in any particular pixel. The image is a statistical object, distributed throughout all of the pixels - between every pixel and every other pixel. Accordingly the addition of noise has no effect on the image, since the noise only affects the pixels (the local information). At any given location (i.e. at a pixel) the noise will have a random effect (resulting in any value), but globally (statistically) the noise filter has zero effect. Globally, for every pixel that is darker by some random amount, another is lighter by an equal and opposite amount, so these effects cancel each other out.

 

An image, and the zero side of the noise addition, occupy the same domain - a statistical space - whereas the 4-colour filter occupies local space (the pixels). Since the random aspect of the noise filter affects only the pixels, and not the image, it can be used to interrupt the localised effect of the 4-colour process while having no effect on the image. This affords the image a greater opportunity to punch through the barrier otherwise introduced by the 4-colour effect. The 4-colour effect is effectively turned to junk (noise), allowing the image (signal) to become correspondingly more visible.
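
This can be checked numerically. In the toy sketch below (names and numbers illustrative only), a tone that a 4-level palette cannot represent always quantises to the wrong value, but with zero-mean noise added first - at an amplitude of half the quantisation step - the average of the quantised values comes back to the true tone:

```python
import random

LEVELS = 4
STEP = 1.0 / (LEVELS - 1)

def quantise(v):
    """Snap a 0..1 value to one of 4 evenly spaced tones."""
    return round(v / STEP) * STEP

def mean_quantised(v, noise=0.0, n=2000, rng=None):
    """Quantise the same tone n times, optionally adding zero-mean uniform
    noise first.  The noise averages to zero, so globally it changes
    nothing, while locally it breaks up the quantisation bands."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        sample = min(1.0, max(0.0, v + rng.uniform(-noise, noise)))
        total += quantise(sample)
    return total / n

v = 0.40                                      # a tone 4 levels cannot represent
plain = mean_quantised(v)                     # always snaps to 1/3
dithered = mean_quantised(v, noise=STEP / 2)  # averages back toward 0.40
```

Locally every dithered sample is still one of the 4 tones; globally the original tone has punched through.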

 

The way in which film is constructed provides a ready-made, inbuilt noise adder. During transfer to digital, its effect is to turn localised information (where quantisation occurs) into noise, while having no effect on the image. And insofar as this is done at the source (during exposure), rather than in post, the image is far better sustained. This is why adding noise in post is not as effective as adding noise at the source (in the way film is manufactured), and why Peter Jackson's claim that you can add noise (or grain) in post falls down. The noise is not what we are appreciating (not normally); it is the image. But if the image is not there to begin with, the addition of noise won't help to recover it. The noise operates on localised information in order to unblock any image (global information) that might be lurking. But if the image is not there in the first place, no amount of unblocking will find it.

 

So, in short, adding noise does not affect the image. It only affects the quantisation, which would otherwise (if not turned to noise) further suppress the image.

 

Carl

