
What's the attraction?


John Sherman


Yes, my physics arguments are really no more than an attempt to put into words and concepts what is firstly and fundamentally obvious to the eye. They are no more than an attempt to "put one's finger on it".

 

Colorists will know from experience (but not necessarily from theory) that there are far more colours that can be extracted from an image sourced from film than from one that begins life in a digital sensor. It is not obvious why this should be the case, but it is obvious that it is the case.

 

This story is similar to that of the steam engine, which was invented and made to work before theoretical physicists fully understood why it worked. The steam engine has its origin in experimental physics, rather than theoretical physics.

 

An important point here is that were one to get a proposed theory wrong (as is very easy to do), the steam engine would still work. It doesn't depend on the theory.

 

But if one can get the theory right, then there's an opportunity for improving the steam engine.

 

C

Edited by Carl Looper

The digital image has its origin within a purely synthetic space, where the image is so by definition, rather than by observation. It is the computer-generated image, rather than digital photography, that is the first expression of the digital image. The test pattern. The synthesised sine wave.

 

Circles generated by algorithms that define the meaning of a "circle" in the first place. But the word "circle" is also a signifier. It can be used to describe an aspect of some observation, e.g. an observation of the Moon, or of Venus transiting the Sun. Look, the Moon is a circle (as distinct from a square!). Now we might argue that the Moon is not a circle (or not a sphere) because craters perturb the surface. But by reciprocal but equivalent logic we could argue that a circle is not the Moon. Out of this very problem computer algorithms have evolved, to synthesise better Moons.

 

But it's obvious from this that photography must always do a better job. It is that to which the algorithm seeks to conform.

 

Digital photography arrives to occupy the algorithmic world in a way compatible with that world. It will exhibit the same smooth clean surfaces that the synthesised image is able to exhibit and make possible a certain seamlessness. Instead of algorithms conforming to the photographic image, it will be photographic images that conform to the particular aesthetic of the computed image.

 

The film image could not initially compete with this. The film image was too different from the synthesised space in which it was otherwise composited.

 

But in this difference is the power of film. It gives back to observation the power of the observable. To defy the defined.

 

C


While I can appreciate the desire to get the best transfer possible from a format that can be hard to make look its best, the choice by filmmakers when selecting film as a medium often has little to do with sharpness of the final image.

 

Using film gives the person shooting enormous control over how the image is exposed; the choice of stocks, lights, lenses, and processing methods will alter the images infinitely.

 

When a painter sits to recreate a scene before them, they are thinking of their interpretation of what they see. If they simply wanted a facsimile of the scene they would just take a photo.

 

Using film can be a filmmaker's way of interpreting how a scene makes them feel. IMHO.

 

Cheers, gareth



 

Well said!



 

A photograph is not a facsimile of any scene.

 

A photograph is the scene. The scene is the image created. Our eyes, just like those of a painter, provide a means of pre-visualising the scene. The scene proper is in the photograph.

 

Using our eyes in conjunction with our imagination and many more things beside, we can re-arrange what will be the scene created, whether the result is a painting, a photograph or a motion picture.

 

The result is not an interpretation of any scene. The result is the scene.

 

C


While I can appreciate the desire to get the best transfer possible from a format that can be hard to make look its best, the choice by filmmakers when selecting film as a medium often has little to do with sharpness of the final image.

 

Increasing the resolution of a scan isn't about getting any more sharpness in the final image (although one might), but it is about the final image. The final image is what one is creating.

 

Increasing the resolution of the scan provides one with more control over how that final image is created. In particular, increasing the resolution of the scan will give one more control over tone and/or colour.

 

Or if working to a film print as the final result there will be corresponding concerns one can also address there.

 

In Super8, of course, there is a history in which the final result is a screening of the camera original (with or without some editing being done).

 

But that doesn't change anything. The final result, be it a screening of camera original, or a 35mm print, or a video copy off a wall, is what you are creating - intentionally or otherwise.

 

I guess it's just whether you want to take any responsibility for that.

 

I should just add that there's nothing wrong with making a video copy off a wall. A lot can be created using just such a technique. My critique is with the completely bogus assumption that with Super8, there's no advantage in a 4K transfer.

 

But that's certainly not meant to suggest there are not any other ways to skin a cat.

 

C


Colorists will know from experience (but not necessarily from theory) that there are far more colours that can be extracted from an image sourced from film than from one that begins life in a digital sensor. It is not obvious why this should be the case, but it is obvious that it is the case.

 

If it's true it should be explainable.

 

If we mean by colors those that fill extended areas without variation, then video is capable of the finest color precision. I think a good 10-bit monitor can display all colors within its gamut and its luminance range, with perfect precision. A good video camera can produce the file for that display. There are infinitely fine gradations possible in X,Y,Z, which are numbers, but not what X,Y,Z measure, which are colors. For two colors to be two, our visual system must be able to distinguish them, and our visual system is not very precise. A famous illustration of the imprecision is the "MacAdam ellipses" diagram. Talk about 8-bit per channel video representing 16 million colors is belied by there not being 16 million colors, or at least not those 16 million colors.

 

Consider a simple example. Make our white at x=y=0.333 and give it X=Y=Z=1. Now quantize each of X,Y,Z linearly into 1000 levels (almost 10-bit). Now while adapted to white, look at all the billion other combinations like (0.123, 0.456, 0.789). Are there any colors missing? Use CIE L*a*b* space to measure how far apart our digitized colors are from their nearest neighbors. For example, how close is (0.123, 0.456, 0.789) to (0.123, 0.456, 0.790)? Delta E = 0.08, so those two colors can't be distinguished. But how close is (0.123, 0.456, 0.789) to (0.124, 0.456, 0.789)? Delta E = 0.67. So those two colors might be distinguished, so we need a somewhat finer net than this almost 10-bit XYZ net. XYZ space includes much more than the whole visual gamut. Typical RGB spaces include much less than the whole visual gamut, so our 10-bit RGB monitor should, as suggested, be precise enough to display all colors. It is a simple programming exercise to find out, if you trust the CIE L*a*b* metric.
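
For anyone who wants to try that exercise, here is a minimal sketch in Python, assuming the example's white point X = Y = Z = 1 and the plain CIE76 delta E metric (not CIEDE2000):

```python
def f(t):
    # CIE L*a*b* nonlinearity
    return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

def xyz_to_lab(xyz, white=(1.0, 1.0, 1.0)):
    fx, fy, fz = (f(v / n) for v, n in zip(xyz, white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)   # L*, a*, b*

def delta_e(c1, c2):
    # CIE76 delta E: Euclidean distance in L*a*b*
    return sum((p - q) ** 2 for p, q in zip(xyz_to_lab(c1), xyz_to_lab(c2))) ** 0.5

# Neighbouring levels of the ~10-bit (1000-level) linear XYZ quantization:
print(delta_e((0.123, 0.456, 0.789), (0.123, 0.456, 0.790)))  # ~0.08: indistinguishable
print(delta_e((0.123, 0.456, 0.789), (0.124, 0.456, 0.789)))  # ~0.67: possibly distinguishable
```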

 

There are two other ways to interpret what Carl says colorists know. One is that this is about extracting more colors from the film than from the video, not about there being more colors representable. Yeah, color film is a grainy mess, and a high resolution scan will yield quite different colors at each pixel within a "grey" patch. There will be color controls, such as "saturate", that will extract new colors from this "grey" patch even though grey shouldn't respond to "saturate". These color extractions might have no relation to what was pictured, but they are remarkable.
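
A rough illustration of that effect, using a synthetic noisy neutral patch and a generic saturation control (not a model of any particular grading tool):

```python
import numpy as np

rng = np.random.default_rng(0)
# A synthetic "grey" patch as scanned: neutral on average, but grainy per pixel.
patch = 128.0 + rng.normal(0.0, 6.0, size=(64, 64, 3))

def saturate(rgb, k):
    # A generic saturation control: scale each pixel's deviation from its own neutral.
    neutral = rgb.mean(axis=-1, keepdims=True)
    return neutral + k * (rgb - neutral)

def chroma_spread(rgb):
    # Average per-pixel departure from neutral, i.e. how colourful the patch looks up close.
    return np.abs(rgb - rgb.mean(axis=-1, keepdims=True)).mean()

print("before saturate:", chroma_spread(patch))                 # small colour deviations in the "grey"
print("after  saturate:", chroma_spread(saturate(patch, 4.0)))  # ~4x larger: colours emerge from the grain
```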

 

I did color grading of films, including 8 mm, in the film days, and now do color grading of videos, and I've never had the experience of "extracting" or "pulling" colors from a picture. This is because it's an experience that comes from using video color grading methods (which include, e.g., the "saturate" control) on color films. It's a medium-crossover effect. Enjoy it while you can.

 

Another interpretation is more fundamental. It is that color might be more than uniform X,Y,Z on areas can measure. Can spatial variation of color create new colors? Does everyone with eyes, not just the colorists, see colors in super-8 films which they don't see in video? A better example is that of the pointillist painters who claimed to be painting new colors. It's not hard to imagine color irregularity modifying color, or even pushing it to where untextured color can't go. (I'd like to read the eminent perceptual psychologist Floyd Ratliff's book on Signac.) Are the modifications or pushes along the usual three color dimensions, or do they introduce new dimensions of color?


Another interpretation is more fundamental. It is that color might be more than uniform X,Y,Z on areas can measure. Can spatial variation of color create new colors?

 

Yes. It does. Painters have known this for a long time. And it is precisely this that dithering exploits.

 

Given access to the original signal one can refactor such a signal to occupy the limited palette of a typical screen and induce more colours than the number of colours that bit depth would otherwise suggest.
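
A minimal sketch of that induction, assuming simple random dither and using an average over the patch as a crude stand-in for the eye's spatial integration:

```python
import numpy as np

rng = np.random.default_rng(0)
source = np.full((256, 256), 100.4)    # a tone that falls between 8-bit levels 100 and 101

truncated = np.round(source)                                          # plain quantization
dithered = np.round(source + rng.uniform(-0.5, 0.5, source.shape))    # random dither

# Averaging over the patch stands in for the eye integrating over an area.
print(truncated.mean())   # 100.0  -- the in-between tone is lost
print(dithered.mean())    # ~100.4 -- the pattern of 100s and 101s induces the in-between tone
```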

 

Carl


 


 

You missed my point of post #58. The 8-bit depth of the typical screen is already enough to display all colors (within the limitations of the chosen primaries) with nearly full visual precision (meaning human visual precision, as explained in the post). Likewise the mixing of oil paint allowed painters to display all colors, with unlimited precision, within the limits of their chosen pigments. The pointillist painters were aiming at colors unachievable by those uniform mixed paints. Likewise, the new video or film colors that might be producible with color dither would not just be colors that extra bit depth would achieve, but newer ones. They could be outside the limitations of the monitor's primaries, or outside the region of CIE color space achievable with light, or even outside CIE color space, requiring higher dimension.

Edited by Dennis Couzin

 


 

I can go along with the idea that there are colours outside of the CIE colour space that might be inducible by those from within. But there are certainly colours within CIE colour space that a typical screen misses.

 

I don't understand how one can say the 8-bit depth of a typical screen (for each colour primary) is already enough to display all colours within the range of the chosen primaries.

 

It is obvious that a 10 bit signal encodes more colours than an 8 bit one.

 

Medical diagnostic monitors, for example, are displays capable of exhibiting 10 bits or more directly.

 

If the argument is that we can't see such colours or greyscales, why bother making monitors that can display such?

 

And if we can see such colours (as we can), then it stands to reason that, if using an 8-bit screen (per colour channel), we might want to exploit any in-between colours (or tones) capable of being induced on an 8-bit screen.

 

To this end we need a signal that is more than 8 bits per colour channel in order to populate this induced colour space (or tonal space). The more bits per pixel the better.

 

How much more?

 

With an analog signal it doesn't matter. The signal goes all the way. There's no need to nominate a "good enough" threshold. The history of "good enough" thresholds has been completely abominable, if understandable.

 

Now whether one can populate this "in-between" space by shooting film and transferring it to digital, or by using a jittered digital sensor (etc), both would be very good.

 

From a technical point of view I'd suggest that the latter will exhibit the same qualities we admire in film. But until then, film provides the qualities we are otherwise after, and these are transferable to digital.

 

What is not possible is to obtain these qualities with sensors the way they are currently designed. They do not exploit the "in-between" space that we're otherwise suggesting a screen is capable of displaying.

 

But perhaps the next step is to quantify, in some way, this auxiliary colour space.

 

 

C


With respect to the subject of fake film grain added to video in order to make it look like film (so to speak), we can suppose that any dithering done to a video, in order for it to fit into a target bit-depth, could additionally dither the signal in a way different from optimum dithering (i.e. different from one based on the difference between the original video signal and the target bit-depth).

 

For there are different ways of dithering. We could sacrifice optimum dithering for one that might remind us of film grain, and/or be perceived as such.

 

Graininess in film varies as a function of density (itself a function of exposure), so if wanting to induce a sense of film grain in video, one might very well want to dither a video source signal according to just such an insight. However, the result must be sub-optimal.

 

The smallest grains in film correspond to where there would have been the most photons, so the corresponding pixels in a video signal can be optimally dithered, but the remainder must introduce more noise in order to obtain the film grain differential.
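
A sketch of that trade-off, assuming (purely for illustration) a grain-like dither whose amplitude rises as the signal level falls; the curve used is hypothetical, not a model of any real stock:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 1 << 16)      # a finely-resolved source ramp, normalized 0..1

def quantize(x, bits, noise):
    # Quantize to `bits` with the given additive noise (in units of one output level).
    levels = (1 << bits) - 1
    return np.clip(np.round(x * levels + noise), 0, levels) / levels

# Dither sized only for the quantization step...
plain = quantize(signal, 8, rng.uniform(-0.5, 0.5, signal.shape))

# ...versus dither whose amplitude follows an assumed exposure-dependent "grain"
# curve (heavier in the shadows). The curve is an illustration, not a film model.
grain_amp = 0.5 + 2.0 * (1.0 - signal)
grainy = quantize(signal, 8, rng.normal(0.0, grain_amp))

print("plain dither RMS error:       ", np.sqrt(np.mean((plain - signal) ** 2)))
print("grain-shaped dither RMS error:", np.sqrt(np.mean((grainy - signal) ** 2)))  # necessarily larger
```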

 

Are we really appreciating the structure of the grain differential in film so much so that we'd still be appreciating it in quite the same way were it shifted across and used to modulate the relationship between a video source and a particular bit-depth display?

 

Fortunately it's possible to model film grain precisely in the way suggested, and so it can be tested in terms of what it looks like, or how it might be appreciated.

 

And whether we appreciate the result or not (be it compared to optimally dithered video, or film transferred to video) at least we'll know there wasn't any better solution.

 

Personally (but I'm only one person) I prefer optimal dithering of video, or film transferred to video. Both provide more colours/tones, which I appreciate far more than anything fake film grain can possibly offer. And this is purely aesthetic, i.e. in case there were ever a counter-critique that these preferences are based purely on signal fidelity. For signal fidelity is not the goal. Signal richness is the goal.

 

C


If one looks at how optimal dithering is done, there are corrections propagating out from every pixel location, interacting with every other correction occurring at every other pixel, and like waves in a pond these corrections spread out, in principle to occupy the entire image domain. The image will exhibit more colour (occupying the auxiliary colour space) than a signal captured at screen res (which makes no use of the auxiliary colour space). Differences between a source pixel and a target pixel start off at maximum "error" (which is why we speak of the error being localised). Adjacent pixels are given the task of correcting the error, while at the same time contributing whatever error they possess. The corrections are always delocalising, in the sense that they spread outwards from a point. The width and the height of the image provide a limit on how much space the corrections have into which to propagate.
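
A compact sketch of one standard error-diffusion scheme (Floyd-Steinberg), which behaves in just this way: the quantization error at each pixel is handed on to its not-yet-visited neighbours, so corrections propagate outward across the image. It is offered as an illustration, not necessarily the exact scheme meant above:

```python
import numpy as np

def error_diffuse(img, bits=8):
    """Quantize a float image in [0, 1] to `bits` per channel using Floyd-Steinberg."""
    levels = (1 << bits) - 1
    work = img.astype(float) * levels
    h, w = work.shape
    out = np.zeros_like(work)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.round(work[y, x])
            err = work[y, x] - out[y, x]
            # Hand the local error on to unvisited neighbours (7/16, 3/16, 5/16, 1/16).
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out / levels

# A smooth ramp quantized to 8 bits: the column-wise average of the dithered result
# tracks the original more closely than plain rounding does, which is the sense in
# which the finer signal is "induced".
ramp = np.tile(np.linspace(0.0, 1.0, 512), (64, 1))
dithered = error_diffuse(ramp, bits=8)
rounded = np.round(ramp * 255) / 255
print(np.abs(dithered.mean(axis=0) - ramp[0]).mean())  # typically much smaller than...
print(np.abs(rounded.mean(axis=0) - ramp[0]).mean())   # ...the plain-rounding staircase error
```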

 

My intuition is that optimal dithering, for a difference of only 2 bits (per colour channel), is probably inducing the original video signal well before the corrections run out of space into which to propagate (ignoring those pixels on the edge of the screen, of course). One sees (hallucinates) the original video "through the dither", so to speak.

 

Also, based on a comparison of film transferred to video vs video transferred to video (so to speak), there is more colour (or tones) in film transferred to video (or one finds oneself unable to deny such), equally suggesting that video isn't exploiting the full capacity of the auxiliary colour space.

 

But until there's some mathematical proof of such I guess it will have to remain just an entirely convincing hallucination.

 

C



This has been an interesting thread. However, keep in mind that one person's idea and concept and passion is not the same as another's. To tell anyone who loves small-gauge film, be it 8mm, Super 8mm, 9.5mm or 16mm, that they should shoot something else (film or digital) to get a better image when they already love their chosen format (chosen for their project or passion or whatever) is rude and wrong. It would be like watching a watercolor painter on location and telling him/her that they should use oil, or worse in a way, use an iPad and work it with your fingers to create a lasting high-quality digital image of your work.

It's not film versus digital here, nor one format against another; it's how each of us feels when working in whatever format we choose at the time. If this were not so, there wouldn't be any art supplies available, nor Super 8mm (or Double 8mm film, for that matter) still available for us to use. All the HD scanning resolution and other technical support and artifacts discussed are nice and relevant for when and if any of us decide to go one route or another in digital conversion of film... but the bottom line is, do you like using Super 8mm for what it is and for how it works for you?! My answer is yes, it works fine for what I want when I'm using it.

And since the film manufacturers are still producing film for this market, so it must be for the many out there currently using film, those who have just discovered film, and those yet to discover analog filmmaking. Try throwing high-tech this or that at those shooting still film formats, and even more so at those in the Lomography world! This forum is for Super 8mm supporters, shooters and users; as such, we need comments that support this world of ours, not to diminish or tear it apart. Just my two cents here, folks. Kind regards to all on this forum, and let's keep our Super 8mm film world alive and fun as long as we can!



Speaking of adding fake grain, which today in D.I. suites often involves a scan of film grain as a layer to be added over the digital image, I came across this article today by coincidence while reading a March 1969 issue of "American Cinematographer" about the Robert Wise movie "Star!", shot in 65mm by Ernest Laszlo. They had to create some fake b&w newsreel footage of different decades that Julie Andrews is watching in a screening room -- and I guess Laszlo decided to shoot these on modern 35mm Arriflex cameras using color negative film. L.B.Abbott of Fox's effects department then had to degrade the footage. His description:

 

"Modern emulsions have a much finer grain than the older raw stocks, so the first step was to add grain to the new footage. The most pronounced grain you can get in modern film is from the blue-sensitive record of multi-layer color stock.

 

"So we photographed a white field, took a blue-light print from it, enlarged it, scratched it up a bit, and so forth. Then we bi-packed this 'grain-modulating' film with a print of the new footage that had been shot and made a dupe from it.

 

"I've got some really bad developing machines that can be made to produce a terrible result by running them too fast. The turbulence of the solutions in incorrect and the resultant developer fluctuation produces the kind of streaks that emulate an old-time dupe.

 

"By making as many generations as you want from the dupe, you can run the contrast gamma up and down the scale. This made it possible to match the varying contrast of the stock scenes and also the gradual improvement of quality over the years, as film stocks and processing methods improved.

 

Because "Star!" was photographed on 65mm color negative stock for 70mm release printing, the final step was to re-photograph the newsreel scenes (both old and new) onto 65mm film. This Abbott did by era-projecting the scenes onto the screen used in the projection room set.

 

Some of this I find a bit confusing. A print off of a negative where the subject is a white field is mostly clear on the print, almost no density, so I'm not sure how much grain is visible. I'm guessing a "blue-light" print means a print of just the blue information but I suppose it could be referring to blue-sensitive b&w film being used to copy the color negative. But if it is a positive print of a white field, then I guess it was printed dark enough to leave some density on the print stock. Then I guess Abbott bi-packed this with a print made off of the original scene. Again, he doesn't mention when he switched to b&w emulsions for this process nor why Laszlo didn't shoot originally with b&w stocks.


I guess if you start with fine-grain stock, you have more room to play in how you might manipulate printing to get whatever target you're aiming at. If starting from coarse-grain stock you might find yourself running out of room to manoeuvre in matching the target. Or it might have just been adopted as the result of experimentation rather than from any pre-rationalised technique. Interesting nevertheless.

 

One of the great films I've enjoyed very much in terms of matching contemporary shot material with historical/archival material is Woody Allen's 'Zelig'.

 

It makes so much fun of the very concept (of matching) that it otherwise uses, and it does so technically well.

Edited by Carl Looper

 

Some of this I find a bit confusing. A print off of a negative where the subject is a white field is mostly clear on the print, almost no density, so I'm not sure how much grain is visible. [...]

 

Yes, it is a bit confusing. Although they say a "white field", it doesn't necessarily mean they wanted the resulting neg to be black (dense) and the print from it to be white (clear). Technically I imagine they would have set the iris on the so-called "white field" so it formed a mid-density result on the camera neg, i.e. setting the camera iris according to a reflected meter reading of the "white" (rendering it as grey, so to speak), and then making an equally mid-density blue-light print from that.

 

If this is what they did in fact do, perhaps they might have better described the original field as a "grey field" to avoid confusion?

 

C

Edited by Carl Looper

The person who started this post was not interested in Super 8 filmmaking; he came on here more to tell us "how poor the format was".

Without having a real understanding of film in all its formats. And he got a Super 8 camera on a lark, then was shocked by the price of film and processing. It's not cheap for us granddads who still use this 1965 film format.

 




The OP is asking why we shoot Super 8, aside from the aesthetics and, in his mind, the low resolution of Super 8.

 

1. Aesthetics is the first thing to consider when choosing a format. Super 8 provides its own unique aesthetics.

2. "Low resolution" in relation to what? Or what do you mean by "resolution"?

 

Everyone hears something different when they hear the word "resolution." For film, it's LP/mm and spatial resolution, and MTF, and perceived sharpness. You want to talk LP/mm?

 

You can't compare film and digital one-to-one. They do what they do completely differently, and produce completely different-looking results. Super 8 does what it does better than anything else trying to look like Super 8. :)

 

 



Interesting topic. I can't resist putting in my 2 cents.

I have yet to shoot anything on Super 8. I just picked up a mint Nizo 561 Macro. The reason I want to try this out is for the same reasons I shoot 35mm still film: I love the look of it. In addition, I love the surprise I get after I either develop my film or get it back from the lab. One never knows how the image is going to turn out until it's developed. It forces me to take my time in getting the perfect shot. The best photos I have ever taken were on a film camera. Another reason is that the cameras look awesome. Some old Super 8 cameras are pretty enough to be displayed in a museum! It is not a low-cost hobby, but it's doable enough to try it several times a year. At least for me it is. I love old tech. Don't get me wrong, digital video is great. It is low cost and very convenient. I own a Lumix G DMC‑GH4 which I enjoy using often. I also own an iPod filled with several albums. At the end of the day, though, I'd rather listen to my vinyl record collection on my Technics 1200.

Link to comment
Share on other sites

This is an excellent thread. Many others have already written most of the things I want to say, but regardless of that:

 

- I love the physical aspect of film, the feeling that the images are "real", they exist "here" and not in some "virtual place" that's here but at the same time "not really here".

 

To better explain what I mean: when, due to some fault, electricity is occasionally unavailable and you are sitting there in your home, unable to access your computer and all the data there and online, you get this feeling of being pulled back to the physical reality of here and now. In that moment one realizes the worth of books, paper, pens and printed photos. Those are all "here".

 

- I also love the way film captures reality in such a different way. It's almost as if it were at the same time unreal and, due to that, more real.

 

I haven't really grown up watching old film-originated home movies -- I think the first 8mm film I ever saw projected, I projected myself. So it's not about nostalgia for me. If I wanted to go for that, I'd have to buy an 8mm video camera ;)

 

- And many more reasons...

