Best Super 8 footage you have ever seen (I bet)


Friedemann Wachsmuth

Ahhh I see, because I was really wondering if anyone could help me out by scanning some Super8 film I have at such resolutions and at different exposures! :D

 

Well, I could do 32K HDR of one roll of Super8 for $45000. Turn around time would be 4 weeks. But who's going to pay that?

 

C

Edited by Carl Looper

.... The projector is projecting its image directly onto the camera's sensor, where the projected frame area is very much larger than the camera's sensor area.

 

 

This sounds really interesting. But how do you cope with the contribution from the projector lens? First thought is that it won't be sharp enough for you.

If your camera has 5K, so 2.5K column pairs, what lens sharpness would you need for a meaningful result? 2500 lp/mm? I don't know if they exist; I assume not. Do we know how sharp S8 lenses are? Smaller formats did need sharper lenses. But my guess is that the available lenses will be 1/10 as sharp as what you need.

 

Or were you doing it some other way?


 


The first thing to note is that image information remains in a higher def scan after a lower def scan has been subtracted out, and that this is the case whether the original was exposed in focus or otherwise. There is also a lot of noise in the result. The question is not why the noise (such is expected, if not well understood) but why the image information? Why is there still image information?

 

The answer is that resolution is only one aspect of an image. And indeed the capture lens will play the most important role in terms of that particular attribute. The scanning lens is not a problem for conventional limit theory, as the sensor/lens relationship does not exceed conventional tolerances. Or to put it another way, it is the source image being enlarged with respect to the sensor, rather than more sensors being packed into smaller spaces.

 

Secondly, an equally important aspect is dynamic range. And limits on this do not depend on the resolution attributes of a capture lens. Where we might talk in terms of stops for analog range, we'd be talking in terms of bit-depth (bits per pixel) for the digital range.

 

Thirdly, an equally important aspect is grain (or graininess), and in particular what has become known as "grain aliasing". This is currently theorised in the same way that conventional aliasing is theorised, with similar solutions, however grain aliasing may very well require more elaboration.

 

Grain is incorrectly modelled as sitting 'below' the image, as if image information came to a stop at some nominal definition, beyond which lies some abstract concept called "grain structure". But this model is highly flawed. There is no mechanism in photochemical imaging that stops the image from permeating the atomic domain of the materials involved. Indeed, it is precisely the ability of an image to permeate the atomic material that makes a photochemical image possible in the first place. The lens certainly acts as a low-pass filter, but as mentioned, it is not only the spatial frequencies in an image that characterise image information.

The name "grain structure" is also a misnomer. Grain is noise. Noise has no structure. If an aspect of grain is found to have structure, that aspect will not be the grain. Grain is a bit like a black hole: you can characterise the outside of such in terms of structure, and the boundary (event horizon) which frames the black hole as a structure, but beyond the event horizon the concept of structure becomes more theory than practice. In relation to grain/noise, statistics, rather than mathematics, becomes more important. While the mathematics implodes, the statistics don't.

 

If nothing else, it is towards the elaboration of these aspects (and more generally the question of what an image is) that higher definition scanning is motivated. It is firstly empirical, in the sense that the information persists, and secondly "rational", in the sense that it can inspire new technical ways of creating and otherwise manipulating an image.
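The persistence claim is easy to sketch numerically. The following is a minimal, hypothetical 1D model in plain Python (box-averaged "scans" of a simulated scan line, not any particular scanner's pipeline): subtract an upsampled lower def scan from a higher def scan of the same original, and a non-trivial residual remains.

```python
import math
import random

random.seed(0)

# A 1D stand-in for one scan line of the film original: smooth image
# detail plus fine-scale grain noise.
N = 4096
original = [0.5 + 0.3 * math.sin(i / 40.0) + 0.1 * (random.random() - 0.5)
            for i in range(N)]

def box_scan(signal, factor):
    """Model a scan: average each block of `factor` source samples."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

high = box_scan(original, 4)    # "higher definition" scan
low = box_scan(original, 16)    # "lower definition" scan of the same original

# Upsample the low def scan onto the high def grid and subtract it out.
upsampled = [v for v in low for _ in range(4)]
residual = [h - u for h, u in zip(high, upsampled)]

# Both image detail and grain remain in the residual.
residual_energy = sum(r * r for r in residual)
```

The residual carries energy from both the fine image detail and the grain, even though the underlying "image" here is a simple smooth curve.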

 

C


If we focus on dynamic range the first thing to note is that a sensor pixel will have a limit in terms of the number of photons it can register. Beyond a certain level the sensor will be saturated. The solution is simply HDR sampling where the source image is captured at different exposures, in order to provide for a greater capture range. Some finesse is required in merging the data. It's not a question of simply superimposing the images. Where a pixel is saturated in one image we do not know if that is because the original coincided with the saturation point or exceeded it (becoming 'clipped' in the transfer). So we just assume it is clipped anyway and replace the value with one from another sample in which it is definitely not clipped. And this can be finessed with a graduated overlap rather than a direct switch.
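That merge logic can be sketched as follows (a hypothetical pure-Python illustration of the graduated overlap, with made-up 8-bit thresholds, not any scanner's actual code):

```python
def merge_hdr(short_exp, long_exp, gain, lo=200, hi=250):
    """Merge two exposures of the same frame, pixel by pixel.

    `long_exp` saw `gain` times more light than `short_exp` and clips
    near 255. Near saturation we cross-fade to the rescaled short
    exposure instead of switching abruptly (the graduated overlap).
    """
    merged = []
    for s, l in zip(short_exp, long_exp):
        s_scaled = s * gain            # bring short exposure onto the long scale
        if l <= lo:                    # well below saturation: trust the long exposure
            merged.append(float(l))
        elif l >= hi:                  # assume clipped: use the short exposure only
            merged.append(s_scaled)
        else:                          # graduated overlap between lo and hi
            t = (l - lo) / (hi - lo)
            merged.append((1 - t) * l + t * s_scaled)
    return merged
```

With `gain=8`, a long-exposure value of 255 is treated as clipped and replaced by the rescaled short-exposure value, while values inside the 200 to 250 band are cross-faded between the two.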

 

Now, given the capacity to perform HDR in this way, the next question is the variation within a pixel, before as much as after HDR merging. We can't increase the bits-per-pixel of a given sensor, but by assigning more pixels-per-mm of the original we are effectively increasing the bits-per-pixel of a lower definition scan.

 

So even if the image were no sharper through the allocation of more pixels, the effective bit-depth would still be increased. And this can make up for losses in sharpness. It doesn't make the image any sharper, but it makes the "depth" of the image stronger (as well as more amenable to post-processing work that could indeed make it sharper).
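The effective bit depth point can be illustrated with a toy model (hypothetical numbers; the grain noise plays the role of dither): quantise a tone that falls between two 8-bit codes, once as a single low def sample and once as the average of a 16x16 patch of high def samples.

```python
import random

random.seed(1)

def quantise(x, levels=256):
    """An 8-bit sensor: clamp and round x in [0, 1] to the nearest code."""
    code = min(levels - 1, max(0, round(x * (levels - 1))))
    return code / (levels - 1)

# A tone exactly between two 8-bit codes: one sample can never
# represent it more finely than half a code step.
true_value = 128.5 / 255
noise_sigma = 0.01   # grain/photon noise acts as a natural dither

# One low def pixel: a single quantised sample.
single = quantise(true_value + random.gauss(0, noise_sigma))

# The same pixel reconstructed from a 16x16 patch of high def samples:
# averaging the dithered, quantised samples recovers sub-code precision.
patch = [quantise(true_value + random.gauss(0, noise_sigma)) for _ in range(256)]
averaged = sum(patch) / len(patch)
```

The averaged estimate lands well inside one 8-bit code step of the true value, i.e. the effective bit depth has increased, even though no individual sample got any more precise.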

 

Finally, even if the image were no sharper, nor possessed any more depth, grain aliasing becomes less as you increase the definition of the scan. And the less aliased grain, the more image is arguably revealed. How one characterises this additional image (given the main technical terms have been exhausted) remains a question. But an image it definitely is.
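A toy illustration of the grain-aliasing point (pure Python, made-up numbers): decimating a grainy signal folds the fine grain straight into the low def result, whereas deriving the same low def grid from a higher def scan by averaging suppresses it.

```python
import random

random.seed(2)

# Fine-scale grain: zero-mean noise on a uniform mid-grey field.
fine = [0.5 + 0.2 * (random.random() - 0.5) for _ in range(4096)]

factor = 16

# A low def scan as naive decimation: every 16th sample. The
# high-frequency grain folds straight into the result (grain aliasing).
decimated = fine[::factor]

# The same low def grid derived from a high def scan: average each
# block of 16 samples, suppressing the aliased grain.
averaged = [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

The averaged version carries an order of magnitude less grain noise than the decimated one, on the same output grid.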

 

The ultimate ends here are not in terms of the theory of such, but the practice of such. The theory side is just a means (and only one means) and is always up for elaboration and change. The practice is in the results obtained. By whatever means.

 

C

Edited by Carl Looper

Referring to post 78

 

OK, I don't follow, I'm out of my area and out of time. I trip up when I think that the max spatial freq that the sensor could capture might be something like 2500 cycles per mm of film neg, but the projector lens at that frequency is probably giving undifferentiated grey.

Edited by Gregg MacPherson


The projector lens wants to be a good lens. I wouldn't use a normal projector lens; a good enlarger lens is the best bet. The better the lens, the better the result (obviously). But the important point is that it's not only resolution that one is after (although more of it certainly doesn't hurt). It is equally HDR and grain anti-aliasing. And these attributes are facilitated by higher definition scanning as much as resolution is.

 

What is being transferred is an image.

 

If one wants to transfer a resolution chart then the highest frequencies in such a chart might indeed print as undifferentiated grey. But that is beside the point. An image consists of more than just its frequency components, and it is these other attributes that are equally facilitated by allocating more pixels-per-mm, as much as frequency components (or the lack thereof) might be facilitated.

 

C

Edited by Carl Looper

 

 

Do you have something in mind? How many lp/mm do you think you will get?

 

Well, I just used an Apo-Rodagon 50mm enlarger lens. But really any lens will do, because it's always about what is doable in practice.

 

The lp/mm of a lens is just a number the lens exhibits at some nominated response level, e.g. at a 30% response level. But this response level is purely arbitrary. The actual lp/mm transmitted by the lens doesn't end at this nominated response level or any other level. The lp/mm continues to rise as one decreases the nominated response threshold. Exactly the same lens will transmit a higher lp/mm, just at a lower response level. As the response level is decreased towards zero, the lp/mm increases towards infinity.

 

This is not entirely true but it's true enough.
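This can be sketched with an ideal diffraction-limited MTF, which is also where the "not entirely true" comes in: even a perfect lens has a hard cutoff frequency, so the resolved lp/mm rises as the response threshold drops, but stops at the cutoff rather than going to infinity. The cutoff figure below is a hypothetical round number (an ideal f/2.8 lens at 550 nm cuts off around 650 lp/mm).

```python
import math

def diffraction_mtf(f, f_cutoff):
    """MTF of an ideal, aberration-free lens at spatial frequency f (lp/mm)."""
    v = f / f_cutoff
    if v >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(v) - v * math.sqrt(1 - v * v))

def lpmm_at_response(threshold, f_cutoff, steps=20000):
    """Highest frequency whose response still exceeds `threshold` (linear scan)."""
    f = 0.0
    step = f_cutoff / steps
    while f < f_cutoff and diffraction_mtf(f + step, f_cutoff) > threshold:
        f += step
    return f

fc = 650.0  # hypothetical diffraction cutoff in lp/mm

# The same lens "resolves" ever higher lp/mm as the threshold drops.
resolved = {t: lpmm_at_response(t, fc) for t in (0.5, 0.3, 0.1, 0.01)}
```

Lowering the nominated response level from 50% to 1% roughly doubles the lp/mm figure for the very same lens, which is the arbitrariness being described above.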

 

This is also why you want to increase the DR: so the capture remains sensitive to the very tiny changes in response level ("undifferentiated grey") that high lp/mm figures would produce.

 

C

Edited by Carl Looper

The MTF graphs I've looked at for lenses are quite non-linear, and extrapolating, it looks like they will show zero contrast (MTF=0) at quite low freq values. So intuitively (that word again), I didn't see how one was going to get measurable contrast heading towards a freq of infinity.

 

I like the lp/mm thing. Projecting a test reticle with the line pair patterns on it through the lens. You get a very intuitive feel for the lens (smiley face). But what contrast value your eyes are actually seeing is a bit of a mystery. Maybe the more desperate one is the sharper it looks.


Here's some 500T I shot with my Logmar trying only to use available light and a Rainbow f1.0 Lens.

 

 

I was getting anywhere from 5-20fc and let the Lab (Gamma Ray) do a best light 2K. Was using a monopod for about half of it so most of the shaking is from that on a few shots.

Edited by Joel Rakowski

The MTF graphs I've looked at for lenses are quite non-linear, and extrapolating, it looks like they will show zero contrast (MTF=0) at quite low freq values. So intuitively (that word again), I didn't see how one was going to get measurable contrast heading towards a freq of infinity.

 

I like the lp/mm thing. Projecting a test reticle with the line pair patterns on it through the lens. You get a very intuitive feel for the lens (smiley face). But what contrast value your eyes are actually seeing is a bit of a mystery. Maybe the more desperate one is the sharper it looks.

 

Yes, that's right, lenses can bottom out before infinity, and for reasons that also have nothing to do with lenses.

 

But as mentioned quite a few times now, it doesn't actually matter. One can shoot film out of focus (i.e. deliberately flat-lining the MTF above low frequencies) and one will still get more image information in a higher def scan than in a lower def one. For the reasons elaborated.

 

Simply put: image information is not limited to that attribute of an image measured by MTF.

 

C


Here's some 500T I shot with my Logmar trying only to use available light and a Rainbow f1.0 Lens.

 

 

I was getting anywhere from 5-20fc and let the Lab (Gamma Ray) do a best light 2K. Was using a monopod for about half of it so most of the shaking is from that on a few shots.

 

I enjoyed this. Quite spooky.
