Everything posted by Carl Looper

  1. The article on the decline and rise of film grain was a great read. I can't translate German myself but the Google translation seemed quite fine. As a technical person I often struggle with subjective descriptions of film and video/digital, e.g. that film might be described as "warm" where video/digital might be described as "cold". But I translate such into technical terms. Grain is noise. Noise is heat (entropy). Warmth is a function of heat.

     The author of the article notes that such perceptions are, from one point of view, a function of prior "viewing habits", that such is "a cultural phenomenon, not innate", that "it's just a matter of time to get used to the new hyper-reality". But is this said as a way of avoiding further analysis along these lines? To acknowledge the counter-argument as a way to tame it - as something known, something to be argued despite - need not silence the alternative.

     The concluding paragraph (Google translation):

     Here the author passes through a "reality argument" (the difference between that which "looks like" and that which "is"). But is this only by way of a prelude to separating out film grain as something else, as something that can cross that boundary - from something already (naturally) there in film, to something that, by virtue of its (natural) absence in digital/video, is added (or must be added)?

     Is the overall argument what it appears to be - that film grain remains something desirable, rather than something culturally determined (or otherwise tolerated by the culturally immune)? Is the digital renaissance of film grain something that re-confirms a belief in the warmth of film (noise is heat) - a warmth that must be otherwise manufactured (warmed up) in the digital domain? Seemingly. And does it matter if one arrives at this warmth via film or digital (the reality argument)? It appears not.

     The answer to that last question is, for Peter Jackson, apparently: there is no difference. And it is easy. (And I'd argue - in principle a lot cheaper.)

     But there is a domain of film making (as well as digital production) in which there is a difference. A philosophical difference, whether the technical is enlisted in the articulation of such or not. It is a difference that Peter Jackson's films do not really address in any way. In Jackson's films (as in most conventional films), the technical is subservient to a story in which it doesn't matter whether the story is constructed digitally or photographically - so long as they both look like they belong to the same universe. If only in appearance. The erasure of difference. Carl Looper
  2. Camera Sound Cancellation. I haven't actually tried this but it should work: another way to suppress camera sound is to record the sound of the camera at the same time as you are recording the scene. Back in the studio you can use software such as Audacity to sample the camera sound as a "noise template" for noise reduction software. The camera sound will have a frequency response that is quite narrow and regular, which the noise cancellation software should like. The camera sound will be at the frame frequency of the camera, i.e. 9 Hz, 18 Hz or 24 Hz (or 25 Hz if you have a Leicina), and the software should be able to suppress if not outright silence the camera sound with very little effect on dialogue. On the other hand I just watched a Super 8 film this morning in which the sound of the camera, running in the background, was a very pleasant experience. Carl
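
A minimal sketch of the noise-template idea, assuming mono WAV recordings: one of the camera running by itself, and one of the scene (dialogue plus camera). This is plain spectral subtraction with numpy/scipy, not Audacity's actual algorithm; the file names and the over_subtract factor are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, scene = wavfile.read("scene.wav")            # dialogue + camera noise
_, noise_clip = wavfile.read("camera_only.wav")    # camera running, no dialogue

# Average magnitude spectrum of the camera noise: the "noise template".
_, _, noise_spec = stft(noise_clip.astype(float), fs=rate, nperseg=2048)
noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)

# Subtract the template from every frame of the scene, keeping the phase.
_, _, scene_spec = stft(scene.astype(float), fs=rate, nperseg=2048)
over_subtract = 1.5
cleaned_mag = np.maximum(np.abs(scene_spec) - over_subtract * noise_mag, 0.0)
cleaned_spec = cleaned_mag * np.exp(1j * np.angle(scene_spec))

_, cleaned = istft(cleaned_spec, fs=rate, nperseg=2048)
wavfile.write("scene_denoised.wav", rate,
              np.clip(cleaned, -32768, 32767).astype(np.int16))
```
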
  3. Yep - a great book. Mine arrived yesterday in Australia. I brought it in to work today - 'reading' it on the train. Am particularly inspired by the engineering efforts. Also the blue screen work. Back in the day (1979), blown away by Star Wars, I read up on how they did the special effects - models against blue screen - and wanted so much to do it on Super 8. I worked it all out: contrast stocks, bipacking, everything. But I didn't do it in the end because I calculated it wouldn't look any good - the matte would wobble too much. But as Lutz says: "The tiny format and the lack of pin registration causes the masks to be uneven, but it looks very experimental". These days I'd love to see that.
  4. While not a Super8 camera, at my local camera shop last year they were selling cameras that were just a well crafted box made of wood, with a basic lens. They were functional. There are (I imagine) people out there, with their own tool shop in their garage, capable of building a Super 8 camera. Whether they would have the enthusiasm or not is another question. It's all time and money of course. And a custom camera, built in a garage, would be more expensive than the same made in a factory - but only if there were a market for the factory made cameras (which there probably isn't). So it's the garage made camera - or perhaps a one-off concept camera made by a factory for trade show promotional purposes, e.g. to sell something other than the camera. If anyone here knows how to build/machine a really basic Super 8 camera with the bare minimum of features (and no lens), could they post a rough estimate of the amount of time they would need to spend (including dollars per hour), the material costs, plus whatever extra they would demand for their efforts given that a potential benefactor would probably want to also possess the camera. Carl
  5. Who will be the winner of the "Last Roll of K40 to be Processed Lottery". Or will it be rigged.
  6. A $30,000 8mm Steenbeck. Isn't that so weird? But I guess there can be a market for such. The film archive institutions. The transfer facilities. Even if the last Super8 film was shot tomorrow there would still be half a century of 8mm filmmaking sitting around for viewing and transfer. Still - it's so strange to see. Carl
  7. However keep in mind that you can get much better colour definition if you get the full range of colours in the first place. I found daylight balanced fluoro light bulbs at the corner shop for six bucks. A veteran filmmaker (70+) acting in one of my films once said to me: Carl - you keep saying you'll fix it in post. But why not fix it now. Carl
  8. I just used a nominal vertical height of 4.01 mm (that I picked up from somewhere) and a 4:3 aspect: 4.01 / 3 * 4 = 5.3467 mm. There are conflicting numbers around. Apparently the frame is exactly 4:3. Someone told me off once for suggesting otherwise - but I was using numbers from here (which are probably wrong): http://en.wikipedia.org/wiki/File:8mm_and_super8.png These are probably right: http://www.super8data.com/database/articles_list/super8_fotmat_standards.htm But in the above the specified camera frame isn't 4:3 - it's slightly wider: camera frame width = 5.69 mm, camera frame height = 4.22 mm. But anyway, if using the above numbers: 14.8 mm (Canon sensor height) / 4.22 mm (Super8 frame height) = 3.5 (scale factor), and 55 mm (Canon lens) / 3.5 (scale factor) = 15.7 mm (required Leicina lens). Carl
  9. Hi Nicholas - hope you have had a great xmas and happy new year. If you do a search on 'Iscomorphot 2X Anamorphic' you'll find (apart from this discussion!) some pictures of the lens, and what some have done with it. The lens was specifically designed for Super 8 projectors. I've tested it with a Canon EOS - just holding it in front of its zoom lens and taking some snaps - and the vignetting is definitely gone at the 55mm focal length. I can probably take it a little wider, but will need to do more exact tests (rather than handheld ones). For the handheld tests I found there was some chromatic distortion towards the edges, but that was easily correctable. There was also some barrel distortion but likewise easily correctable. Otherwise, and more importantly, the image was very sharp across the entire frame (using the Canon auto focus). Now this is how I'm determining the required lens, assuming a 55mm lens on the Canon (which could probably go a little wider). The Canon's sensor size is 22.2 x 14.8 mm. The Leicina "sensor" size is 5.3467 x 4.01 mm (the Super 8 camera frame). The vertical ratio (between sensor sizes) is 3.691 : 1. So dividing the focal length of the test lens (55mm) by 3.691 gives a requirement for a Leicina compatible lens of 15mm. The angle of view, with the anamorphic lens, would be: V = 2 * atan(2.005 / 15) = 15 degrees, and H = 2 * atan(5.3467 / 15) = 39 degrees (the horizontal uses the full frame width over the focal length because the 2X squeeze doubles the horizontal coverage). Now 15mm is definitely safe, but I may be able to go a little wider than 15mm because the test camera has a wider aspect than the Super8, i.e. some slight vignetting appearing on the test camera may still be out of frame on the Super 8. Carl
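
For anyone wanting to rerun the arithmetic, here is a small Python sketch of the calculation above. The sensor sizes, the 55mm test focal length and the 2X squeeze are taken from the post; the rest is just the standard angle-of-view formula, so treat it as a check rather than a definitive lens calculator.

```python
import math

CANON_H = 14.8                       # Canon sensor height, mm
SUPER8_W, SUPER8_H = 5.3467, 4.01    # Super 8 camera frame, mm
TEST_FOCAL = 55.0                    # focal length that showed no vignetting, mm
SQUEEZE = 2.0                        # Iscomorphot anamorphic factor

# Match the vertical field of view of the 55mm test lens on the Super 8 frame.
scale = CANON_H / SUPER8_H           # ~3.69
leicina_focal = TEST_FOCAL / scale   # ~14.9 mm, i.e. about 15 mm

def aov(size_mm, focal_mm):
    """Full angle of view, in degrees, for a given frame dimension."""
    return math.degrees(2 * math.atan((size_mm / 2) / focal_mm))

vertical = aov(SUPER8_H, leicina_focal)               # ~15 degrees
horizontal = aov(SUPER8_W * SQUEEZE, leicina_focal)   # ~39 degrees (2X squeeze)

print(f"focal ~{leicina_focal:.1f} mm, V ~{vertical:.0f} deg, H ~{horizontal:.0f} deg")
```
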
  10. I'm after information on what prime lenses, apart from the one specifically designed for the Leicina Special (the Cinegon), can be used with the Leicina. My understanding is that lenses designated as "Leica M Mount" lenses are compatible. But are there any ambiguities in lens designations (out there in the real world) that might cause confusion when looking for "Leica M Mount" lenses? Or is the designation specific enough to eliminate incompatible lenses? In addition - what other lenses (other than Leica M Mount lenses) can be used (with or without adaptors)? In particular I'm looking for a suitable prime for the Leicina that can be used in conjunction with an Iscorama Iscomorphot 2X Anamorphic lens. My understanding is that a wide-angle lens (such as the Leicina specific Cinegon) will be too wide for the anamorphic. I've read that lenses around the 40mm mark are about the widest one should go when using the 2X anamorphic. I realise a special mount will be required but I can build that.
  11. Yes - default processing is useful but only because it provides for what you would do in the absence of any additional information. But if that additional information is there - such as "I would like the process to be pushed one stop" - then you would expect that to be a non-controversial request. Indeed the very specification of "pushing one stop" is already predicated on knowledge of the technical default. Carl
  12. The white balance setting isn't irrelevant. It would be embedded in the raw file. But I only mention the white balance in terms of using the Canon software where the white balance selection would allow the software to put the raw data into the correct ICC context for rendering - ie. the image won't render as orange. Using the white balance button in conjunction with the Canon software avoids the necessity of creating a custom ICC profile. It will be done for you. But if using a daylight balanced light source - a white balance setting (of daylight) will be incorrect. There will be a minor mis-mapping of data when it comes to rendering. Might still look okay to the eye though. When writing my own software I have to establish the equivalent of a white balance setting - a parameter that ensures that when the data is rendered to the screen - that it renders correctly. Whether I extract this from the raw file, or manually enter it in the code, or have it entered through a GUI, or written in a config file - it still needs to be done. And if I use a daylight balanced light source (which is a good idea for bandwidth purposes) there is a double mapping I would have to take into account: 1. that the sensor data doesn't directly map to the rendering context (whatever it happens to be) and 2. that the film is being illuminated by light for which it wasn't designed. Carl
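
As a loose sketch of what "the equivalent of a white balance setting" might look like in home-grown transfer software: per-channel gains derived from a neutral reference (say, the illuminant seen through clear film), applied to the linear raw data before rendering. The function names, the (N, 3) array layout and the 16-bit white level are my own assumptions, not anything from the Canon software or any particular raw library.

```python
import numpy as np

def white_balance_gains(neutral_patch):
    """neutral_patch: array of linear RGB samples of a known-neutral reference."""
    means = neutral_patch.reshape(-1, 3).mean(axis=0)
    return means.max() / means          # scale each channel up to the brightest one

def apply_gains(linear_rgb, gains, white_level=65535.0):
    """Apply per-channel gains to a linear (H, W, 3) image and clip to range."""
    balanced = linear_rgb * gains
    return np.clip(balanced, 0.0, white_level)

# Usage (hypothetical): sample the illuminant through clear film, then balance
# each frame scan with the same gains.
# patch = scan[1000:1100, 1000:1100]    # region known to be neutral
# gains = white_balance_gains(patch)
# frame_balanced = apply_gains(frame, gains)
```
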
  13. K is right here. That is about economics. Taking a critical position with respect to technical correctness isn't necessarily a bad thing. I can think of plenty of works of art (and now very expensive works of art at that) which were considered technically incorrect when they first appeared. Not that such is necessarily a good reason to be technically incorrect. That's a bit unfair. Just because someone questions technical correctness doesn't mean they don't understand technical correctness. I understand Newton's Theory of Gravity but feel quite happy to question it. Einstein questioned it - and in the process came up with Relativity. Itten's Theory of Colour questions Newton's theory of colour, and many artists understand and use Itten's theory. Yet many scientists would not have even heard of Itten, for the simple reason that the university courses they did didn't require knowledge of Itten. And there are great examples of music that exploit very discordant notes to good effect - not because the composers are ignorant of harmonics, but because they are looking for something else. Carl
  14. One of the white balance settings on the Canon is for tungsten light. So if doing the transfer using tungsten light (for which reversal film is designed) one shouldn't need a custom ICC profile - just a little fine tuning of the signal in the context of an existing profile (to which the tungsten balanced capture will already be correctly mapped). Another white balance setting is the flash. But since this will have a different colour to that for which reversal is designed, using flash would require either a custom ICC profile or a custom filter in the context of an existing profile. A custom filter in the context of an existing profile is the easiest way to go, but at the expense of some slight loss of bandwidth. If intending to use flash in the transfer then suitable experiments comparing flash captured signals vs tungsten captured signals should establish pretty accurate parameters for the custom filter. Carl
  15. If you look at how colours in RGB space are transformed to colours in CMYK space, or vice versa, there need not be any loss of information, as the transforms are invertible.

      CMY to RGB:
      R = 1 - C
      G = 1 - M
      B = 1 - Y

      CMY to CMYK:
      K = min(C, M, Y)
      C = C - K
      M = M - K
      Y = Y - K

      But if one is talking about the physical colours (e.g. RGB phosphors, CMYK inks on paper, CMY dyes in film) then each of these "light sources/modifiers" may not be able to reproduce the colours (light) that the others can. This works both ways. There are colours which RGB phosphors can render that CMYK inks can't, and vice versa. This has nothing to do with any intrinsic limit in the colour spaces used, only that the mediums may not exploit the full range of the colour space (whether RGB or CMYK) in which they are represented. We can always define an RGB/CMYK colour space with 1931 CIE reference points outside the visible spectrum that will cover the entire range of visible light. But there will be points within such RGB/CMYK colour spaces that none of the mediums (in terms of visible light) can ever touch, because the reference points are outside the visible spectrum. An RGB/CMYK colour space defined with points outside the visible spectrum allows for numbers (within the colour space) that don't correspond to any of the visible colours of the mediums you want to represent in that colour space. A better alternative is to simply use the 1931 CIE colour space itself. You can define subsets of that space (called gamuts) to represent physical media such as RGB phosphors, or CMYK inks on paper viewed in sunlight, etc. Carl
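
Written out as code, the conversions quoted above look like this; the values are assumed to be normalised to the 0..1 range (that normalisation is an assumption, not something stated in the post), and the round trip at the end demonstrates the invertibility being claimed.

```python
def rgb_to_cmy(r, g, b):
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    return 1 - c, 1 - m, 1 - y

def cmy_to_cmyk(c, m, y):
    k = min(c, m, y)               # black component
    return c - k, m - k, y - k, k

def cmyk_to_cmy(c, m, y, k):
    return c + k, m + k, y + k

# Round trip: RGB -> CMY -> CMYK -> CMY -> RGB gives back the original values.
r, g, b = 0.25, 0.5, 0.75
assert cmy_to_rgb(*cmyk_to_cmy(*cmy_to_cmyk(*rgb_to_cmy(r, g, b)))) == (r, g, b)
```
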
  16. Absolutely. In terms of colour management, regardless of system, the CIE 1931 colour space acts as a standard reference colour space. It is by virtue of its historically fixed (and well defined) meaning that it forms the basis of many colour management systems. That is not to say it can't be improved. For example, equal distances in the 1931 space are not perceived as equal. An improvement on the 1931 space, in terms of perceptually equivalent distances, is the 1976 CIE LUV space. But by using the earliest well defined reference as a standard reference, all other systems can inter-communicate with each other, by reference back to this standard. The 1931 CIE space shows why any tristimulus gamut (e.g. the gamut of RGB phosphors) cannot represent all possible chromaticities, because no triangle (three points) within the horseshoe shape of the colour space (the visible spectrum) can cover all the chromaticities within that shape. Some new monitors being manufactured today introduce a fourth stimulus - yellow, in addition to red, green and blue - to better cover the horseshoe shape. Carl
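
A small illustration of the triangle argument: any three real primaries define a triangle in CIE 1931 xy space, and chromaticities outside that triangle can't be reproduced by mixing them. The sRGB primaries and D65 white point below are standard published values; the spectral green coordinate is an approximate point on the spectral locus, and the point-in-triangle test is just a sketch.

```python
def inside_triangle(p, a, b, c):
    """True if point p lies inside triangle abc (all points are (x, y) tuples)."""
    def sign(p1, p2, p3):
        return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # R, G, B primaries (xy)

print(inside_triangle((0.3127, 0.3290), *srgb))     # D65 white point -> True
print(inside_triangle((0.07, 0.83), *srgb))         # spectral green  -> False
```
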
  17. I found that putting a small downward notch in the curve for red, at the location of the spike, improves the signal immensely. Carl
  18. I did both a screenshot of your image and downloaded the profiled version, and compared the two. In the profiled version there is a spike in my histogram in more or less the same location as yours, but my histogram shows the curve going back down to zero in a relatively smooth fashion after the spike - indicating a correct exposure. In the screenshot (non-profiled version) there is no such spike in the histogram. Carl
  19. I should just add that you need to do the exposure settings in relation to the illuminant - the reference signal - rather than an image. You might want to put clear film in the gate when you do this, so as to take into account the base of the film.
  20. Hi Friedmann, Perhaps you read the histograms correctly in the first place. There is a spike at the top end of the red channel - which makes no sense to me in terms of image statistics - and I was interpreting this as what you meant by clipping, and reading it as clipping myself. That spike must be an error of some sort. But even if you ignore the spike, the red channel is still actually clipping, because the curve doesn't go down to zero (as per the normal statistics of a pictorial image) - which means the red channel here is (from a statistical probability point of view) being clipped, i.e. the red channel is overexposed. The green and blue channels could do with a little more exposure. But you can't do this in a single exposure without clipping the red channel further. So the solution here is to do two exposures - one with a good non-clipped red exposure, and another with a good non-clipped green and blue exposure. You then copy the good red channel into the version with the good green and blue channels. Carl
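
A bare-bones sketch of the two-exposure merge being suggested: take the red channel from the shorter exposure (where red isn't clipped) and the green and blue channels from the longer one. The file names are placeholders, and it assumes the two captures are pixel-aligned and stored in RGB channel order.

```python
import numpy as np
import imageio.v3 as iio

short_exp = iio.imread("short_exposure.tif")   # red channel well exposed
long_exp = iio.imread("long_exposure.tif")     # green/blue well exposed

merged = long_exp.copy()
merged[..., 0] = short_exp[..., 0]             # replace the red channel only

iio.imwrite("merged.tif", merged)
```
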
  21. You know, the more I think about it the more I realise we're misreading the histogram. The y-axis of the histogram is the number of pixels with the corresponding x-axis value. Which means the histogram appears to be suggesting that 100% of pixels (y-axis) have a value where red is near or at maximum. Which is impossible. So it's simply that the histogram has been normalised. We're conflating the meaning of the x and y axes. Carl
  22. According to the author of Gamutvision: So assuming that's correct, it is only the dynamic range - rather than the frequency response - that need concern us. How many exposures are needed to capture the dynamic range? I'm not entirely convinced one needs more than one exposure. If you drop the exposure time, i.e. bringing the red channel down a little from its maximum, are details in the blacks really going out of scope, or is it just that you can't see the image details on your computer monitor? A single computer monitor image is a poor representation of the actual data. You need to vary the curves (temporarily) to see if the lower exposure version really is losing details in the dark end. Carl
  23. The curve tweaking (apart from a quick image) is done in the context of ICC profile creation. The data itself is not altered. The data stays the same as it propagates through the chain of renderers. But each renderer needs to know how to render the data - and that is what you establish when "tweaking the curves" in the context of ICC profile creation. The illuminant does emit IR - that's true - but we can ignore that. The CMOS response to IR will be very low - otherwise we'd see it in the CMOS generated digital image (infrared photography!). The CMOS sees what it can see. I wasn't suggesting it could see the full range of the illuminant. The purpose of recording the illuminant is to see what it does see. If it can't see the full range of frequencies produced by the illuminant in a single exposure then multiple exposures (at different exposure levels) are the solution. There could still be frequencies it can't see no matter how many exposures you take. Not much we can do about that. :) Now looking at your captures it becomes obvious that the intensity range of the illuminant (per channel) must exceed the range the CMOS can capture in a single exposure. The only solution there is multiple exposures and merging of the data into a higher bit level data structure. Carl
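
A rough sketch of that multi-exposure merge, under assumptions not in the post: 8-bit captures, known relative exposure factors, and pixel alignment. Each exposure is scaled back to a common reference and only its non-clipped pixels contribute, with the result written out at 16 bits; real transfer software would also want to work in properly linearised data.

```python
import numpy as np
import imageio.v3 as iio

exposures = {"exp_1x.tif": 1.0, "exp_2x.tif": 2.0, "exp_4x.tif": 4.0}
CLIP = 250  # treat 8-bit values above this as clipped

acc = None
weight = None
for path, factor in exposures.items():
    img = iio.imread(path).astype(np.float64)
    valid = (img < CLIP).astype(np.float64)     # pixels usable in this exposure
    scaled = img / factor                       # bring back to the 1x reference
    acc = scaled * valid if acc is None else acc + scaled * valid
    weight = valid if weight is None else weight + valid

merged = acc / np.maximum(weight, 1.0)          # float result, > 8 bits of precision
iio.imwrite("merged_16bit.tif", (merged / merged.max() * 65535).astype(np.uint16))
```
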