Everything posted by Carl Looper

  1. Our eyes/brain are the final recipient, but keep in mind that reversal is designed to be projected by a tungsten illuminant - so although our eyes can see a wide range of colours, there isn't that wide a range being projected. The available range of colours is defined by the illuminant (the projector bulb) - not the film. The film does nothing other than remove light from the illuminant. The remaining light is less than the full range issued by the illuminant. So all we need to do is record the illuminant we're using and see how the CMOS/RGB is responding to it.

     Regarding RGB: when I talk about RGB I'm simply talking about the three data channels in which the CMOS response is recorded, rather than "colour" as such. It is just three numbers per sensor pixel. One could call them XYZ or ABC. For the moment we can think of them as having no meaning other than that they represent what the CMOS/camera chip wrote to memory in response to the illuminant.

     Now we can assign a meaning to those numbers by noting how they differ from each other with respect to our illuminant. Ideally the three numbers should be the same. If not, it means the CMOS is more sensitive to one component of the light than another. And this is where clipping can occur. Let's suppose the numbers are 8 bit (for argument's sake) and that R=255, G=200 and B=128 (to be dramatic). Then if we were to increase the exposure in order to bring out more blue, the red would clip because it has nowhere to go beyond 255.

     One solution is to digitally increase G and B (rather than increase exposure), but then we're not exploiting the full range of the sensor for those channels. The correct solution is the one you've suggested - multiple exposure. You don't necessarily need to use filters, because you can just copy the "good channel" from each exposure into a new file. The only thing left to do after that is perhaps tweak the curves on each channel.

     Carl
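     A minimal sketch of that channel-merge step, assuming two hypothetical registered 8-bit captures held as numpy arrays and a known exposure ratio between them; the function name, array names and ratio are all invented for illustration:

```python
import numpy as np

def merge_exposures(short_exp, long_exp, ratio):
    """Copy the 'good' channel from each exposure into a new image.

    short_exp: H x W x 3 uint8 frame exposed so red stays unclipped.
    long_exp:  H x W x 3 uint8 frame exposed to bring up the blue.
    ratio:     exposure ratio long/short, used to scale the short
               exposure's red channel into the long exposure's range.
    """
    out = long_exp.astype(np.float32)
    # Red has clipped in the long exposure, so take it from the
    # short one and scale it up by the known exposure ratio.
    out[..., 0] = short_exp[..., 0].astype(np.float32) * ratio
    return np.clip(out, 0, 255).astype(np.uint8)
```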
  2. All of these are digital devices. Paper might seem an odd thing to call a device, but it's just the rendering context for printers: the printer's screen. But film is just data. In the same way that you can't observe data on its own, you can't observe film on its own. You can only observe how film is being rendered by a device such as a projector/screen, or sunlight/retina, or LED/CMOS. Digital image data can't be observed on its own either. You can only see it in terms of how it is being rendered by a device such as Photoshop.

     So you have to look at the way the film is being rendered by a particular device. In our case the device is one we have assembled ourselves. We are the manufacturer. And like manufacturers we can elect to create a profile for our device that conforms to colour management systems such as ICC.

     Now as far as I can tell, the Diffuse Spectral Density chart is effectively the output 'profile' for the film in terms of light, and the Spectral Sensitivity curves are the input 'profile'. The density chart shows the range of light frequencies that each dye, at a certain density, absorbs. Or put another way, it shows which light frequencies are transmitted. For example, let's suppose we have an area of the film in which the "magenta dye" density is 0.5 and the other dyes are near zero density. According to the graph (if I'm reading it correctly), that area of film would absorb light of a frequency between about 500 and 600 nm (a green colour) and therefore transmit the remaining frequencies: 0 to 500 nm (blue) and 600 nm upwards (red), the sum of which is magenta (red + blue = magenta).

     We need to characterise what range of light, transmitted by the film, is captured by the CMOS-to-RGB device, given a known illuminant and the response of the CMOS to that illuminant. We need to marry all this information into a single profile because there is no need to do otherwise. The system is a fixed system, and the profile will characterise it. If the profile is ICC compliant then we don't have to worry about characterising any other ICC compliant parts of the system downstream from this component, as they will already have a profile.

     Carl
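     The relation behind that reading is that diffuse density D and transmittance T are linked by T = 10^-D at each wavelength. A short sketch with made-up numbers, assuming a single magenta dye of density 0.5 centred on green:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)   # nm, visible range

# Hypothetical magenta dye: density 0.5 centred on ~550 nm (green),
# falling to near zero elsewhere.
density = 0.5 * np.exp(-((wavelengths - 550.0) / 40.0) ** 2)

# Diffuse density D and transmittance T are related by T = 10**-D.
transmittance = 10.0 ** -density

# Around 550 nm only 10**-0.5 (about 32%) of the illuminant gets
# through; blue and red pass almost untouched - the sum reads as magenta.
print(transmittance[wavelengths == 550])   # ~0.32 at the green peak
```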
  3. Hi Friedemann,

     There is a summary of creating ICC profiles here:

     http://www.color.org/creatingprofiles.xalter

     The last paragraph of which mentions a 3D look-up table, discussed further here:

     http://en.wikipedia.org/wiki/3D_LUT

     This LUT is basically what I've been discussing - defining a mapping between input data (eg. raw image data from a particular capture system) and how it should render by default on a particular output device (eg. a computer screen). Once this LUT-to-ICC-profile step is done, all subsequent data through that same profiled input system can be viewed on any ICC profiled output device, and look the same.

     So the task boils down to defining this LUT, which is to say: mapping the light > filter > film-processing > light-film-projection > CMOS > raw-data chain to a particular rendering intent (be it ICC or some other colour management system).

     Carl
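     A sketch of what applying such a LUT amounts to, assuming a lattice of the kind an ICC table encodes; the identity lattice below is just a stand-in for a measured one:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N = 17                                    # lattice points per axis
axis = np.linspace(0.0, 1.0, N)

# Identity 3D LUT: each lattice node maps an input RGB to itself.
r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
lut = np.stack([r, g, b], axis=-1)        # shape (N, N, N, 3)

# Trilinear interpolation between lattice nodes, as LUT renderers do.
apply_lut = RegularGridInterpolator((axis, axis, axis), lut)

pixels = np.array([[0.25, 0.50, 0.75]])   # normalised raw RGB samples
print(apply_lut(pixels))                  # identity -> [[0.25 0.5 0.75]]
```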
  4. Note that this is just a one-off procedure to create the profile. It is not something you have to do every time. Once the profile is established, all subsequent film signals entering the digital domain via this profile will have the same default look (+/- any minor variations in processing).

     Now the "rendering intent" - in the ICC sense of the term - is not about the above process as such, but about the rendering intent of a colorist working on the results of the above process. The "rendering intent" used to establish a profile in the first place simply "bootstraps" the profile to a default intent. After that, a colorist working on the data, using ICC software (where the software uses the created profile), defines what ICC means by "rendering intent".

     Now the colorist may not do anything. Eventually the data will be used for a render back to film, or to a photographic print, or YouTube, etc. And if those rendering devices are ICC enabled, the signal will reproduce as the colorist intended - and if the intention of the colorist is to do nothing to the data, then the data will render according to the default intent established when the profile was originally created.

     Carl
  5. "1.) Of course the stock needs to be developed in a defined environment. As Karl lined out, it needs to be as close to reference E6 as possible. Since processing is rather reproducible, the development will not bring massive deviation."

     The ICC profile is designed to allow creative work produced in one context to look the same in another context. For example, if I paint a picture on my computer and save it as a digital file, I could find it looks completely different when opened on another computer system. The data itself hasn't changed, but the way in which the data is rendered has changed. With ICC managed data this can be prevented. When working within an ICC managed workflow your rendering intent becomes known, because the conditions in which you are creating the picture (looking at a computer screen and selecting icons) have been previously established in a profile. As that data moves around, the embedded profile can connect with other device profiles and ensure the data is rendered the same when viewed in another context.

     "2.) Anything else (like projection/scanning light spectrogram, CMOS gamut etc.) should not be part of that profile. Au contraire, everything else needs its own profile."

     The predigital pipeline of film/processing/projection doesn't exactly fit the ICC model of a managed pipeline. You can model the film/processing/projection pipeline, but the only way to manage variations in that pipeline is to do so manually, or stick to a fixed pipeline (reference E6 etc). Even then, the rendering intent remains undefined. A model is needed when transferring information from the non-digital domain of film into an ICC managed pipeline. The model needs to take into account the purposes of ICC on the one hand and the information entering that pipeline on the other. Of particular importance is the rendering intent. It is not yet defined. So we have to establish that model and rendering intent.

     One way of doing so is to: a. assume a reference E6 process, and b. assume the intent is how that referenced process looks when projected in a dark room. But we still don't have a profile until we define how that intent will be accomplished. Once defined in terms of light, CMOS etc., we can encode such as a device profile that allows the transferred data to pass through viewing nodes and render as intended. The simplest way of doing this is to just manually adjust the render of the data to fit the intended colour, using an appropriate ICC profile tool.

     Carl
  6. To clarify the situation with an example: let's suppose you offer a service whereby you process film according to a particular recipe in which the film comes out "too red". The question here is what determines that description of "too red". Is it the process? Well, no. What if it is meant to look like that? If so then the image is not "too red". If the way it looks is the way it is meant to look, then the rendering intent has been satisfied and the profile is simply: render these RGB values the way you are already doing so.

     But if the rendering intent is that it should not have as much red as it does (when viewed on a particular output device), then it needs a different profile. It needs a profile that says: reduce the red by a certain amount. Or flip the greens to blue. Or whatever. The profile can't define the rendering intent. It is the rendering intent which defines the profile. As the digital file moves from one output device to another, the profile ensures the rendering intent is preserved.

     Carl
  7. Note that Kodak is one of the founding members of the ICC. We need to keep in mind that film can't be directly defined in terms of ICC profiles. An ICC profile preserves a rendering intent as a profiled signal is viewed on different output devices. To put it another way: the transfer of light, to film, back to light is not under digital control. We can't use an ICC profile to manage this transformation. We can use a longer time in the stop bath etc. But not the ICC profile.

     So the task is not to describe film in terms of ICC profiles per se, but to define an ICC profile that says how particular light, filtered by a particular film, captured by a CMOS sensor and written to a digital file, should be rendered when viewed on a particular output device. The answer to that question becomes the profile.

     Carl
  8. No probs. I'll do that. We can then connect the film profile to your scanner profile (and mine) and see what happens. C
  9. What we can do is make the assumption that the graphics in the Kodak pdf are exact, and then test that assumption. The first task is to take the charts into a vector program and hand trace the curves. We can assume that the centre of each physical line in the graphic is where a precise sample point should go, with an error margin of +/- half the thickness of the line (because one will be hand placing the vector points). We can then draw a spline through those points and tweak it until it matches the centre of the curve along its entire length. On top of that we can put a GUI for looking up particular signals in the digital curves and testing those curves against known signals. And then publish the results in terms of standards such as ICC profiling.

     Carl
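     A sketch of that tracing step, with invented sample points standing in for ones placed by hand on the centre of a published curve:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hand-placed points on the centre line of a printed curve (values
# here are made up for illustration, +/- half the line thickness).
wavelength = np.array([400, 440, 480, 520, 560, 600, 640])    # nm
density    = np.array([0.05, 0.95, 0.60, 0.15, 0.08, 0.05, 0.04])

curve = CubicSpline(wavelength, density)

# The spline can now be sampled anywhere between the traced points;
# tweak the control points until it sits on the printed curve.
print(float(curve(450.0)))   # interpolated density at 450 nm
```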
  10. Yes - the Kodak spec sheets are an ICC profile of sorts, but insofar as they are embodied in a graphic, they are not very accessible ICC profiles - and their accuracy is unknown. To what accuracy did the graphic designer translate the data? A good profile would be one that provided a lookup table or equation for generating the curves. Note that the dye curves are smooth, which I assume is because each dye is a constant - so they could be represented in terms of a mathematical equation for generating the curve. But a lookup table would probably be the most likely way the sensitivity chart would need to be represented.

      Perhaps this thread is the beginning of a project to write up the ICC profile for the pipeline. I'll need this same info in due course, myself, for my own work. I purchased a roll of 100D a few weeks ago (and one of the last rolls of PlusX, by the way). The interesting thing here is that in the absence of accurate profiles (unless someone reveals otherwise), the only way to put such together is to do the measurements/experiments ourselves. But gosh - experimenting with Super8 around here can be interpreted by special people as if we were experimenting with nuclear energy. :)

      Carl
  11. Yeah - he should look in the mirror and stop mistaking someone else for the one who is looking angrily back at him.
  12. Hi Friedemann, this is an interesting question. The technical sheets provided by Kodak allow one to reconstruct the colour signal propagation in terms of light frequencies - but of course, what matters is the frequency of the light source used during scanning. Without a profile for that it's not possible to complete the entire pipeline. But let's follow the diagrams in the tech sheets for 100D as far as they take us. In the Kodak technical sheet ( http://motion.kodak.com/motion/uploadedFiles/TI2496.pdf ) there are two diagrams on page 4: F002_1045AC and F002_1046AC.

      The first describes the response of each layer in the emulsion to the incoming light. So for example, the "yellow forming layer" has a peak around 440nm. But 440nm is blue light? Well, of course, the film is reversal, meaning that it eventually undergoes reversal of this negative response. So where there is a peak in the "yellow forming layer" there will be a corresponding trough/valley in the reversed result, ie. for blue light (440nm) there will be next to no yellow dye in the result. As there should be.

      The second describes the frequency of light that each dye absorbs at a particular density. For example, from the chart the yellow dye, at peak density, can be read as absorbing a narrow band around 440nm. Which would be correct, because the yellow dye absorbs (blocks) blue light, therefore letting through yellow light.

      So in principle, we could digitise these graphs and prepare a lookup table that maps incoming frequencies to outgoing frequencies. Since the curves are not exactly the same, there is a bias there, which a lookup table can capture. It would be good if Kodak published a list of numbers in addition to the graphs - much easier to enter into the computer.

      Anyway, what we don't have is a description of the light source used during digitisation (what an old sounding word that is) - what frequencies it generates. Without such we don't know how much each dye layer is being illuminated. But we can measure the scanner's light source (without any film) in terms of the RGB values it gives when digitised. And using any tech specs about the camera we can map those RGB values back to frequencies, back to dyes, back through the lookup table, back to the original frequencies ... and tweak accordingly (correct for the bias in the lookup table). Can't forget the 80A filter.

      Carl
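      A sketch of how those pieces could be married numerically, using hypothetical Gaussian stand-ins for the scanner light, the film's transmission and the sensor channel sensitivities (all three would be measured or hand-traced in practice):

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)   # nm
dw = 5.0                            # integration step

def bump(centre, width):
    """Crude Gaussian stand-in for a measured spectral curve."""
    return np.exp(-((wl - centre) / width) ** 2)

illuminant = np.ones_like(wl)                     # flat test light
film_trans = 10.0 ** -(0.5 * bump(550.0, 40.0))   # magenta-ish patch

sensor = {"R": bump(600.0, 40.0),
          "G": bump(540.0, 40.0),
          "B": bump(460.0, 40.0)}

# Predicted raw value per channel: illuminant x film x sensitivity,
# summed over wavelength.
raw = {ch: float((illuminant * film_trans * s).sum() * dw)
       for ch, s in sensor.items()}

# The same sum without film_trans gives the scanner's own bias -
# the correction described above (RGB of the bare light source).
no_film = {ch: float((illuminant * s).sum() * dw) for ch, s in sensor.items()}
print({ch: raw[ch] / no_film[ch] for ch in raw})   # film's contribution only
```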
  13. Hi Nicholas - I did get your private message. Thanks heaps for that. Looking forward to seeing your experiment. Wonderful stuff. Hopefully no goblins are listening in and getting angry that we actually enjoy this sort of thing. You know, someone I knew in the early nineties (his name is Robert Kitchell) did something similar - but it was during the day and didn't involve long exposures. He rode from one end of Canberra (in Australia) to the other, taking a single frame every x seconds.

      Carl
  14. Hmmmm ... doesn't like being called 'Karl'. We have his Achilles heel. And Karl is such a civil person himself. Consider this gem: Well gosh - what a sin - wanting to be right? God forbid that right information should get into anyone's hands. Carl - the stalker.
  15. Hi Karl - so you managed to read my post. That's an achievement. Keep up the good work. Must have been the small number of words involved.
  16. Obviously our fiery friend Karl was unable to get past the first line of Anthony's post before his jump-to-conclusion-and-flame approach to social networking kicked in. I can't see anything about Anthony's post that warrants Karl's attack in any way. Anthony finances his own work. He ** finances ** his own work. But that's not enough for Karl. According to Karl, Anthony should do more. Why doesn't Karl put his own money where his mouth is? Oh, that's right. Karl doesn't pay to do anything in the film industry. He gets paid. And he wants everyone to know he does. As often as he can. What a wanker.
  17. The native resolution of Super16, relative to Super8, is more like 2 times. But in terms of digital intermediates, where there is an opportunity to digitally process the film signal, it is not that difficult to improve the native definition of the Super 8 signal by a factor of two. This is because one is dealing with a motion picture signal rather than a still image. In a typical motion picture signal there is a high correlation between the image in one frame and the image in adjacent frames. Indeed, it is precisely this correlation that makes moving picture compression possible. But the grain - which acts as a limit to the definition of the signal in one frame - is completely uncorrelated with the grain in another frame. This separation of signal and grain (or noise) allows the signal to be enhanced without enhancing the noise. Indeed, the noise drops because it is statistically self-cancelling. As the noise drops, the resolution increases (because it is the noise which limits the resolution). In other words, you can improve the Super8 signal to that of unimproved Super16.

      Of course you could do the same for 16mm. And 35mm. But it doesn't happen, because 16mm and 35mm are considered (for the time being) adequate without such processing. But Super 8 isn't. And so it's the perfect medium in which to experiment with this sort of thing. Once you can get Super 8 up to an "adequate" definition, it is no longer a factor in whether to use it or not. Getting Super8 up to the definition of native 35mm is a lot harder, but not entirely out of the question. However, the lens starts to act as a more persuasive limit, and is a lot harder to compensate for.
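      A toy demonstration of the statistics involved, assuming the frames have already been motion-compensated (registered) so the image repeats from frame to frame while the grain does not:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = np.full((64, 64), 128.0)     # static test image
grain_sigma = 20.0

# Eight "frames": same signal, independent grain in each.
frames = [signal + rng.normal(0.0, grain_sigma, signal.shape)
          for _ in range(8)]

average = np.mean(frames, axis=0)

# Uncorrelated grain is statistically self-cancelling: averaging N
# frames divides its standard deviation by sqrt(N), while the
# correlated signal passes through untouched.
print(np.std(frames[0] - signal))   # ~20: grain in one frame
print(np.std(average - signal))     # ~20 / sqrt(8), about 7
```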
  18. Excellent article. But make sure you take into account the following advice in STEP THREE - USE THE ULTIMATE EXPOSURE COMPUTER WISELY:

      "For some films, exposures involving shutter speeds in excess of several seconds may require additional exposure because the film's sensitivity decreases with continued exposure to light for long periods (this is called 'reciprocity failure'). Light meters do not correct for this phenomenon, because it varies according to the type of film. Consult the manufacturers' specifications for details. There are some tricky exposures where you can improperly expose the film whether you are using a camera meter or the Ultimate Exposure Computer. Many of these situations are addressed in 'What to do in Tricky Light Situations' in Appendix A."
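      A hedged sketch of one common correction for this, the Schwarzschild approximation, where effective exposure grows as t**p with p < 1 on long exposures; the exponent below is invented for illustration, so consult the film's data sheet for the real figure:

```python
# Schwarzschild approximation: for long exposures the effective
# exposure goes as t**p rather than t, with p < 1 depending on
# the film. p = 0.9 here is a hypothetical value.

def corrected_exposure(metered_seconds, p=0.9):
    """Extend a metered long exposure to compensate for
    reciprocity failure (only meaningful for multi-second times)."""
    if metered_seconds <= 1.0:
        return metered_seconds          # reciprocity still holds
    return metered_seconds ** (1.0 / p)

print(corrected_exposure(30))   # a metered 30 s becomes ~44 s
```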
  19. I'm not sure I totally agree. There are (or were) a few "going forward" points that came up, and it's somewhat unfortunate that these have been swamped by the noise of gears stuck in first. For example, I'm quite intrigued by the contributor who mentioned he had the manufacturing plans for a 70s camera with pin registration. Apart from anything else, such information would have scholarly historical value. Would have loved to hear some more on that. And there was someone who mentioned he had a machine shop - that he could build a Super8 camera body. I was quite inspired by that contribution. He wasn't someone who felt he "deserved" a camera. He was someone tentatively willing to build one.

      And myself. My contribution is in the area of technical ideas and digital image processing. The image processing is not strictly a part of the camera system as such, but when designing something like a camera it's a good idea to consider the entire system, from acquisition to release. And like the contributor with a machine shop, my contribution wasn't as a consumer who feels he "deserves" this or that capability, but as someone who would participate in the process of designing and building the camera.

      I'm still quite inspired by the idea of a new camera. As previously mentioned, I'm not interested in whether it looks professional. Indeed, I suggested a wooden box with a crank handle. This wasn't sarcasm. I was being serious. The purpose of the comment was to move the discussion away from look and feel issues (body designs) to the engineering, optical and mechanical concerns. The physics of the thing. It would be a professional camera - not because it looked like one - but because it would behave like one.

      Before one builds anything it is necessary to design it. Designs don't use up expensive resources. Like storyboards for a film, they require only pen and paper (from a materials point of view). The purpose of storyboards is to test out ideas without consuming large amounts of filmstock. And they make the editor's job easy: the editor does little more than assemble the film, rather than extract a film from hours of experiments or mindless cinematography. In the same way, discussion of how to build a camera doesn't cost anything. Well - it shouldn't, but of course, some anxious types could start reading into such words the end of civilisation as we know it.

      Carl
  20. Thanks Nick. Jenson and Borowski are starting to come apart. One can see Borowski's head on the verge of implosion. Borowski's now arguing that people like me could indirectly hurt Kodak. Ha ha. And Jenson's proposing (albeit sarcastically) that a digital camera, inside a fake film camera body, is a good idea. Ha ha. They keep coming back for more.

      Carl
  21. The direction in the original post was in terms of persuading a manufacturer to produce a new Super 8 camera. And that got knocked on the head with quite good arguments. But the discussion has since moved on from there, in various directions that remain related to the original question but spin it in different ways. What were reasonable arguments against the original assumptions have next to no meaning in the context of the new directions of the discussion. For example, talking about toy cameras has nothing to do with the original question either. But that's an interesting direction. My only argument there (apart from the difficulty/impossibility of it) is that I have no interest in toy cameras. I'm more interested in a notionally "professional" camera (in the wider sense of the term) - which has more to do with the original question than toy cameras.

      Funnily enough, those more familiar with the native quality of larger formats have more interest in the traditional look of Super8 transfers - which don't look so hot (technically speaking). They want Super8 to remain the "other" of technical quality. But that's only one way of using Super 8. Wim Wenders' use of Super8 in Paris, Texas was absolutely beautiful. There was a self-referentiality that worked to support the story of a man who had lost his wife - who was trapped in his memory of the past. But almost all uses of Super8 by so-called professionals working in otherwise different formats will only consider Super8 in the same way - as a way of invoking nostalgia. As if that was all Super8 was or could be.

      I'm more interested in using Super8 in a professional manner, and I mean "professional" in the broadest possible terms. Which has more to do with the original question than using crummy Super8 transfers of film from a toy camera in a crummy music clip. Whoopee doo. What's so professional about that?
  22. I only ever worked with Super8 in a "professional" capacity just the once. It was the early eighties. I shot a music clip. Worked just fine. We used archival Super 8 (of a protest march) intercut with new footage of the singers (in Super 8) and a couple of video shots. I think there was a bit of 16mm in there as well. The thing was put together on video for television distribution, and was screened a number of times.

      The problem here is that if the argument is that "professional" (in the narrow sense of the word) use of Super8 is a dead end, then okay - it is a dead end. And so too is the argument. Because there are other uses of Super8 that are not dead ends. You can call such usage "hobby" or "amateur", but so what. Who cares? I worked on a printing press in the late nineties, making engravings, despite such printing presses having long ago disappeared from so-called 'professional' usage. But the work I did I wouldn't call 'hobbyist' or 'amateur'. If someone else needs to bolster their ego by calling it such, then so be it. That's the language they like to use. That it reflects a very narrow view of the universe is their problem.

      Carl