
Dominic Case

Basic Member
  • Posts: 1,355
Everything posted by Dominic Case

  1. In fact the sequence for ENR/CCE/ACE is slightly different. In a normal process the film goes through the developer, which produces a silver image and a colour dye image. Then after the stop it goes through a first fixer to remove the unexposed silver halide. Then through the bleach to convert the silver image back to silver halide (ionised silver), which can then be removed in the second fixer. All that's left is the colour dye image. In the ENR etc. process, after the bleach converts the silver image back to silver halide, the film is run through a black & white developer, which simply reverses the action of the bleach - but to a partial, controllable extent. As a result you end up with just as much of a silver image, in addition to the colour dye image, as you want. It seems convoluted, but it is the way you get complete control of the extent of the effect. In comparison, the bleach bypass process simply skips the bleach (as the name suggests), leaving you with a full silver image. It's an all-or-nothing process. Some labs may offer "partial bleach bypass", but the bleaching operation is far less controllable than development, so the results aren't really reliable. Finally, ENR, ACE and CCE cannot be done in the negative process because there is no first fixer. Hypothetically I guess a lab could convert their neg process to allow a first fixer as well as a redeveloper etc., but it's probably not a viable option.
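For clarity, the bath sequences described in that post can be sketched as simple lists. This is my own informal summary of the post, not lab documentation (washes, stabilisers etc. are omitted and the step names are just labels):

```python
# Print-process bath sequences, simplified from the description above.

NORMAL = ["developer", "stop", "first fixer", "bleach", "second fixer"]

# ENR/CCE/ACE: after the bleach, a black & white developer partially
# re-forms the silver image, to a controllable extent.
ENR = ["developer", "stop", "first fixer", "bleach",
       "b&w developer", "second fixer"]

# Bleach bypass: simply skip the bleach, leaving the full silver image.
BLEACH_BYPASS = [step for step in NORMAL if step != "bleach"]
```

The comparison makes the point of the post visible: ENR *adds* a controllable development step, while bleach bypass *removes* a step, which is why only the former gives partial control.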
  2. Yes you can tell the timer whatever you want. But it won't make any sense to him or her. Telecine controls bear little or no relation to printer light corrections - and there is no standard reference point anyway. It's possible to come closer to film timing in a full DI suite, working in log colour space. But you still don't have a reference point. Even for a film print, asking for light (say) 28-29-30 only has meaning in one specific lab: a different lab would likely give you a different result for the same printer points. So it's not really clear what you would be expecting in asking the telecine timer for light X-X-X.
  3. IIRC, somewhere on the can label (it should be just after the stock number, e.g. 7291) you should find the designation 1R or 2R: 1R for single perf, 2R for double perf. These stocks could have been made in either configuration. Of course, if you have a darkroom or changing bag, you can open the can and find out. If no darkroom, you'll need to beg a favour from your lab. Sometimes they will break the roll down onto 100ft spools for you (rather than let you loose in their darkroom!). Some labs might charge for this - smaller ones might not. If you are planning to commit a lot of time (yours or other people's) or money to what you shoot on these rolls, then a dip test (aka clip test) would be good insurance. The lab will process a couple of feet of unexposed stock to check its condition, fog level etc. Again, check on charges. If you are just planning to play around to get the feel of shooting film, don't count on good or typical "film look" results. Stock that old will almost certainly be very grainy, and give you a dull, smoky image.
  4. Just read this announcement, which is BIG:- Technicolor is to announce today that it has donated its archive, one of the most important collections pertaining to the advent of color in film, to the George Eastman House. This is a massive donation - the cameras, the films, the lab equipment, drawings and schematics, correspondence between Kalmus and George Eastman. The job of cataloguing it would be a lifetime's work - but it's being done by the only surviving Technicolor researcher to have worked with Kalmus. Read the PR here.
  5. You must ask the lab to provide FLeX files with the transfer. These relate the Keykode edge numbers on the negative to the timecode on the video. I don't know whether Premiere handles FLeX files at all, or how well. If it does, then you load the FLeX data into Premiere along with the images, and it will spit out a cutting list when you are done editing. You send this, along with an offline version of the image, to the neg matcher. Alternatively, you get an EDL out of Premiere, which goes to the neg matcher along with the FLeX files, and the neg matcher should have software that will translate the EDL into a cutting list (or KDL). What is VITAL is that you talk to the lab, especially their telecine people, AND the neg matcher, AND you check what your workstation can do, BEFORE you start. Take notice of what they say. Please excuse the capitals in the last sentence, but don't overlook them. Read it again. Really. It is the only sentence in this reply that you can absolutely rely on. You will need to look into what transfer speed you should ask for at telecine, and what framerate of timecode, and then what speed your Premiere system should be set to. If you choose anything that interpolates frames or fields in the transfer, then you will have headaches getting a perfect neg match. Oh, and then there is the matter of syncing sound. Talk to the sound dept of the lab too. Don't forget when you are editing that however easy and tempting all those effects are on the workstation (e.g. speed changes, freezes, dissolves, fades), they are all much more complex on film, and will have to be catered for separately. Putting a speed change into an EDL will almost certainly screw things up for the neg matcher unless the EDL is cleaned up first.
  6. I think most of these predictions are a bit limited in their vision. Of course it's hard to predict massive changes or totally new ideas, but if we could all do that then there wouldn't be any new inventions, we'd have them already! But think back 15 years to get a sense of what has changed. Make it 20 years, as technology seems to change faster and faster. So let's think about 1990. Windows 3.0 was just out. Your PC probably ran on a 286 chip, with a 40Mb hard drive. No sound unless you added a sound card. World Wide Web was still in its infancy. No email. Dial-up connections at 28Kbps. Mobile phones were the size of a housebrick. You used them as phones! - they didn't do anything else, and only worked in the city. The first DI shot (one effect shot, not a complete film) had just been made (in the film Willow). The first Avid non-linear editing system had just been launched, but it would be a few years before NLE completely replaced cutting on film. No HDTV; no 16x9 TV. No DVDs till 1995. No thought of digital cameras for cinema. Who would have imagined the iPhone, Facebook, Avatar, Google, or the RED camera? Who would have predicted in 1990 that a computer manufacturer and the Beatles' old record label would be infringing each others' trade mark space (Apple & Apple Corps)? Or that one of them would be making postproduction systems? In fifteen years' time (2025) we might still be shooting the same tired old stories. Possibly for TV, probably for various streaming and on-demand outlets. At the high end, "stars" will have priced themselves out of the market, and we'll have mostly CGI action flicks, assisted by various forms of motion capture, virtual background capture, and so on. And we could be shooting true 3D (not just stereoscopic) with some kind of multiple lensing device to give many more points of view: like holography, but probably not holography.
The data (lots of it) will be captured instantly in the editing room, via fibre, satellite or even normal wireless transmission. So screen language (close-ups, depth of field, tracking shots, tilts, etc) will all evolve to new devices for storytelling. At the lower end, productions might still be more conventional.
  7. I guess this is a good example of the influence a "star" might have on a script. Sure, the Hatter's not that important, but if you want Johnny Depp, you've gotta build it up a bit, even if you are Tim Burton.
  8. YAWN! Even when the subject title is "Alice in Wonderland", we get mostly a diatribe on whether the 3D is any good, or if it looks like film or digital. It's no wonder that so many drearily unoriginal films are made when the response to the cinematic values of the film is so dull. I know films don't set out to replicate books exactly - they are different media. But surely it is a requirement that if you are going to use the same title as a book, then you need to do more than use the same characters and set pieces. Lewis Carroll's book was a brilliantly original and amusing but donnish look at some of the conundrums of language, logic, epistemology and philosophy, set out to entertain young children. Alice's voyage is one of inquiry. She reaches the end of the book with new wisdom. Burton's film is a fast-moving colourful adventure in an absolutely standard cookie-cutter mould of "Alice sets out to rescue the world from the mad emperor." She reaches the end of the film with eventual triumph, having "restored order" in Wonderland, and as an afterthought, having turned into a feisty young woman who won't marry the rich young idiot, and goes off to conquer the "real" world. How is this the same story?
  9. I contacted Rocky Mountain some years ago about getting some very old neg put through an ECN1 process. They quoted me to do it, but warned that it might take several months. Apparently they wait until they have enough of any given film type to make it worth making up the chemicals. If you are the first one on the bus it could be a long wait - or you could be lucky.
  10. Two polarisers probably won't do any more than one. If the reflected light is perfectly polarised, one filter will do the job. If it isn't (as is most likely the case), then adding more polarisers won't help at all. Best (as suggested before) try to minimise the light outside. A black tent or flat would be one way. And can you throw more light on the characters inside the restaurant? That would help too, if you don't see too much of the exterior (except the glass) to give the trick away.
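The claim that a second polariser adds nothing can be illustrated with Malus's law. This is an idealised sketch (real filters also absorb a little extra light, and reflected glare is only partially polarised); the function name is mine:

```python
import math

def transmitted_fraction(angles_deg):
    """Fraction of unpolarised light passing a stack of ideal polarisers.

    The first polariser passes half the unpolarised light; each later
    one passes cos^2 of its angle to the previous one (Malus's law).
    """
    if not angles_deg:
        return 1.0
    fraction = 0.5  # first ideal polariser halves unpolarised light
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        fraction *= math.cos(math.radians(cur - prev)) ** 2
    return fraction

# A second polariser aligned with the first passes everything the first
# passed: stacking two at the same angle buys you nothing.
print(transmitted_fraction([0]))      # 0.5
print(transmitted_fraction([0, 0]))   # 0.5
```

Crossing the second filter (90°) only darkens everything uniformly, which is no help against a specific reflection either.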
  11. I guess Stefan predicted the demise of 16mm just moments before the Baftas and Oscars. The success of The Hurt Locker seems to suggest that there is life in the old dog yet. But then again, it wasn't shot on his company's cameras. A great result for Aaton though. And Fuji. And female film directors. And so on. Sorry Keith, I've no idea what the missing word is.
  12. Given that almost no commercial laboratory could manage to process it, even when Kodak was forced to release the formulae some years ago, I'd say that the answer is that almost no-one would know how you could process Kodachrome II.
  13. So, you are asking why product X (unspecified stock type, gauge, speed rating or any other property) is grainier than something known to be Kodachrome. Turn the question around and it sounds like an advertising campaign of the worst sort. "Kodachrome has less grain!" - Than what? Cornflakes? But really, any answer could only be speculative, and you've hit on most of the possibilities already. I think Kodachrome was around 25ASA then, and acknowledged as remarkably fine-grained. Anything else used (for greater speed, availability, processing convenience) would be grainier. Apparently the two images were part of different films shot by different people: they obviously made different choices about the acceptability of grain versus other parameters. It would help if the film stock could be identified by looking at the entire film area, not just the image. That could tell "a film person" a lot about the stock type, whether it's duplicated, enlarged, printed from a negative, etc etc. BTW, I would pick up on your comment that "the grainy film on right can't even (h)old a solid color like the sky". Grain is an inherent part of the structure of an image in a film emulsion. There is no reason why a "solid" colour would be less grainy than any other area. It's not the same as (for example) compression artefacts in a digital image. And in fact I don't think the blue sky is any grainier than the rest of the image in the same frame, though undeniably much grainier than the Kodachrome image.
  14. It's not a hard-edged science. It's not like cutting a piece of glass to fit a window frame. More like cutting a panel of wood to block a hole in the wall. The panel of wood will be bigger than the hole, and you can position it a little to the left or right, so long as the middle is over the hole. Film has a very wide useful exposure range, which means that with a low to average contrast scene you can overrate or underrate the film and get away with it. Like a big panel of wood and a small hole. And if you are shooting a very contrasty scene, with a brightness range outside the film's abilities, don't forget that the film characteristic curve has gradual slopes on the toe and shoulder, and you can nudge the image up and down the curve to control which end of the range gets better treatment. In essence, exposing according to the recommended EI will get your mid greys exposed in the middle of the range. If you have no really extreme highlights, but you want to see into the shadows as much as possible, then you'd rate your 500 EI film at 250 or even lower. Or, you'd rate it at 500 and over-expose by 1 or 2 stops. It is exactly the same thing. If you are doing a TV commercial in a white-tiled bathroom, no shadows, you might rate at 800 or 1000. Doesn't matter what midgreys look like, you want as much range in the whites as possible. Other non-sensitometric reasons for varying the EI rating would be to do with minimising grain or trying to get away with fewer lights.
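The equivalence between re-rating the film and deliberately over-exposing it is just base-2 arithmetic: halving the EI makes the meter give one stop more exposure. A minimal sketch (the function name is mine):

```python
import math

def exposure_shift_stops(recommended_ei, rated_ei):
    """Stops of extra exposure the meter gives when film is rated
    below its recommended EI (negative = underexposure)."""
    return math.log2(recommended_ei / rated_ei)

print(exposure_shift_stops(500, 250))   # 1.0: rating 500 EI film at 250 = one stop over
print(exposure_shift_stops(500, 125))   # 2.0: two stops over
print(exposure_shift_stops(500, 1000))  # -1.0: the white-tiled-bathroom case
```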
  15. If the Kodak person can't get the stock numbers right, there's no hope for the rest of us! I know that they do follow a system of sorts, but it's less than informative. 7219: 7=16mm; 2=camera negative; 19=something that isn't 18, 05, 72, or anything else. Is there anything less useful? Some car manufacturers (the ones that haven't gone for silly, meaning-free names for models) still use simple number series that tell you something. Peugeot 30x is a small family sedan; 40x is a larger family sedan; 20x is smaller. And the x simply changed with new models, so a 407 was a later model than 406. Why can't they number film stocks that way?
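Such as it is, the decoding above (7 = 16mm, 2 = camera negative) is easy to mechanise. A toy sketch, using only the mappings mentioned in the post plus 5 = 35mm as the usual sibling prefix; anything else is left as unknown rather than guessed:

```python
# Toy decoder for Kodak-style stock numbers such as 7219.
GAUGE = {"7": "16mm", "5": "35mm"}
KIND = {"2": "camera negative"}

def decode_stock(number):
    """Split a 4-digit stock number into gauge, type and emulsion series."""
    gauge = GAUGE.get(number[0], "unknown gauge")
    kind = KIND.get(number[1], "unknown type")
    return f"{gauge} {kind}, emulsion series {number[2:]}"

print(decode_stock("7219"))  # 16mm camera negative, emulsion series 19
```

Which rather proves the post's point: the only part that tells you anything is the first digit or two, and the series number carries no meaning at all.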
  16. What you are overlooking is that modern colour negative stocks already work in the way that you are proposing. There are two blue-sensitive layers, two green-sensitive, and two red-sensitive. In each pair there is a fast, coarser-grained layer and a slow, finer-grained layer. Each individual layer also contains a range of grain sizes, which is how the emulsion has such a broad useful exposure range, and also why the contrast is low. So if you want extended range, perhaps you just need to persuade Kodak to make the slow layer slower and the fast layer faster. If you are thinking in terms of beam-splitting, instead of trying to find a Technicolor camera, why not look more into the 3D projection model I suggested. Whatever it is that they put on the front of a vertical-offset 3D film projector is all you need - and the means to put it onto a camera. Sounds easier than some of the other ideas. :unsure:
  17. only indirectly . . . NZ National Film Unit was privatised some years ago and became Avalon, and at some stage the lab became just "The Film Unit". Then it went up for sale again, and Peter Jackson bought it rather than see it close down or go into foreign hands. He then built a top class sound mixing facility across town, moved the lab into the same place, added a digital facility and called it all Park Road Post. Moving a lab isn't easy, but in his spare time, he relaxed by making a film or two - or three. Did quite well. Not bad for a lab owner. That 7374 stock is probably quite old. I can't remember the last time I came across it.
  18. No - it's the other way round, Rob (I know you really know that!). Black and white neg through the ECN2 colour process would be a serious worry - there would be no image, as the process is designed to remove all silver leaving a colour dye image (and b/w neg has no dyes). Worse, there is a good possibility that the hot developer would soften the emulsion enough for it to fall off into the dev tank. But . . . . Colour neg through a black and white process will give a black and white image - of sorts. You won't get a lab to do it because their black and white processor (if they still have one) won't be able to cope with the remjet backing - quite apart from the uncertainty about time and temperature for the developer. But - given the same messy business of having to remove the insoluble remjet cleanly, and the same unknown time and temperature (or even what is the best b/w developer solution), it could be done at home in a large spiral tank. No need to worry about the three emulsion layers (in fact there are twice as many). They all rely on forming a silver image as part of the process, so you'll simply get a panchromatic b/w negative image. But there's no need to go to this trouble. Shoot colour neg, process it normally, and desaturate the colour in telecine or in DI, as Adrian suggests.
  19. Keep it simple. Use a wider angle lens, and enlarge/crop the image when you transfer. Frame up a grid before you start, and mark up your viewfinder or video monitor in some way that someone else who knows how can tell you. Expose correctly (or a tad over if you hate the "thin" neg look). Process normally. You get the right size image, right field of view, right contrast, but the grain is enlarged more than it would be normally.
  20. If I've understood the question, then preflashing is the last thing you'd want to do. You've got an original scene with a Dynamic Range of - say - ten stops. You've shot this on a digital camera. Your display device is giving you a maximum Dynamic Range of six stops (EV 4 - 10), and you can manipulate the curves of your images so that you utilise this range fully. You want to shoot the image off the display device onto black and white film and produce an image that looks as if it was the original scene shot directly onto film (or is at least intercuttable with it). Film negative has a much wider dynamic range than six stops, and it would be able to capture all of the original scene, albeit with some toe and shoulder shaping. So if you simply shot on camera negative, you would end up with a very low contrast image compared with the original. Flashing the stock does indeed lift shadows, by moving them from the toe of the curve up onto a slightly steeper part. In fact it extends the dynamic range of the emulsion. But in doing so it reduces the overall density range of the negative, and therefore reduces contrast, particularly in the shadows. It's normally used to reduce contrast. You want to increase contrast. Nothing you do to the display image will increase the range on your final negative; it is limited by the maximum brightness of the device. It's why CRT film recorders gave way to laser recorders some years ago. The high contrast stock you shoot on is the only viable method of getting the range you want in the negative. And the short answer to your question has to be: "do tests". Create a grey scale on your display device, in half-stop intervals through the 6 stop range it is capable of. Measure on your spotmeter, and correlate that against the values in the digital image. Shoot that onto your high-contrast stock at a bracket of exposures. Normal processing should be about right - as you say the stock is designed for the purpose.
Then you should be able to measure the density of each step on the negative and plot it against the original input values to see what happens at each end of the curve and the middle. You can measure the density by projecting the negative and using your spotmeter again, in the absence of a real densitometer. Basically you are looking at the process that is used to calibrate any film/digital/film i/o system, such as the DI chain of scanning, grading, and recording back to film. The principle to follow is a closed loop with unity characteristics.
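The grey-scale chart suggested above - half-stop intervals across a six-stop range - works out to 13 patches. A quick sketch of the relative exposure values (linear light, normalised to the darkest patch):

```python
# Relative exposure for a grey scale in half-stop steps over six stops.
STOPS_RANGE = 6
STEPS_PER_STOP = 2  # half-stop intervals

# Each step doubles the light every STEPS_PER_STOP entries.
steps = [2 ** (k / STEPS_PER_STOP)
         for k in range(STOPS_RANGE * STEPS_PER_STOP + 1)]

print(len(steps))             # 13 patches
print(steps[-1] / steps[0])   # 64.0: six stops = 2^6 brightness ratio
```

Plotting the measured negative densities against the log of these values gives the characteristic curve the post describes, toe and shoulder included.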
  21. That's a great compromise solution, and in a way it fits with human vision in that low light levels are seen with less saturation anyway (scotopic vision). And it gets around the colour filter layer problem and the remjet problem. But it is combining two different emulsions with different curves, and it doesn't really extend the range of colour information that's recorded. I think the two-perf solution is the way to go. Have you looked at the vertical offset 3D projection system, used briefly in the '80s and reintroduced by Technicolor last year? It's a 3D projection system in which the left-eye and right-eye image pairs are each a 2-perf image, one above the other. The projector pulls down in the normal 4-perf mode. The magic is in the beam-combining optic on the front which pushes the two images (polarised) out through a single lens, but offset so they are superimposed. You simply need the exact reverse of this (or really, the same thing with the light going the other way), and instead of polarising filters, you need the heavy ND in one light path, behind the beam-splitter, in front of one half of the gate. As an aside (since we've buried the flipped-over colour emulsion idea): even if it was going to work, and even if Kodak would consider reversing the order of emulsion layers in one coating run (I doubt if even James Cameron could get this), the tonal detail would be all to pot: each emulsion layer is manufactured to develop at a certain speed, given that the developer takes significantly longer to permeate to the lower layers, and is turbulated less when it gets there. Bringing the red-sensitive layer up to the top would be like bringing bottom-feeders up from the ocean floor to feed in the oxygen-rich, bright-light surface layer, and sending the surface fish down to the Deep. They wouldn't like it at all.
  22. . . . in a brilliant arc, John brings the discussion back to the original topic of negative cutting. I wouldn't recommend a Swiss Army knife for cutting your neg, and I'm still waiting to see the filmmaker's version (with built-in register pins and a little brush for applying the cement). But I would agree that the Hammann cleaver (yes, German) is the best tool for the job.
  23. I think that is the point. In the message, Simon rounded up a number of people who happened to disagree with his view, noticed that some of them had English-sounding names and/or lived in English-speaking countries, and thereby identified them as a group in some confusing way, compounding this by siding with the original poster, using German, on the strength of his name sounding Germanic. It's probably more or less harmless in this forum, as most of the participants in this thread are familiar with each other - at least through the internet. But really this is exactly typical of the start of a long and slippery slope towards much more dangerous behaviour in other arenas. I too enjoy Simon's posts on this forum (maybe because I so often find something to disagree with; we must be similar in some awful way :blink: ). My comment wasn't about Simon personally, but about the message. Finally, in the interests of accuracy, I believe Simon is Swiss, so in fact no-one fought against him or his forefathers.
  24. Kodak used to have some useful publications, but they aren't as forthcoming on the details of film manufacture as you might want. After all, it's their business - why give it away? The texts Karl pointed to are good for the underlying theory. Or you could start with http://www.enotes.com/how-products-encyclo...otographic-film Or for fun,
  25. Most of the articles and stories I find on the internet make that (reasonable) assertion. But I met with Bernard Happe a couple of times in the early 1980s (he was gracious enough to provide some comments and suggestions about the book I was writing), and he was quite emphatic that although Technicolor London had a lab full of recently decommissioned kit, the Chinese Government insisted that Technicolor built them new equipment.