
John E Clark


Posts posted by John E Clark

  1.  

    Ah, yes, the ever-present backlight :-)

     

    Night driving scenes are often lit so that the characters' faces are visible, but in reality you'd barely be able to see them, if at all. Most dashboard lights aren't THAT bright.

     

    Depending on the style of the film you can get away with fudging the lighting motivation if it lends the images a certain mood.

     

    Look at this shot from Bigger than Life:

     

    biggerthanlife1.jpg

     

    I doubt anyone's living room in real life would ever be lit to create such hard shadows from a low angle. But it works because it heightens the mood of the scene and projects the inner monster of James Mason's character.

     

     

    This film is from 1956, when a completely different set of aesthetics was in operation. Ten to fifteen years later, the 'explosion' of location shooting pretty much ended this sort of stylized studio setup.

     

    Here is a shot from "A Serious Man" (2009) of a 'family interior setting'... There may be some cheats involved, but most of the lighting looks to be motivated by sources one encounters in everyday experience. (In addition to the story being quite different from the above example...)

     

     

    a-serious-family.jpg

     

    Even in a more noir setting... still motivated...

     

    vlcsnap-2012-04-07-15h04m05s243.png

  2. Hi guys,

     

    while taking photos of scenery illuminated with an HMI lamp (magnetic ballast) I noticed the following:

     

    #1

    each picture has a different brightness; kind of expected, since the camera shutter is not synchronized with the mains frequency (50 Hz here, i.e. 100 Hz light pulses)

     

    #2

    Every picture has a different color tint to it. The effect is very obvious, color casts vary between white, greenish/yellowish, blueish and magenta/pink.

    ....

     

    Followup Question: why doesn't this have an effect on color uniformity with digital (non global shutter) or film cameras with mirror shutter? After all, different parts of the image are exposed at different times, right? Or does it have an effect sometimes?

     

    Kind regards,

    Marc

     

    This may be significant for your #2 question.

     

    from http://www.lovehighspeed.com/lighting-for-high-speed/

    ---

    HMI and fluorescent lights are generally fine for speeds under 100fps as long as they use electronic ballasts and are set to flicker free. Although HMI lights do not suffer from the flicker which effects tungsten, HMI’s can suffer from “Arc Wander,” whereby a plasmatic “hot spot” moves within the bulb, causing an amorphous shifting movement in the light output. The most common side effect to this is a rapid colour shift in a shimmering effect. No HMI light with a normal electronic ballast can be guaranteed against some form of flicker, no matter how big the lamp is.

    ---
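    For the frame-to-frame brightness variation in #1 (though not the colour shifts in #2), the effect of an unsynchronized shutter can be sketched numerically. This is my own toy model, not something from the posts above: the lamp output is idealized as a full-wave rectified 50 Hz sine (100 Hz pulses), and a frame's exposure is the light integrated over the shutter window at whatever phase the shutter happens to open.

```python
import math

MAINS_HZ = 50.0                        # European mains; pulses at 2x = 100 Hz
PULSE_PERIOD_S = 1.0 / (2 * MAINS_HZ)  # 10 ms between light pulses

def intensity(t):
    """Idealized magnetic-ballast output: full-wave rectified sine."""
    return abs(math.sin(2 * math.pi * MAINS_HZ * t))

def exposure(shutter_s, phase_s, steps=2000):
    """Light accumulated over one shutter opening (Riemann sum)."""
    dt = shutter_s / steps
    return sum(intensity(phase_s + i * dt) for i in range(steps)) * dt

def flicker_ratio(shutter_s, n_phases=100):
    """Brightest/darkest possible frame over all shutter start phases."""
    exps = [exposure(shutter_s, k * PULSE_PERIOD_S / n_phases)
            for k in range(n_phases)]
    return max(exps) / min(exps)
```

    Under these assumptions a 1/250 s shutter can vary by roughly 3:1 from frame to frame, while a 1/100 s shutter (exactly one pulse) is essentially flicker-free, which is why the 'safe' shutter speeds are whole multiples of the half-mains period.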

  3. Yeah, I've seen those Kino Flo CFLs! Are they small enough for standard household lamps (and not so tall as to stick out the top)? That hair dye tip is great; can they be sprayed with the dye? I'll use that for sure! How long do the photofloods last?

     

    Any tips on candlelit scenes? Or other combustion based/enhancing lighting

     

    In my experience with still photography and 'daylight' bulbs, their life was 'pretty short'. Not hours short, but definitely far shorter than my expectation...

     

    On candle light and shooting, since you say you are using an Alexa...

     

    Here's an American Cinematographer article on the shooting of the film "Anonymous"(2011), where the values of ISO 800, 1280 and f/2.8 are mentioned as frequent shooting settings.

     

     

    http://www.theasc.com/ac_magazine/September2011/Anonymous/page1.php

  4. With respect, Dirk, your post is not completely clear, particularly if one doesn't have a copy of 'The Negative' to refer to.

     

    In any case, I think that it is always worth pointing out to students and beginners that Adam's Zone System for exposure was just one part of his system, and although it is very useful with motion picture work, it shouldn't be applied without an understanding of what it was originally designed for.

     

    Yes, looking at the negative was only half of the process... the other half was looking at the print materials and finding their characteristics.

     

    While most of my cohorts would take densitometer readings from their negatives, most, myself included, would not take density readings from the paper. Instead we would determine the paper's 'dynamic range', as it would be called today (usually about 7 to 8 stops from 'black' to 'white' for a 'normal contrast' paper), by a more ad hoc method: exposing a sheet of paper to varying amounts of light...
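    That ad hoc paper probe can be sketched in code. Everything here is invented for illustration (the response curve, its gamma, and the density thresholds are not measured data); the point is only that stepping the exposure up in fractional stops and thresholding the resulting density brackets a usable range on the order of 7-8 stops.

```python
import math

def paper_density(log_exposure, gamma=2.3, dmax=2.1):
    """Toy S-shaped print-paper response: density vs log10 exposure.
    Shape and numbers are made up purely for illustration."""
    return dmax / (1 + math.exp(-gamma * log_exposure))

def stops_black_to_white(d_white=0.1, d_black=1.9, step_stops=0.25):
    """Walk exposure up in fractional stops; note where the paper first
    leaves 'paper white' and where it approaches 'max black'."""
    step = step_stops * math.log10(2)   # stops -> log10 exposure units
    first = last = None
    log_e = -6.0
    while log_e < 6.0:
        d = paper_density(log_e)
        if first is None and d > d_white:
            first = log_e               # first visible tone above white
        if d < d_black:
            last = log_e                # last tone short of max black
        log_e += step
    return (last - first) / math.log10(2)
```

    With these made-up numbers the probe lands in the low 7-stop range, the same ballpark as above; the darkroom version of this is simply a sheet of paper exposed through a step wedge.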

     

    The whole process was directed to 'seeing' the final print, given the real scene at the time of shooting.

     

    What was important was knowing the 'negative' and what could be done with it, and correspondingly, what the final output was and what could be done with it.

     

    These days, I am going to hazard that the final output for most people is some form of computer display via some sort of internet service (web page, streaming, etc.). Few people will actually see their works on a Real Live Motion Picture Screen. Sure, some few may, but even at many 'fests' the lesser lights either get a crappy projector on a 'big screen' or get shunted off to an 'auxiliary room' with a 'tv' of dubious quality and calibration.

  5. Unfortunately our edit PC crashed during rendering, causing us to be disqualified because we missed the deadline by 10 minutes... (I still can't believe it; it sucks so bad.) But nevertheless I am really happy with my results, since this was my first time as a d.o.p. in a crew larger than 3 persons.

     

    This happened on the first 48-Hour project I participated in. My role was 'data wrangler', and the takeaway 'learning' was this: the group had about 30 participants... but most of those were only 'writers'. Even so, the 'story' and 'script' that were developed didn't really measure up to having all that 'writing talent' available...

     

    The production 'crew' consisted of a number of shooters, and many of the lighting setups were pretty 'complex'.

     

    The net result was that shooting scenes was rather slow. As data wrangler, just getting all the shooters to put a clapper mark on their footage was a problem. There was someone performing script supervision duty; she and I did try to come up with a scheme to deal with the footage, make records, etc...

     

    I handed off the footage and the documentation to the editors. They then spent several hours trying to use some software to 'align' the sound recordings with the camera audio... I then aligned most of the shots that had marks with their corresponding audio in about half an hour. That still left about 15 shots to be manually aligned to 'gosh, I think that's where the actors said that'... needless to say, I was unimpressed with the 'auto tools'.

     

    And... drum roll... the project was 15 minutes late, despite having all the footage shot by Saturday evening, and ready for editing by about 1 AM Sunday morning.

     

    So, what did I learn... small group is good. Mark every shot even if the shot will eventually be 'no sound'. Keep the 'story' simple enough to shoot in one day, with one camera, with simple lighting setups...

     

    I think the group leader learned that lesson as well, because the next year his core group was around 5 people, and their film only used 3-4 actors and 2 or 3 locations. (I didn't participate...).

     

    Your shots look pretty good for having asked about 'how does one shoot noir' on Friday.

     

    Now you have time to review, I'd suggest looking into using Premiere's 'Channel Mixer', or the equivalent in your favorite NLE, to create your monochrome images.

     

    For the desk shot, one 'trick' is to have white paper on the surface which bounces a bit of light back up at the actors.

  6. This is good advice, with one important proviso. Adams did expose for the shadows, but controlled his highlights through Push/Pull processing. This was possible and practical because he was shooting single frames in Large format. One of the reasons he resisted shooting roll film was that he was unable to process each frame individually.

     

    Although Push/Pull processing is possible with motion picture film, it's not a practical way of controlling highlights as whatever changes you make to the processing time affect the entire roll.

     

    Ansel Adams did use and apply his 'zone system' to Polaroid materials, and while one could 'adjust' the developing time, it was definitely more of a 'test to find the limits' approach.

     

    Not unlike what one needs to do with any digital camera, especially DSLRs. When I first began using DSLRs for stills, in the late 1990s, it occurred to me and others, in some sort of group epiphany, that digital cameras behaved more like 'slide film' than 'negative film'.

     

    I'm pretty much convinced that, for the most part, DSLRs have maintained that 'goal' of the 'ready to view' imagery that slide film was intended to produce.

     

    In that sense then, the method that Adams used to evaluate and calibrate to Polaroid materials is probably more in keeping with what one would do to determine one's own operating procedures for digital imaging, at least in the DSLR category.

     

    In the higher-end cameras there is perhaps useful 'dynamic range/latitude' in the higher 'densities', but it is still more a method of discovering the limits that yield the best images.

     

    I will also note that Ansel Adams was predominantly a 'Black & White' photographer and never really made 'color' a significant part of his work, even though later in his life he did work with color materials...

     

    I've often mused that most people who shoot DSLRs would do better to shoot for conversion to monochrome rather than worry about the myriad problems with 'color', whether the camera's 'color science' or the lights and their spectra...

  7. Boy... if I had had this sort of windfall when I was young... I would have bought that Beaulieu with the electric motor option, and shot the hell out of Film film...

     

    But alas...

     

    I have no idea why 'anamorphic' is even in the discussion... lens choice is really a 'story choice', and for some directors, it seems to be an 'aesthetic' choice.

     

    It would not be my choice ever... even if I somehow got the budget to do the "Penultimate Remake of Beau Geste"... or some such 'epic' type film.

     

    Things that happen with anamorphic: straight lines in the scene end up bowed, depending; focus falls off at the edges of the frame, thus requiring a smaller f-stop than the equivalent spherical lens, which means more light (or a higher ISO, until noise intrudes...), or, alternatively, requires the actors to be very critically placed and the follow focus precise.

     

    Things which are obvious are the 'bloom' of the highlights... and I almost never look at 'bokeh' at all... unless I'm paying so much attention to the technique that I miss the story completely.

     

    These days... If I were to have such a windfall as to be able to afford buying an Alexa outright... I'd probably go with one of the under 10K Canons, look at the Black Magic offerings, and... drum roll... think about buying some fairly good production sound equipment, and lighting equipment, and a used van to carry all this stuff around...

     

    But that's me...

  8. Yes - I did read it and did not get my answer. It seems to talk about the film process...

     

    From the wiki:

    ---

    Anamorphic format refers to the cinematography technique of shooting a widescreen picture on standard 35 mm film or other visual recording media with a non-widescreen native aspect ratio. It also refers to the projection format in which a distorted image is "stretched" by an anamorphic projection lens to recreate the original aspect ratio on the viewing screen.

    ---

     

    What more 'questions' are there, since the above does not go into any detailed 'film' process description, but just describes what the 'lens' does to record the wide-screen field of view onto 'standard 35 mm film or other visual recording media...'.

     

    I think I've seen this sort of 'thread sequence' before...

     

    Take your new camera, rent some spherical prime lenses, say a selection of lenses 25-50mm, then rent another selection that are 'anamorphic'... and see what happens.

     

    If you can afford an Alexa with not much concern for 'pay back', surely you have sufficient free time to perform various tests with a large set of lenses.

  9.  

    So is it possible to use a spherical wide-angle lens and effectively emulate the same vertical dimension as an anamorphic? Considering anamorphic doesn't increase vertical field of view, does it?

     

    Basically... would it be possible to get a spherical wide-angle lens and emulate an anamorphic look without cropping the top or bottom (since anamorphic lenses do not crop the vertical)... I'm assuming anamorphic lenses capture the same vertical frame as a spherical wide-angle lens?

     

     

    Part of the 'anamorphic look' is the distortion that is created by the lens. If you want that 'look' use that lens.

     

    Of course there are spherical lenses that have 'wide' angle of views... pick your favorite angle of view, know your sensor size, and find the appropriate lens to achieve that view.

     

    Further, one 'form' of anamorphic involves a spherical lens with an anamorphic 'front element'.

     

    Since I really don't particularly care for the 'look' or even the 'wide' (2.40, or whatever...), I haven't investigated all the potential options.

     

    But given the popularity of the type of lens... there is a solution out there.
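    One way to see the vertical-field question quantitatively: the vertical angle of view depends only on focal length and sensor height, so a 2x anamorphic behaves vertically like a spherical lens of the same focal length, while horizontally it behaves like one of half that focal length. A quick sketch, with a hypothetical 24 x 13.5 mm sensor (my numbers, purely for illustration):

```python
import math

def fov_deg(sensor_mm, focal_mm, squeeze=1.0):
    """Angle of view along one axis; squeeze > 1 widens that axis only."""
    return math.degrees(2 * math.atan(sensor_mm * squeeze / (2 * focal_mm)))

W, H = 24.0, 13.5   # hypothetical sensor dimensions, mm

h_spherical_50 = fov_deg(W, 50)                # horizontal, spherical 50 mm
h_anamorphic_50 = fov_deg(W, 50, squeeze=2.0)  # horizontal, 2x anamorphic 50 mm
h_spherical_25 = fov_deg(W, 25)                # matches the anamorphic width

v_50 = fov_deg(H, 50)   # vertical: identical for spherical and anamorphic 50 mm
v_25 = fov_deg(H, 25)   # ...but the spherical 25 mm nearly doubles it
```

    So a spherical 25 mm matches the 2x anamorphic 50 mm horizontally, but its vertical field is roughly twice as tall, which is why matching the anamorphic frame with spherical glass means cropping top and bottom.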

  10. I forgot to also note that a LUT operation can be implemented in hardware with no processor involvement at all...

     

    So the pixel data can pass through a chain of LUTs, each one transforming the value according to some 'rule', without ever being processed by the camera's microprocessor or DSP, again depending on the hardware implementation.

  11.  

    Then you have to account for the fact that an Arri Alexa can do 240fps with no compromise other than a shorter exposure time (i.e. no windowing like Red does), which requires a HUGE amount of bandwidth (one reason that Red uses it). Then on top of that, account for the fact that an Alexa allows you to build a color LUT on camera, which is a pretty processor intensive task... and it also has to be quiet enough to let you record sound on set while filming... it adds up!

     

    I'd agree with some of what you have written, but the use of a LUT does not add 'all that much'. There are several 'LUTs' being used in DSLRs. One is in 'white balance'. Another is most likely in the 'shaping' of the sensor's 'raw' data into the various 'profiles' the camera may have.

     

    The problem with DSLRs is that often there is no way for a user to create a custom LUT; at best there may be 'tweaking' of existing LUTs, etc.

     

    Here's the pseudo code for how raw data is transformed to output data via a LUT...

     

    output = LUT[ raw ];

     

    Here's the pseudo code for scaling and offsetting via multiplies and adds...

     

    output = raw*scale + offset;

     

    The examples are simplified from R, G, B... per-channel processing would only mean something like:

     

    output_red = LUT[ raw_red ];

    output_green = LUT[ raw_green ];

    output_blue = LUT[ raw_blue ];

     

    Multiplies and adds are usually more processor intensive... unless the processor is in fact doing a lookup to produce the multiply or add results... not usually feasible...

     

    Further, the LUT transformation requires no more clock cycles than addressing the LUT element; multiplies usually require several clock cycles.
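    To make the pseudo code concrete, here is a minimal Python sketch of the two approaches (my own illustration; the 10-bit input depth and the 2.2 gamma are assumptions, not figures from any camera). The LUT is computed once up front; after that, each pixel costs only one indexed read, versus a multiply and an add for the arithmetic version.

```python
# Build a 10-bit -> 8-bit gamma LUT once; per-pixel work is then one lookup.
GAMMA = 1 / 2.2
LUT = [round(255 * (v / 1023) ** GAMMA) for v in range(1024)]

def apply_lut(raw_pixels):
    """Per-pixel transform: a single table lookup, no arithmetic at all."""
    return [LUT[v] for v in raw_pixels]

def apply_scale_offset(raw_pixels, scale=0.5, offset=10):
    """The multiply-and-add alternative, for comparison."""
    return [int(v * scale + offset) for v in raw_pixels]
```

    In hardware the same table can sit in a small ROM/RAM addressed directly by the pixel value, which is the 'no processor involvement' case described above.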

  12. Hi there,

     

    This weekend I am participating in a 48 hour film competition, where my team and I have to create a short film around 5 minutes within 48 hours.

     

    Now the genre we got was Film Noir. Awesome, right!? I am really excited to film this tomorrow.

     

    I am the cinematographer on the project, and since I have little fiction experience and no experience at all with film noir, I was wondering if I could get some lighting advice.

     

    We are working without a budget. I am filming on a Canon 60D and we have 3 350 watt redheads and 2 dimmers.

     

    I really want to make the most of this no-budget film, and I think we have enough tools to achieve some great images.

     

    I am doing a lot of research online on classical film noir styles, and I notice that the lighting is often very harsh, with strong backlighting and really dim fill lights.

     

    A little 'late' for studying up on this...

     

    Here is a classic on Hollywood lighting of the 1940's

    http://www.amazon.com/Painting-With-Light-John-Alton/dp/0520275845

     

    Apparently there's a kindle version available... for those who need 'instant' information...

     

    But an element of 'noir' which seems to be often forgotten is that the subjects of 'noir' were themselves considered 'dark'.

     

    The 'harsh' lighting reinforces this 'story' concept.

  13. Recently DSLRs got that Magic Lantern RAW capability (really crappy post-flow), but I definitely think the ProRes encoders built into the Arri with ARRIRAW offstorage support are super fantastic. And as you mentioned, I do notice the ARRI ALEXA has this serious film-like... (whatever this means; I kind of don't have a clue) quality... it's softer... just looks like... a movie? Not sure if you can understand what I mean... I guess that has to do with their proprietary lenses... probably it costs more to make 'filmic' lenses than 'dslr' lenses... thoughts, sir?

     

    There are, in my mind, 2 essential features of 'high end' vs 'consumer' cameras, and 1 philosophical issue. The philosophical issue is 'who is the market, and what are they willing to do to the image before it is presented?'. DSLR cameras are made for people who want to get the image to presentation as quickly as possible, and so the output of the camera is directed to that goal. Even in stills, most people shoot JPEG, some may alternate between 'raw' and JPEG... and a limited few may shoot exclusively 'raw'.

     

    The high data rate has some engineering costs. If a 2Kx1K (near enough to HD...) sensor is 'read out' at 30 bits per cell (taking RGB as a 'unit', 10 bits per channel), then 1024x2048x30 bits yields a roughly 60 Mbit 'chunk'. But for a 1/48 shutter speed one needs to read out those bits in about 0.021 seconds, which is a roughly 3 Gbit/s stream.
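    The arithmetic in that paragraph, spelled out with the same figures (the per-path split uses the 32 readout circuits mentioned below; the division is my own back-of-the-envelope, not an ARRI specification):

```python
# Back-of-the-envelope sensor readout bandwidth.
width, height = 2048, 1024    # "2K x 1K", near enough to HD
bits_per_cell = 30            # RGB taken as a unit, 10 bits per channel
readout_s = 1 / 48            # read a full frame within a 1/48 s shutter

frame_bits = width * height * bits_per_cell     # ~63 Mbit per frame
rate_gbit_s = frame_bits / readout_s / 1e9      # ~3 Gbit/s off the sensor

# Split across 32 parallel readout paths, each path only has to sustain
# something under 100 Mbit/s, a far easier circuit to build.
per_path_mbit_s = frame_bits / readout_s / 32 / 1e6
```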

     

    There are some lossless compression techniques to reduce this data rate, but even so, the amount of electronics needed to support such a rate requires 'more powerful circuits', along with some sort of cooling...

     

    This is why ARRI uses some 32 'readout' circuits, so that each path carries less than this full data rate. From their tech brief, they also have 'high gain' and 'low gain' amplifiers on each path, and then, through the magic of DSP, combine the two into a resulting 'high dynamic range' value for each pixel read.

     

    Even after all is said and done, the data rate is still fairly high and beyond the usual SD card or similar recording device. The camera manufacturers who cater to the high-end market have recording devices that are very expensive but 'do the job' at these rates.

     

    Most DSLR shooters are not going to spend hundreds of dollars on a single-item recording device...

     

    The other option is high-speed cables to an external recording device, via, say, SDI interfaces... again, not a typical DSLR interface. The closest one gets is the HDMI output, which is not really a 'professional' connection. To be sure, people use it, and many DSLR shooters are satisfied with the results, but in terms of 'industrial' design it is pretty crappy... from the flimsy contacts to the lack of effective strain relief when connecting to the camera or recording equipment.

     

    The other issue is sensor yield. Depending on the specs for sensors, ARRI may not have a 'great' yield for their sensors because of the close tolerances they set on quality. Whereas a DSLR manufacturer may 'accept' sensors that have a larger variation of response across the sensor, the higher end camera manufacturers would reject such a device.

     

    My 'advice' would be rent the ARRI, buy a reasonable DSLR for 'learning/experimentation'.

     

    If one were to spend the 80K just to get onto the ARRI bus... I'd strongly suggest setting up a rental company to rent it out and recoup the cost...

  14. Just as an odd thought, what about a bright light behind venetian blinds... old tech, but it works. In the case of still flashes, the light pulse durations are typically 1/250 s down to 'very fast'... as in the Edgerton-type bullet-through-the-apple captures... and so are not long enough to 'fill' one frame... and will most likely display any form of 'electronic shutter artifact' in its worst form.

  15. Jürgen Jürges was the cinematographer for the 1997 German version of "Funny Games".

     

    Here's a URL for a German cinematographers' organization, where there is an entry for Jürges:

     

    http://www.kinematografie.org/cameraguide/detail.php?id=142

     

    I don't know if he would 'answer up' to an email from this site, but that seems to be about the only way you could get 'detailed' info on the film from the cinematography point of view.

     

    Another German cinematographer, who has made a number of German films as well as a few Hollywood productions with Tom Tykwer, has written books on cinematography, but one needs to read German to be able to understand them...

     

    On the other hand, if you were thinking of the 2007 'English language' version... the cinematographer was Darius Khondji.

     

    There may be some American Cinematographer articles, but perhaps not.

  16.  

    Thanks for the reply. This helps me a bit, conceptually. How does this apply when not shooting unevenly lit brick walls? Film?

     

    Eyelashes with mascara usually form some sort of 'contrast' which gives the impression of 'sharp' focus... or the lack thereof...

     

    In some situations the sharpness of the system is degraded. Haze/soft focus filters, etc... or in the case of digital, 'anti-aliasing' filters to avoid jagged lines on image detail.

  17. Why do the comparisons always come back to resolution? I have yet to see any professional colorist say they prefer grading digital footage than film. When 95% of biologists accept evolution, people accept it. If 100% of colorists seem to favor film as a medium, what does that tell you?

     

    The comparison is specious. I would sort of wonder about such an assertion at this point in time, when most if not all major Hollywood productions go through a DI these days; hence the 'colorist' is working with a digital image.

     

    It may be that back in the olden days, 100% of the then-Hollywood 'colorists' (were there 'colorists' as they are known today, back when one had no real method to 'change' things except 'timing', or more exotic processes like 'preflashing' or, more recently, 'bleach bypass', etc.?)... anyway, it may have been that 100% of those working at such things preferred some ASA 50 film to push-processing Double-X to eke out a 400 ASA rating... with the attendant increase in grain...

     

    I personally wouldn't mind a 4x5 sized movie film negative... but the economics of the situation do not allow such...

     

    The question then is what level of image parameters produce an acceptable image quality.

     

    I think digital for the 'high end cameras' is approaching the 'resolution' parameter of 'film', if not exceeding it.

     

    I think the next 'big thing' is the issue of bit depth and resulting dynamic range, especially in the 'low end'... the brackish water I swim in of DSLR filmmaking...

  18. I think people confuse optimal scanning resolution with measurable detail, hence some people saying that 35mm film is 6K or 8K just because there may be some anti-aliasing benefits to scanning at that high a level of resolution. But I can't find any published line resolution chart that shows a piece of 35mm movie color negative film, 24mm wide, resolving 4K or higher... you would think by now that someone would show that.

     

    A look at the Kodak data sheets for 'popular' movie films, such as the 500T 5219, gives 50% MTF ratings for R, G, and B of 35, 50, and 70 cycles/mm respectively.

     

    Unfortunately, many digital sensors don't have published MTF ratings... but if one just took the simplistic 'pixel resolution', one would have potentially higher MTFs than film...
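    One way to put numbers on this: a scan of K pixels across a roughly 24 mm wide frame can represent at most K/(2 x 24) cycles/mm, the Nyquist limit. A quick sketch, assuming the 24 mm gate width used earlier in the thread (this is only a sampling-limit comparison, not a measured MTF):

```python
GATE_WIDTH_MM = 24.0   # approximate image width on the negative

def nyquist_cycles_per_mm(pixels_across):
    """Highest spatial frequency a scan of this pixel count can represent."""
    return pixels_across / (2 * GATE_WIDTH_MM)

# 2K, 4K, and 6K scans vs the stock's 50%-MTF figures (35-70 cycles/mm):
for k in (2048, 4096, 6144):
    print(k, round(nyquist_cycles_per_mm(k), 1))
```

    A 2K scan tops out near 43 cycles/mm, below the green and blue 50% MTF figures quoted above, while a 4K scan (about 85 cycles/mm) comfortably brackets all three, which is one argument for scanning above the film's nominal resolution.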

     

    However, the lens figures into the resulting image MTF, as does sensor quality... and, as experience shows... can swamp any high-resolution capture medium...

     

    One item that is not mentioned much is that the grain of film acts as a 'randomizer', and as such 'fuzzes' edges, which reduces jaggies, leading to a different appreciation of the resulting image. On the other hand, it is also known that if the 'grain' is sharp, in some situations the resulting image is perceived by humans to be 'sharper' than one where the grain is 'soft', even if the latter has a higher MTF.

  19. Can anyone recommend any resources, either online or in a book, which serve as a good refresher / tutorial for light metering for cinematography (I'll be shooting S16 film), particularly spot metering?

     

    For me, when using Film film, there are several considerations. However, I've only shot still Film film, so perhaps some of the 'what will I do with this shot in negative development and printing' thinking may not be available to the movie film user.

     

    However, there are a couple of 'fixed points'... while one does not often have a grey card available, I will take a meter reading off the forehead of the subject... if human... and if the person is fair-skinned, open up 1 stop from the meter reading; for a 'mediterranean' complexion, leave as is; for a dark complexion, close down 1 stop.

     

    For non-human subjects, green foliage in shadow often evaluates to 'middle grey', whereas green foliage in sun may again be 1 stop brighter.

     

    White buildings are perhaps 1.5-2 stops brighter than middle grey, 'grey' buildings 1 stop... etc.
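    The placement rules above can be collected into a small lookup. The category names, the white-building midpoint value, and the sign convention are my own framing of the post's heuristics, not calibrated data:

```python
# Stops the subject sits above (+) or below (-) middle grey; a spot
# reading off the subject is corrected by opening up that many stops.
STOP_OFFSET = {
    "fair skin": +1.0,
    "mediterranean skin": 0.0,
    "dark skin": -1.0,
    "foliage in shade": 0.0,
    "foliage in sun": +1.0,
    "white building": +1.75,   # midpoint of the 1.5-2 stop range
    "grey building": +1.0,
}

def corrected_exposure(meter_ev, subject):
    """Correct a spot-meter EV so the subject renders at its real tone.

    A subject N stops brighter than middle grey needs N stops more
    exposure than the meter suggests; more exposure = lower EV."""
    return meter_ev - STOP_OFFSET[subject]
```

    For example, a spot reading of EV 10 off a fair-skinned forehead would be shot at EV 9 (one stop more exposure), while the same reading off a dark complexion would be shot at EV 11.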

     

    One can, on a reasonably 'sunny' day, take a meter reading from a local 'grey' card, assume that the light falling on distant subjects is 'the same', and expose accordingly... for cloudy or partially cloudy days... that doesn't work so well...

     

     

     

    For still film there was the 'convention' of exposing for the shadows and developing for the highlights. With that in mind, one may 'place' the shadows at 2 stops under the spot-meter reading, read what the 'high values' are, and adjust the development time by the equivalent of 1-2 stops to prevent the highlight values from becoming too dense.

     

    I don't know whether, with modern movie Film film, this is an appropriate convention. Perhaps with digital scanning of the original negative, curves can be used to 'adjust' the values to match the desired output device.

  20. Here's a more modern use of candles, candle light, and camera than Kubrick's "Barry Lyndon" (1975, shot on 100T 5254 film)...

     

     

    http://www.theasc.com/ac_magazine/September2011/Anonymous/page1.php

     

    From the article:

    Anna Foerster was the cinematographer.

    ---

    Foerster shot all night scenes at T2.8 and 1,280 ASA. Day interiors mixed in a healthy amount of Vermeer’s “north-facing light,” and in those situations, she set her stop between T4.5 and T5.6 at 800 ASA. “We used candlelight and fireplace light even during day scenes because we were assuming gloomy English days,” she says. “We couldn’t dim the candles, of course, so we had to bring up the daylight level and compensate for the exposure. Otherwise, the fire would have overpowered the daylight or had the same value.”
