
Mei Lewis


Posts posted by Mei Lewis

  1. Best option is to use a studio with a built-in cyc/infinity backdrop.

     

    If not that then David's suggestion of a white paper roll is easily the best option, especially if you need a full length shot.

     

    Lighting any kind of sheet, from the front or the back, so that it's even, with no visible seams but not overexposed to the point of causing contrast-reducing flare, is very tricky.

     

    The smooth, flat, matt surface of paper is much easier to light evenly.

  2. I really think there's something in this.



    I don't think cinematographers are using waveforms to judge composition, or even very much to judge lighting, other than exposure levels.


    But by 'measuring' the images with a waveform monitor, some of their structure makes itself apparent.

    (That's true if we're looking at images from the camera or final shots as they appear in the film).



    The waveform is showing something about the geometry of an image, and often the images people like from a cinematography standpoint are very clear and simple: the viewer knows exactly how to interpret what's going on.


    Look at this list:


    (I'm not saying I agree with the list, but they are the sort of shots many people like).


    Most of them are very simple and could be sketched without too many pencil lines.
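
    For anyone who hasn't looked at one closely, a waveform is basically a plot of signal level against horizontal position in the frame. A minimal sketch of that idea (numpy/matplotlib, greyscale only, with a random array standing in for a real frame):

        import numpy as np
        import matplotlib.pyplot as plt

        # Stand-in for a luma image with values 0-1 (a real frame would go here)
        image = np.random.rand(1080, 1920)

        # A waveform monitor is essentially a scatter of pixel level (vertical axis)
        # against horizontal position in the frame (horizontal axis)
        xs = np.tile(np.arange(image.shape[1]), image.shape[0])
        levels = image.ravel()

        plt.hexbin(xs, levels, gridsize=200, cmap="gray")
        plt.xlabel("horizontal position in frame")
        plt.ylabel("signal level")
        plt.show()

    Simple, graphic compositions tend to show up as a few clean bands on a plot like this, which is the kind of 'structure' I mean.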

  3. Why are you showing "Low-con encoded video", "<<ITU709 video>>" and "ITU709 display" as having the same range?

     

    Aren't "Low-con encoded video" and "<<ITU709 video>>" just data, which doesn't have a range? (it's inside a computer).

  4. Keep in mind that it is not really accurate to talk about number of "stops" that can be displayed -- after all, if you look at a log signal on a Rec.709 monitor from an Alexa, you can see all 14+ stops displayed. The reason why you generally only see 10 to 11 stops in Rec.709 is because if we want something that looks like it has "normal" contrast with good blacks and whites, that's about how many stops of information we can display. If images were all strictly linear, then maybe we could just talk about stops of a display but images often aren't strictly linear.

     

     

    I think you _can_ talk about the number of stops a display has; it's possible to measure it as the difference between its black and white levels using a light meter. But this range doesn't really have much to do with the range of the camera or the original scene, and I think that's where Jan is having difficulty.
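
    A rough sketch of that measurement (the luminance figures below are made-up examples, not readings from any particular display):

        import math

        # Hypothetical metered luminances of a display, in cd/m^2 (nits)
        white_level = 100.0   # full white patch
        black_level = 0.1     # full black patch

        # Dynamic range in stops is the base-2 log of the contrast ratio
        stops = math.log2(white_level / black_level)
        print(f"{white_level / black_level:.0f}:1 contrast = {stops:.1f} stops")
        # -> 1000:1 contrast = 10.0 stops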

  5. This is my current understanding of what's going on here.

     

    When an image is displayed, white in the image data is usually mapped to white on the display, and black to black, so CU=DU and CL=DL, but that doesn't have to be the case.

    If CU is mapped to some value higher than DU then the whites clip. If CL is mapped to somewhere lower than DL then the blacks clip.

    [attached image]
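
    A toy version of the mapping I'm describing (made-up code values; CU/CL here are the upper/lower limits of the image data and DU/DL those of the display, as above):

        def show(code_value, cl, cu, target_low, target_high, dl, du):
            """Map image data in [cl, cu] onto [target_low, target_high],
            then clip to what the display can actually reproduce, [dl, du]."""
            out = target_low + (code_value - cl) * (target_high - target_low) / (cu - cl)
            return max(dl, min(du, out))

        # CU mapped exactly to DU and CL to DL: nothing clips
        print(show(200, cl=0, cu=255, target_low=0, target_high=255, dl=0, du=255))   # 200.0

        # CU mapped to a value higher than DU: the brightest data clips to display white
        print(show(240, cl=0, cu=255, target_low=0, target_high=300, dl=0, du=255))   # 255 (clipped)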

  6. I agree with John, I definitely recommend shooting color and doing your own conversion.

    It gives you much more information to work with, and you'll probably find you'll want to do a slightly different conversion for each shot.

     

     

     

    There is one 'reason' for setting the camera to record in B&W while learning to see in shades of grey, which is that color contrast may result in no B&W contrast, when the values of the objects are the same and they differ only in their color.

     

     

     

     

    And by shooting color and converting yourself, you can choose if and how much different colors contrast with each other when converted to black and white.
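
    A minimal sketch of that last point (numpy; the channel weights are just examples, not a recommendation):

        import numpy as np

        def to_bw(rgb, weights=(0.299, 0.587, 0.114)):
            """Convert an RGB image (H x W x 3, floats 0-1) to greyscale
            using a chosen per-channel mix."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()          # keep overall brightness roughly constant
            return rgb @ w

        # A saturated red patch and a saturated green patch...
        red = np.array([[[1.0, 0.0, 0.0]]])
        green = np.array([[[0.0, 1.0, 0.0]]])

        # ...with a luma-style mix they land on different greys,
        print(to_bw(red), to_bw(green))                        # ~0.30 vs ~0.59

        # but with an equal mix they become the same grey - no B&W contrast at all
        print(to_bw(red, (1, 1, 1)), to_bw(green, (1, 1, 1)))  # both ~0.33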

  7. You may well be right, David. It's been a while since I've had to deal with pull down in Cinema Tools, so maybe things have changed.

     

    To respond to Mei's concern, unless you're actually seeing a problem with playback on your monitors, I wouldn't worry about it.

     

    Yes, I can see a problem: motion is not always smooth. It's most obvious on wide, panning shots.

    I think Phil's explanation of randomly dropped frames covers it.

     

    If I export to a USB drive and play back on a TV there's no issue. I guess because the TV is designed for 25fps playback (and probably 30 and maybe 24 too).

  8. Aha! So we are capturing a higher dynamic range, but that DR isn't preserved all the way through to the final delivery. That makes sense.

     

    So a camera that is able to capture, let's say, 9 stops will end up at maybe 6-7 after a light grade. And a 15-stop camera would end up at around 10, and retain more detail in the lift and gain?

     

    Thanks for all the input!

    I don't think that's quite right.
    There are 3 places along the process of image capture and display whose dynamic range we primarily have to worry about:
    1) The scene we're filming - this could have very many stops of range
    2) The camera (digital sensor or film) that is recording the scene - 9 or 15 stops say
    3) The output image - 6 stops?
    Going from (1) to (2), anything outside the camera's dynamic range is lost. Going from (2) to (3), no tonal range of the image is necessarily lost (unless highlights or shadows are clipped for creative reasons or by mistake), but the range is scaled down to fit into the available dynamic range of the display.
    So a 9 stop camera would record 9 stops of the real scene, and those 9 stops would be shown as 6 stops on a typical display.
    A 15 stop camera would record 15 stops of the real scene, and those 15 stops would be shown as 6 stops on a typical display.
    The range between the brightest and darkest parts of the output picture is the same for both cameras, because they are both being shown on the same display.
    The difference is that the lower dynamic range (9 stops) camera won't have recorded as much into the highlights and shadows, so detail won't be visible there, and so the mid tones of the image are spread over much of the 6 stops of display range. For the 15 stop camera, all 15 stops have to fit into 6, so the mid range is compressed into fewer stops to allow the highlight and shadow stops to fit, and the mid range loses contrast.
    You *can* display HDR images on a normal dynamic range display. This is what is happening in the images you see if you google image search for "hdr photography". I think it's a pretty ugly look, as do many people, which is probably why it's not used very often, particularly for video.
    This is also basically what's happening when you look at log footage on a normal display without the appropriate LUT, and it looks very 'flat' (lacking contrast).
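
    A toy version of that arithmetic (the stop counts are just the ones from this thread, and a real grade isn't the simple even squeeze this implies):

        # Each camera records some number of stops of the scene; the display
        # can only show about 6. Squeezing more captured stops into the same
        # display range leaves less display contrast per captured stop.
        display_stops = 6

        for camera_stops in (9, 15):
            per_stop = display_stops / camera_stops
            print(f"{camera_stops}-stop camera: {per_stop:.2f} display stops per captured stop")

        # 9-stop camera:  0.67 display stops per captured stop
        # 15-stop camera: 0.40 display stops per captured stop
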
  9. Thanks Stuart.

     

    With the computer monitor, I'm not talking about filming it, but watching content on it.

     

    Any and all 24fps content is going to somehow miss or double frames as it's displayed. And I think 60Hz is a very common refresh rate for computer monitors, probably because of the 60Hz mains in the US and elsewhere.

  10. Does anyone have any newer thoughts on this?

     

     

    I'm in the UK and I've always shot 25fps until now.

    I occasionally shoot stuff that ends up on TV where I'm pretty sure shooting at 25fps is still the best option, but should I shoot stuff that I know will only be seen on the web at 24fps?

     

    I have a related question which despite much research I've not been able to get a good answer to. Many computer monitors have refresh rates that aren't multiples of 24 or 25. My 3 monitors are all at 60Hz.

    This seems to mean I'm never really going to get smooth motion. Why isn't this talked about more online? Am I missing something?
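
    This is the arithmetic behind my worry (a rough sketch; I'm ignoring whatever smarter resampling a player or GPU might do):

        # How many monitor refreshes does each frame get on a 60 Hz display?
        refresh_hz = 60

        for fps in (24, 25, 30):
            print(f"{fps} fps on {refresh_hz} Hz: {refresh_hz / fps:.2f} refreshes per frame")

        # 24 fps: 2.50 -> frames alternate between 2 and 3 refreshes (the 3:2 cadence)
        # 25 fps: 2.40 -> an uneven mix of 2- and 3-refresh frames, so pans stutter
        # 30 fps: 2.00 -> every frame shown exactly twice, perfectly even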

  11. But the way these Blomkamp movies are being presented to the public, it makes me feel like he is only doing them because he is somehow out of work. Either trying to get a studio interested in him or his ideas or trying to sell himself or his movies in an alternative way outside of the studio system. Which again could only mean he is not getting any studio work.

     

    I think that's a slightly old-fashioned, hierarchical way of looking at things.

     

    The only absolute requirements for movie making are film-makers, film viewers and some way for them to be connected. Based on what he's said, Blomkamp isn't touting for work in any way; he's making his own work, as he wants to.

     

    In this area the movie industry is probably some way behind other entertainment: for example Valve becoming their own game distribution platform, bands releasing their own records, comedians making their own specials.

  12. Please consider that on INTERSTELLAR, the 15 perf IMAX, 65mm was constantly shot handheld at a T2 with a +3 diopter quite often. That makes the focus pulling quite brilliant when you know that bit of information.

     

    What does "+3 diopter" mean here please? I know a diopter is a type of glass lens element, but what is it's effect? Does it multiply the focal length by 3?

    Thanks.

  13.  

    The absolute worst offender of this is Gangster Squad. I saw it in the theater with a friend who has no knowledge of film making at all and during the final act's night shootout and fist fight she leaned over to me and asked why the movie looked weird like a TV show. It's because the motion blur was so blatantly not film, it was pulling her out of the movie. Most regular audiences can't seem to tell or don't care about the difference between something shot on digital or film, but when your format is pulling a regular audience member out of the film's immersion, you're doing something wrong.

     

    In what inherent way does digital motion blur look different to film motion blur?

     

    Isn't it just down to how long the shutter is open?

  14. There's an article on the ASC website where the gaffer on The Dark Knight Rises explains:

     

    "For close-up and medium shots of the couple, an Arri LoCaster LED with a 1'x1' soft-box snoot and interchangeable diffusion frames provided eyelight, and 5K tungsten Chimeras provided a soft edge. “We always tried to approach the eyelight from a complimentary angle to the camera,” says Geryak. “If the camera was over someone’s right shoulder, I’d stand over his left shoulder and try to wrap the light from the key side so it looked more natural.”

     

    https://www.theasc.com/ac_magazine/August2012/DarkKnightRises/page1.php

  15. I've worked on a few smaller productions where people have used a tape measure to pre-measure distances to actors' marks so they can focus to them during a take.

     

    From memory (it was a while ago) they measured from the mark on the camera to the actors' mark, so the line of measure is usually off at some angle from the perpendicular to the film plane.

     

    I think the distance scales on lenses specify the perpendicular distance between the film plane and the plane of focus. That would typically be shorter than the measured distance.

     

    Do focus pullers use trig to figure out the distance they should pull to on the lens, having only measured the direct distance? Or do they measure perpendicular distance?
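
    To make the trig part of the question concrete (made-up distances, and assuming the offset is measured at right angles to the lens axis):

        import math

        # Tape measurement from the camera to the actor's mark (the 'diagonal'):
        measured = 3.0   # metres
        # Sideways offset of the mark from the lens axis:
        offset = 1.0     # metres

        # Perpendicular distance from the film plane to the plane of focus:
        perpendicular = math.sqrt(measured**2 - offset**2)
        print(f"{perpendicular:.2f} m")   # 2.83 m - a little shorter than the tape reading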

  16. I didn't put that very well.

     

    I didn't mean stagey in a negative way, just that it looks like it was shot on a stage. I definitely didn't mean stylized, because the whole trailer is _very_ stylized.

     

    I guess my real question is, assuming they've made the interiors look deliberately like they were shot on a stage/not in a real location, why have they done that?

    Is it some reference I don't get? Is that how a lot of classic westerns were shot?
