Daniel Klockenkemper

Los Angeles, CA
  1. On a film camera (like your example) this is a reflection off of the side of the film gate, which is usually a shiny metal. I've simply heard it called gate flare, but there could be some more official name that I don't know of. Interestingly, I've seen similar flares on mirrorless digital cameras from low-quality lens adapters that lack proper internal light traps or flocking. If there's not a broader name for it, perhaps we need one? Mechanical flaring, perhaps?
  2. I'd suppose that "EM. NO." is short for emulsion number.
  3. Through some searching, I found an old Akkuman catalog; the cells are: 3372319 Saft VHCS3200 3,2Ah NiMH Sub C. Normally I would say that cells above 3000 mAh should have no trouble running the camera at high speed. However, given that the pack's manufacture date is June 2005, the cells' capacities are likely much reduced at this point. I would consider having the pack re-celled. Sub-C NiMH cells can be found with capacities up to 5000 mAh, but higher-capacity cells often also have more pronounced self-discharge. When I owned an ACL, I never needed more than 12V - the motor could quickly reach 75fps with high-current NiMH cells. I would advise against using a V-mount or lithium-chemistry battery without a voltage regulator. The circuit board image you posted does look similar to a voltage regulator, with the small screw on the blue component probably being for voltage adjustment. "VCLX" is a model of block battery made by Anton Bauer that has a 14.4V 4-pin XLR output, which makes the regulator theory very likely. There are D-tap to 4-pin XLR cables on the market that could make it possible to use a V-mount with the regulated cable. You should test the cable's output with a multimeter before connecting it to the camera. I would highly recommend having a professional, or a friend who is savvy with electronics, check things out for you!
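For a rough sense of how capacity translates to runtime, here's a minimal sketch. The 2 A average draw is purely a hypothetical figure - I don't have real current-draw numbers for the ACL motor - but it shows why an aged pack struggles at high speed:

```python
def runtime_minutes(capacity_mah: float, draw_ma: float) -> float:
    """Rough runtime estimate: capacity divided by average current draw."""
    return capacity_mah / draw_ma * 60

# Hypothetical 2 A (2000 mA) average draw at high speed; actual ACL draw may differ.
print(round(runtime_minutes(3200, 2000), 1))        # fresh 3200 mAh pack: 96.0 min
print(round(runtime_minutes(3200 * 0.6, 2000), 1))  # aged pack at ~60% capacity: 57.6 min
```

The point isn't the exact minutes - it's that capacity loss and increased internal resistance compound, so an 18-year-old pack falls off far faster than the label suggests.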
  4. The Sony cameras' eND do take a small amount of time to adjust, even at the fastest setting. I think if the iris ramp were done over the course of about 3 seconds (or longer), the eND would be able to keep up and the effect would be seamless. If the desired effect is for the depth of field to change as fast as possible, a motorized setup would likely be faster - you'd only be limited by the speed of the motors. Iris rings and dual-pola variable NDs don't have to turn very far to make exposure changes. I've used the Sony eND in a few scenarios when the exposure changes dramatically, e.g. a garage door opens and sunlight pours in, or the lights come on unexpectedly in a dark room. In the former situation, the change was gradual so the effect was seamless. In the latter, since the lights coming on was a surprise to the characters, it made narrative sense that it would take a second for their eyes to adjust, so I lived with the very brief overexposure; the auto adjustment is smooth and otherwise doesn't call attention to itself.
  5. Keslow Camera is listed as a place to rent the Cinefade, but presumably you've already checked with them. Panavision demonstrated a similar system as a prototype at Cine Gear Expo a few years ago; you could check with them, but I haven't heard any updates so I don't know if the filter was ever produced: https://www.cined.com/panavision-liquid-crystal-filter/ An alternative would be to use a Sony camera like the FS7 II or FX9, both of which have an electronic variable ND which can respond to a manual aperture change if the ND is set to auto. Those cameras are surely much more available than either filter system. For any other camera, you could also add a motor to a dual polarizer style variable ND in a geared filter tray, like the Bright Tangerine One Tray. You'd need a multi-channel MDR like a Preston system - while I haven't used a Preston this way, it should be possible. If you go this route, you'd probably also want to be careful in selecting the combination of pola filters to make sure the filters don't introduce color shifts when you change exposure.
  6. Several years ago I was looking for more information about why Zeiss might use a triangular aperture (technically a Reuleaux triangle) and came across a white paper I wish I'd saved about the effect of aperture shape on laser optics. What I recall from it is that a perfectly round aperture produces concentric rings of diffraction, radiating outward evenly in all directions; while a triangular aperture has lines of diffraction that radiate away parallel to each pair of corners, kind of like a 6-point star filter, with areas in between the lines unaffected by diffraction. So my guess is that Zeiss was trying to delay the onset of diffraction-related softness at small f-stops, to keep the image acceptably sharp for a wider range of apertures. It certainly seems characteristic for Zeiss at that time to care more about the sharpness of the in-focus parts of the image than about the softness of the out-of-focus areas.
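As an illustration of why diffraction softening sets in at small stops (this is the textbook formula for a round aperture, not the paper's data - a triangular aperture redistributes the pattern rather than following it), the Airy disk diameter grows linearly with the f-number:

```python
def airy_disk_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Diameter (in microns) of the Airy disk's first dark ring
    for a circular aperture, at a given wavelength (green by default)."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# The blur disk doubles every two stops:
for n in (2.8, 5.6, 11, 22):
    print(f"f/{n}: {airy_disk_um(n):.1f} um")
```

By f/16-f/22 the disk is tens of microns wide - larger than fine grain or small photosites - which is the softness the triangular aperture was presumably meant to push back against.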
  7. This is a fairly easy query to look up - there were short sequences shot as stills with the Canon 1D Mark III, which has a burst mode capable of 10fps. The different gamma from the 24fps footage is also a minor tell. https://shotonwhat.com/cameras/canon-eos-1d-mark-iii-camera This article gets the frame rate slightly wrong: https://www.reuters.com/article/uk-mumbai/slumdog-crew-took-to-the-streets-of-mumbai-idUKTRE49T37T20081030
  8. Without having data for the bulbs' spectrum*, my best guess is possibly a little bit, but probably not very much. The strongest UV filter I've personally seen - a Tiffen UV-2A - has a perceptible light yellow tint, which means that it does slightly filter some blue wavelengths in the visible spectrum. If the filter appears clear/neutral to the eye, then it follows that it shouldn't affect the brightness of the bulbs' output in the human-visible spectrum. To be sure, I'd recommend visiting the location with the UV filter you plan to use, and metering the bulbs through the filter. A C800 color meter might also be informative if you can get your hands on one. Of course, the only way to be really sure is to shoot a film test, but I know that's a luxury these days. The original article I remembered about UV filter transmission is here: https://www.lenstip.com/113.1-article-UV_filters_test_Introduction.html There's also this more recent one from Lensrentals: https://www.lensrentals.com/blog/2017/09/looking-at-clear-and-uv-filter-spectrograms/ * I searched a little bit for this, but the few spectral transmission graphs I could find for tanning bulbs only show the UV portion, and omit the visible light wavelengths.
  9. While I don't have any experience with tanning beds - neither from filming nor otherwise - I do know that the major filter manufacturers have made, and continue to make, UV filters in 4x5.65" sizes. Tiffen has a few strengths, with older filters labeled UV haze 1 or 2, and newer filters named UV-15, UV-16, and UV-17; higher numbers filter more UV. Schneider offers a UV-410 filter that claims to cut most UV wavelengths. I haven't personally tested any of the above UV filters... I am aware of a test of stills filters, which found that B+W (owned by Schneider) was by far the most effective at actually filtering UV, while the lone review on B&H of the Schneider UV-410 filter claims that the filter added significant flare/glare. It's possible that the B&H reviewer is correct, but it's also possible that they made some mistake, like not bothering to clean the filter, or not mitigating the flare by tilting the filter... A polarizer can help with UV exposure in landscape scenarios, but this might not be applicable in your situation, with the camera aiming directly at the UV source. I'm sorry I can't help with anything more than hearsay! But at worst the UV would cause a loss of contrast, for which you might be able to compensate somewhat in the grade. Flicker would be my biggest concern, and it seems like you're already prepared for that. Your test with the Cine Check will be the most informative. In the absence of a test, I would film with a 144-degree shutter, assuming a 60Hz line frequency.
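The 144-degree figure falls out of simple arithmetic: mains-powered discharge lamps pulse at twice the line frequency, and a flicker-free exposure spans a whole number of pulses. A quick sketch, assuming 24fps:

```python
def flicker_free_angle(fps: float, line_hz: float, pulses: int = 2) -> float:
    """Shutter angle whose exposure time spans a whole number of lamp pulses.
    AC discharge lamps pulse at twice the mains frequency."""
    return 360.0 * fps * pulses / (2 * line_hz)

print(flicker_free_angle(24, 60))  # 144.0 -> a 1/60 s exposure at 24fps
print(flicker_free_angle(24, 50))  # 172.8 -> the familiar 50 Hz equivalent
```

Integrating exactly two pulses per frame is why the exposure stays even no matter where the shutter falls in the lamp's cycle.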
  10. While I get the point you're trying to make, your point would be better made if you didn't make demonstrably false statements in the same breath. (Nevermind not even trying to answer Chantel's question in a constructive way...) Super 16mm isn't "1mm added," it's 2 and a quarter millimeters wider. By itself, that's a 20% increase in area; but if normal 16mm has black bars added (i.e., if the image is cropped) to make the same aspect ratio as Super 16, S16 has 49% more image area on the negative. And because film resolution is determined by the size of the image area, having ~50% more area is a compelling and perfectly understandable reason to prefer Super 16. Yes, plenty of us have gotten away with mixing the two formats, myself included; and standard caveats about aspect ratio / extraction areas apply - there obviously would be no difference whatsoever if one intended to deliver 4:3 or 1:1. But if you have some weird axe to grind about the difference between formats, you might consider getting your facts straight in the first place - especially facts that are so trivial to look up.
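For anyone who wants to check the arithmetic, here's a quick sketch using commonly published camera-aperture dimensions (exact specs vary by a hair between sources, so treat the percentages as approximate):

```python
std_w, std_h = 10.26, 7.49   # standard 16mm camera aperture (mm)
s16_w, s16_h = 12.52, 7.42   # Super 16 camera aperture (mm)

# Raw area gain, ignoring aspect ratio:
raw_gain = (s16_w * s16_h) / (std_w * std_h) - 1

# Crop standard 16mm to Super 16's ~1.69:1 aspect ratio, then compare:
cropped_h = std_w / (s16_w / s16_h)
cropped_gain = (s16_w * s16_h) / (std_w * cropped_h) - 1

print(f"raw area gain: {raw_gain:.0%}")              # ~21%
print(f"same-aspect area gain: {cropped_gain:.0%}")  # ~49%
```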
  11. Thank you! I think technical reasons are the original purpose for all strengths of diopter, and the extra artefacts of the higher strengths are probably an unintended side effect. But using technical flaws in unexpected creative ways is part of the innovation and fun of cinematography. In your case, anamorphic lenses are a special scenario - especially classic/traditional lens designs - because they're known for having different behavior depending on the focus distance. I think they might behave differently, I wish I could try that! Thinking about all this reminds me of a tilt-shift style diopter holder I saw at the last Cine Gear Expo before the pandemic: https://www.vocas.com/vocas-5-axis-diopter-holder.html I can imagine a lot of creative ways to use something like that, especially with split or selective diopters. Neither would I, and I hope I didn't come across as dismissive in my previous post. If people want to believe a little bit of magic is real, why spoil the fun?
  12. I also decided to do a test to get to the bottom of this. I actually have the same lens as the one in the original post, a Zeiss Contax 50mm (not rehoused, but the glass is what matters), and combined it with a "full-frame" camera and some Lindsey Optics diopters. The first test was set up to replicate the interior conditions mentioned above - the slate was 4.5 feet away, and the background 15 feet away. The lens was set to f/2. Focus was sharp on the slate for each take. I only used the +1/4 and +1/2 filters on this test, because the farthest focus with a +1 diopter is 1 meter. Diopters always went into the same tray in the matte box. I had read or been told at some point that putting a diopter in the wrong way can increase aberrations, so I tested that, too: https://vimeo.com/522484255/02c168ba97 As Stuart already demonstrated, I don't think there's a meaningful difference. Feel free to download the original files, pixel peep, and come to your own conclusion. I also wanted to see if the diopters were introducing any field curvature, which would affect focus on the background. I set the focus to the surface of the coin for each take. Skip to the second half if you want to see the differences made obvious: https://vimeo.com/522487048/887642358c Without the filter, the Zeiss 50mm has a very flat plane of focus (which is why Zeiss called it a 'Planar' lens after all). There is some mild field curvature added by the +1/4 and +1/2, but not enough to notice if you aren't looking for it. The +1 does noticeably distort the field of focus away from the background, especially when turned the 'wrong' way. But given that a +1 filter is confined to 1m or closer distances, its utility in normal photography is limited. My final thought is that it's basically a magic trick for the people at the monitor. The focus on the lens is set for the background; "Now watch as I put in the special filter for the talent's close up!" 
Everyone watches the background suddenly go out of focus before their eyes as the filter slides into place.
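The 1-meter limit I mentioned generalizes: with the taking lens set to infinity, a close-up diopter of power D focuses at 1/D meters in front of the filter. A minimal sketch:

```python
def max_focus_distance_m(diopter_power: float) -> float:
    """With the taking lens set to infinity, a +D close-up diopter
    converges parallel rays at 1/D meters in front of the filter."""
    return 1.0 / diopter_power

for d in (0.25, 0.5, 1.0):
    print(f"+{d}: farthest focus {max_focus_distance_m(d):g} m")
```

That's why the +1/4 and +1/2 (4 m and 2 m limits) were usable for the interior test, while the +1 was confined to the coin setup.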
  13. Thanks, and likewise! I think you've got the right idea. I'd say there are three categories - "this looks good," "this might look iffy on the monitor but I know I can deal with it in post," and "I need to do something about this before we do a take." Watch enough movies and you'll see that even the greatest cinematographers occasionally have a shot here and there where they got into a little trouble with the exposure... but it's only something that a real nerd like me would notice. This interview with Gordon Willis about The Godfather Part II is great, and I think one of his statements applies just as well to the plethora of choices in digital cameras we have today: "There’s a great deal of latitude in the Eastman color negative. A lot — an incredible amount. But you have to know where it is you’re going to put it."
  14. Hi Robert. I'll give it a try, and probably explain a few things you already know along the way. 🙂 I think it helps to start with how it works for film, because it's easier to visualize. If you look at the pictures in this post on another forum - https://www.minilabhelp.com/forums/topic/25897-control-strip-issues-v30-ra-kodak-chem-lorr/?tab=comments#comment-62349 - they're Kodak control strips, with calibrated pre-exposed steps. The lightest, most see-through parts have no exposure; and the more exposure the negative received, the more "dense" (i.e. opaque) the negative is. Consider what it means that the straight-line portion is "linear" - that a measured amount of exposure will have a predictable and proportional change in how opaque the negative is. The amount of change stays constant: if you photograph a grey card first without a filter, and then make a second exposure with an ND 0.3 filter in front of the lens, there will be a change of 0.3 optical density between the two exposures when you measure the negatives with a densitometer, as long as the two exposures are within the straight-line portion. The toe and shoulder are where the change in exposure is not equal to the change in recording. That's "tonal compression." If you have details that were exposed in the non-linear parts, and try to bring them back into the straight-line range, they probably won't look how we expect they would look. For example, if an actor's skin is overexposed enough to be in the shoulder, there's little chance their skin tones will look right if you bring the levels down in post. Digital sensors are fundamentally a bit different from film, and digital "density" is really "sensor voltage interpreted as a level of brightness." (That sentence glosses over a lot of complex math and engineering.) 
Hypothetical digital characteristic curves are still interpreted as logarithmic graphs that simulate an optical density-like response, partly because it's how it was done in the past and we're used to working that way, and mostly because a log curve is easier for us to wrap our heads around than a linear representation. Looking at a log curve can be a handy way to see at a glance how a camera or film stock might respond to exposure. The curve only tells part of the story, though, especially for digital sensors. A camera's sensor might clip long before the log curve would indicate - the manufacturer might have given the curve a longer shoulder so they can still use the same curve with better sensors in the future, or to prevent issues if raw footage is post-processed far differently than it was exposed. And for both film and digital, each color layer (for film) or color channel (for digital) has its own curve, with potentially different toes and shoulders for each color - meaning one color might clip or show noise before the others. Intentionally exposing in the toe or shoulder is pretty rare - you have examples like Gordon Willis deliberately pushing and underexposing The Godfather so that the image couldn't be brightened at all in printing, locking in his exposure choices. On the 2007 film Sunshine, to convey the intensity of the sun up close, Alwin Küchler intentionally overexposed the negative by up to 10 stops in certain scenes so that details were actually "burned out." It would seem much more common to make small exposure changes to avoid going too far into the toe or shoulder, like underexposing slightly to retain highlight detail, or overexposing a greenscreen shot slightly to get a better key. Generally, I think the idea is to know where the toe and shoulder start, and how that will translate into the final image via post, so that you can know on set when your exposure is in trouble, and do something about it before it's too late! 
If you want to do a deeper dive into this, the first few chapters of David Stump's Digital Cinematography book go into much more detail. It's not the easiest read, but it at least has more pictures than the wall of text I just wrote. But I hope it helps!
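The ND 0.3 example above can be checked numerically: optical density is the base-10 log of the attenuation, so 0.3 of density passes about half the light, which is one stop:

```python
import math

def transmission(density: float) -> float:
    """Fraction of light passed at a given optical density."""
    return 10 ** -density

def stops(density: float) -> float:
    """Exposure change in stops corresponding to a density change."""
    return math.log2(1 / transmission(density))

print(round(transmission(0.3), 3))  # 0.501 -> about half the light
print(round(stops(0.3), 2))         # 1.0  -> about one stop
```

This is also why densities add when you stack NDs: a 0.3 plus a 0.6 behaves like a 0.9, i.e. three stops.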
  15. The aperture works exactly the same; depth of field, exposure, etc. will behave as they normally would. You can think of it as extracting the central 1/3 of a 135 stills frame. I learned this the hard way: I once borrowed a Nikkor 55mm f/1.2 for a shoot, hoping it'd be a bit more similar to the Kowa 16mm primes in terms of the coatings and the stop. While passable wide open for full-frame stills, on 16mm the chromatic aberrations were proportionally larger since the area of the negative being used is smaller. I went back to my Contax lenses after that - the Zeiss 50mm/1.7 was usable wide open, so it wasn't worth keeping the Nikkor if it'd need to be stopped down anyway. It almost sounds like circular logic - the better your lenses are, the better they'll look on smaller film formats.
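To put a number on the "central 1/3" comparison (using the 36 mm stills frame width and the Super 16 aperture width; regular 16mm is narrower still at about 10.26 mm):

```python
ff_width = 36.0    # 135 stills frame width (mm)
s16_width = 12.52  # Super 16 camera aperture width (mm)

crop = ff_width / s16_width
print(f"crop factor: {crop:.2f}")  # ~2.88, i.e. roughly a third of the width
print(f"a 50mm lens frames like ~{50 * crop:.0f}mm would on full frame")
```

The crop factor is also why the Nikkor's aberrations looked worse: the same physical blur on the negative gets enlarged nearly 3x more on the way to the screen.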