Leaderboard


Popular Content

Showing content with the highest reputation since 07/26/19 in all areas

  1. 3 points
    Hmm an attempt to get more coverage? Cropping reduces quality. Or they could have been running at different framerates, shutter speed, stills, VFX look around etc... who could forget the 35mm/HD hybrid from:
  2. 3 points
    Because the mirror/shutter edge lines up with the expanded edge of the S16 gate aperture, if you don't decrease the shutter angle from 180 to 172.8 degrees you will get smearing along the whole edge of the frame. If you crop it out you may as well not have converted to S16 in the first place. Many people glued a lightweight wedge to the mirror edge to do this, but it needs to be securely fixed and very accurately positioned so as not to scrape while spinning. Typically a job for a trained technician.

    The magazine conversion is less essential, though there is definitely a possibility that the expanded S16 area will get scratches, scuffs or simply bruising. Again, it affects the whole side of the frame where rollers, sprockets and guides will contact the newly expanded image area, so cropping just gets you back to N16.

    If you don't re-centre the lens mount you probably won't notice anything drastic with 35mm format lenses, except that zooms will track off to one side.

    How are you modifying the gate? If it's not done very competently you will get scratches from burred edges. Don't just use a file, for example. If you expand into the area of the left vertical support rail you need to machine that rail down thinner all along its length so that the expanded picture area does not run along it. Google pictures of the narrow left rail on SR3 gates.

    When I was working on Arriflexes, I would use a jig with a gauge to re-fit the gate so that the aperture lined up with the ground glass, and then use a depth gauge to check and adjust the flange depth to within 0.01mm. Everything on Arriflexes is adjustable; if you take them apart you can easily lose fine settings.

    Personally I think if you can't afford an already converted S16 SR2, it's a waste of time trying to convert one on the cheap. A half-arsed conversion will turn what was a professional camera into little better than a quieter K3.
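As a quick sanity check on the 180-to-172.8-degree change, a rotary shutter's exposure time is (angle / 360) / fps, so the narrower angle only costs a tiny fraction of a stop. A minimal sketch (the numbers here are just the standard 24 fps case):

```python
import math

def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time in seconds for a rotary shutter: (angle / 360) / fps."""
    return (shutter_angle_deg / 360.0) / fps

t_180 = exposure_time(180.0, 24.0)     # 1/48 s
t_1728 = exposure_time(172.8, 24.0)    # 1/50 s
loss_stops = math.log2(t_180 / t_1728)
print(f"180 deg: 1/{1 / t_180:.0f} s, 172.8 deg: 1/{1 / t_1728:.0f} s, "
      f"exposure loss: {loss_stops:.3f} stops")
```

The change from 1/48 s to 1/50 s is under a tenth of a stop, which is why the smaller shutter angle is an acceptable trade for losing the smear.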
  3. 3 points
    Modern digital cameras often do not see saturated colour very clearly, and there isn't a very good solution to it. The problem is that, instinctively, one would assume that the RGB filters on a Bayer-patterned sensor would be bright, saturated, primary colours. They're not. Often they're pretty desaturated, which helps with sensitivity (by not filtering out too much light). It also helps with sharpness, because the RGB images from the Bayer sensor are not as different as we'd expect; it's easier to infer where sharp edges are in the image since all of the pixels can see most of them. The result is a picture with rather reduced saturation.

    This can be corrected with what a specialist might generally call "matrixing," but which basically means "winding the saturation up." This works to a degree, but subtle distinctions between colours can be reduced; for instance, a lot of Bayer cameras can have trouble telling purple from blue, and it can introduce chroma noise if people try too hard to tease out the colorimetry.

    There are a lot of caveats to all of this. Higher end cameras are more likely to use more saturated filters, accept the sensitivity and sharpness hit, and achieve better colorimetry as a result. An Alexa is not a great example because it's far from the latest technology, but it was never a design which targeted massive sharpness or huge sensitivity. It does, though, have a nice colour response.

    Also, the human eye works very much in the same way; it does have red, green and blue-sensitive cells, perhaps better described as long-wavelength, medium-wavelength and short-wavelength because they have a very broad sensitivity that overlaps a lot, much like a camera sensor.

    I don't know if what you're describing is caused by all this, but it's likely it has at least some impact.

    P
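The "matrixing" idea can be sketched as a 3x3 matrix that pushes each channel away from luma. This is a minimal illustration, not any camera's actual matrix; the only real constants are the Rec.709 luma weights, and the input colour is made up:

```python
def saturation_matrix(sat: float):
    """3x3 matrix blending between luma (sat=0) and identity (sat=1).

    sat > 1 boosts saturation, which also amplifies chroma noise --
    the trade-off described above.
    """
    r_w, g_w, b_w = 0.2126, 0.7152, 0.0722  # Rec.709 luma weights
    return [
        [r_w + sat * (1 - r_w), g_w * (1 - sat),       b_w * (1 - sat)],
        [r_w * (1 - sat),       g_w + sat * (1 - g_w), b_w * (1 - sat)],
        [r_w * (1 - sat),       g_w * (1 - sat),       b_w + sat * (1 - b_w)],
    ]

def apply_matrix(m, rgb):
    """Multiply a 3x3 matrix by an RGB triple."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# A desaturated "sensor" red becomes a more saturated red after matrixing:
muted_red = [0.6, 0.4, 0.4]
print(apply_matrix(saturation_matrix(1.8), muted_red))
```

Note how the red channel moves up and the others move down: the gap between channels grows, which is exactly where subtle hue distinctions (purple vs. blue) can get crushed.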
  4. 2 points
    If anyone says they don't see a difference, either they're sitting far, far away from the screen or their eyesight is in question, honestly. There ARE some very rare examples of super clean 35mm, but yeah, otherwise it's plain as day. 16mm and 2-perf are of course even more blatant. But hell, even anamorphic 35mm these days can be plenty grainy. I love it when DPs push process and really aren't scared of grain.
  5. 2 points
    But this is evangelism from your side, by the way. However, I do care about the choice of formats and I do respect the choices of others. What I don't like is when people do "anti" film propaganda, because that ends in dumb discussions with producers. Just to recall, this is what I wrote: "not saying that this trailer does look terrible but i got distracted by the look. And i loved wide-angled shots chivo does on the prior mallick movies." That just means I got distracted by the look; that's what the look does to me, whatever it was shot on. I personally like both mediums, digital and film, and I mainly use the Alexa for TVC/advertising work. But I still see many reasons for both formats, film and digital, in 2019. And I can't understand why people push digital as being the same as film, or say there isn't any difference (which you clearly said above). There is still a difference, and I can tell you that from many, many telecine and grading sessions. It's also a reason why many cinematographers and photographers still use film. And no, I am not a film fanboy, but saying there is no difference isn't the truth. And yes, it's a cinematographers' forum; we should still discuss that! But enough for now, sorry for getting off topic.
  6. 2 points
    I didn't read the original post as that. Maybe there was nothing to defend? Anyway, here's to good films. Whichever method.
  7. 2 points
    For the record, I am the crazy person. Let me explain.

    So I went to my very first film festival as a director. The producer submitted it, so I had a free VIP pass and figured "why not?" I meet up with the crew, go into the VIP lounge and see people taking themselves way too seriously. It's a film festival so that comes with the territory, but they had the nerve to cater the lounge with JERSEY MIKE'S COLD SUBS. For our international posters: Jersey Mike's is a more pretentious Subway. Literally the only good thing they do is hot subs, but they gave us cold subs....

    The big positive before showings: I MET SINBAD AND ASKED HIM IF HE'S STILL GETTING CHECKS FROM "JINGLE ALL THE WAY"

    So now it's time to enter the theater to sit through a bunch of other shorts I don't care about so I can see what I worked on played on the big screen. 45 minutes of poor audio, bad acting, confusing narratives, and cliche visual choices later: the documentary I worked on played on the screen. I gave the producer 2 different final files. One was studio levels, the other was computer levels. I was let down to see the projectionist (guy with a MacBook) chose the wrong version, so the final image was overly crushed and had odd compression waves in the shadows (let me know if that might've been my fault; I hadn't seen it on ANY screen/projector I played it back on beforehand).

    Skip to this section for the best part of the festival hijinks:

    So a couple of us get out of the theater and there's a suit who walks up to us needing us for some mock red carpet interview.... here is my chance to promote this project. The following is 90% accurate to what was actually said:

    Interviewer: "So how did you and the producer come together for this project?"
    Me: "I just found an ad on Twitter. I showed up to Newark, New Jersey looking forward to being murdered by an internet stranger. Unfortunately that didn't happen, so here we are."
    Interviewer: "So were you the writer too?"
    Me: "It was a documentary, there were no writers."
    Interviewer: "How did you guys creatively gel?"
    Me: "Honestly really well. I had been using R&B for montages and transitions for years, so I could make this doc feel like an Osmosis Jones transition."
    Interviewer: "Did you see any of the other shorts?"
    Me: "Yeah, they were boring except for the one about cricket."
    Interviewer: "So you're nominated for an award, any advice you'd like to give your fellow filmmakers?"
    Me: "Festivals are cool, but no one actually shows up to these things, so focus your time and energy on building an audience via social media. Also, film school is a scam. Do not go to film school, it is a debt trap. Your parents are wrong, they know literally nothing about this industry."

    UPON SAYING FILM SCHOOL IS A SCAM, THE ENTIRE CAMERA CREW WAS SILENTLY CHEERING ME ON, and then after we walked off the red rug, 2 of the festival helpers (all of these were high school kids) called out like "Dude you're awesome, I was trying to figure out if I should go to art school!" I went up and talked to the guy; next to him were 2 girl helpers, all fascinated with who this guy saying hella real stuff was. I told them I was a filmmaker who built my audience via social media, my work had gotten over 100 million views, and at one point YouTube star PewDiePie stole a clip from one of my videos and had to email me to resolve the issue. Soon after, like 8 of the high school interns were crowded around to hear me talk; it was like a fever dream.

    At one point in the gathering I asked what they were getting paid to help; they said they were all unpaid interns. I responded with "I heard unpaid internships are illegal now?" That had them pumped up to just walk out during the middle of the festival. "OH MY GOD THANK YOU" one of them said. Another said they had to be waiting up at a door to make sure no one would sneak in. I told him "Dude, you're at a film festival, no one wants to sneak into this trash. You aren't even getting paid, they can't force you to do anything." Their supervisors walked by reminding them to return to their posts, and the kids were all reluctant to listen after my pep talk.

    Yes, I single-handedly destroyed the morale of a film festival's entire working staff. So the lesson to be learned here is: I am a universal entertainer, and someone better get me a show deal before I turn actually crazy.
  8. 2 points
    Allow me to lead a round of applause. Not to self-plug, but I find this guy is apposite and intelligent: https://www.redsharknews.com/business/item/1375-opinion-is-film-school-worth-it https://www.redsharknews.com/business/item/4870-how-to-give-the-best-advice-to-newcomers https://www.redsharknews.com/business/item/1372-don-t-work-for-free
  9. 2 points
    Funnily enough, there's a brilliant video on the Indy Mogul YouTube channel about use of water, featuring an intelligent, hilarious and good-looking British guy. In short a garden hosepipe is enough for even reasonably large areas, but the real issue is having control over that area. Don't wet down an area that some unsuspecting member of the public might drive or walk on. P
  10. 2 points
    Sounds like you already figured it out! Backlight and edge light is all about ratios. How bright your background, key, and edge/back are will determine the edge/back's effectiveness. Quality of light will only determine whether the edge/back has a hotspot or not: soft edge/back lights have no hotspots, while hard edge/back lights have a hotspot. (Also, if your hard edge/back light is too bright, putting diffusion on it will actually cut its output faster than dimming it will.)

    The still you posted from Shutter Island was shot by Robert Richardson, who often uses a strong backlight and bounce card combo (I believe it's known as a fire starter?). He uses a really bright backlight and bounces in the key from the backlight. I did this on a feature recently and it turned out great: --- I agree with Satsuki that halation makes these punchy backlights look great. Some lenses naturally have halation, like vintage or uncoated lenses.
  11. 2 points
    XX Neg can be tricky but also some great results can be had from it. Here is a Super-16mm film we developed and scanned to 4K on the Xena 5K machine which has gotten some really good attention.
  12. 2 points
    I don't think Bruce is saying to literally specialize in ultra high speed slow motion equipment. I think he's trying to communicate that it would be a good idea to specialize in something that very few others do where you're at. The question I think you need to ask yourself is, what's the vacuum in your market that you can fill? It might be slow motion, jib, etc., but do your research and see if it resonates with you.
  13. 2 points
    We shot "The Lighthouse" on 5222 and it's an old, soft, finicky stock. Whether it's worth a 4K scan depends on how you treat the film and the lenses you use. I mostly processed it at "-1/2" with 2/3 stops extra exposure. This sharpens it up and increases dynamic range. Even then, a gray tone would already be black at -4 1/2 stops incident. Highlights fare much better, but even so, latitude is not great. Neither is resolution - I'm not so sure 5222 achieves 4K when I see our untouched 4K scanned footage next to the 2K VFX footage. However, HDR might be worth it. Personally, I like more contrast in black and white and more subtlety in color. 5222 did have one superior trait: from my tests, 5222 has much more "local" or "micro" contrast and separation than either 35mm color film or Alexa footage, even while being softer and grainier. In that way, it is irreplaceable. Jarin

    ps: 7222 is much too soft. I wouldn't shoot it. In 16mm I'd shoot Tri-X instead and process it as a negative - a very pretty stock. If only they made it in 35mm! Just pull a stop to get the right contrast!
  14. 2 points
    When I first started shooting digital capture, I metered and lit for a "film" look. IOW, the meat of the image within the 6-stop range, with the extra highlight and shadow detail for rolling off into black and white. But over time I've adjusted my approach to using much more of the dynamic range of the camera for presentation and using less for rolloff. So I'm lighting much more high contrast than I used to, and grading with a much lower contrast. Kind of like lighting for 10 stops of DR rather than 6. For a day exterior this doesn't much matter, as I have little control over the lighting, but if I have a deep shadow area and a sunlit area in the same shot, I might not fill in the shadow area at all, or only a little. Note that when using this technique, one must carefully watch the clip points of the image, as there can be little room for recovery in post color correction. And if you are metering with this approach, a spot meter can be useful, but you must run some tests so that your ISO setting on the meter corresponds to the recording of the camera. So, ISO on the camera might not match that on the light meter. Of course you can also use the waveform or other tools from the camera and/or display, but they must not be viewed through a REC709 LUT or you won't see all the information.
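The meter/camera ISO calibration point above comes down to simple stop arithmetic: the offset between two ISO ratings is log2 of their ratio. A small sketch, where the specific ISO numbers are purely hypothetical examples, not test results:

```python
import math

def iso_offset_stops(camera_effective_iso: float, meter_iso: float) -> float:
    """Offset in stops between the camera's tested effective ISO and
    the ISO dialed into the light meter. Negative means the camera is
    effectively slower than the meter assumes (image will be darker)."""
    return math.log2(camera_effective_iso / meter_iso)

# Hypothetical: camera menu says 800, but tests show it behaves like 640.
offset = iso_offset_stops(640, 800)
print(f"Set the meter to the effective ISO, or compensate {offset:+.2f} stops")
```

A full stop of ISO always corresponds to a doubling, so `iso_offset_stops(800, 400)` is exactly 1.0; the function is just that relationship written down.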
  15. 2 points
    Thanks guys. I need to convince A24 to make prints! What's interesting about exposing and grading black and white is that you make day scenes brighter than you normally would, since it's your only tool to strengthen transitions between night and day. This is not fully portrayed by this first trailer, which has a very high number of shots from our "dusk" and "dawn" scenes. This film was much different than the Witch. This time, the night scenes around the "lantern" that look so dark in the movie were nearly blinding on set. It also has a proper black and often good highlights, unlike the low-con look of "The Witch." We may continue to stay rich in contrast for our next color film as well. Shall see. Harris Savides had such a profound influence on so many of us cinematographers. For me, the soft look and unending highlight scale stuck for a long time. Jarin
  16. 2 points
    Looking for Kurtz.. Mekong River .. never get out of the boat ..
  17. 2 points
    The inverse square law holds the same for diffused light as it does for a point source. The one difference is that when you place diffusion in front of a light, the diffusion surface becomes the light source. So you would measure from the diffusion, not from the lamp behind the diffusion.
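The law itself is just E = E1 / d^2, with distance measured from the effective source (the diffusion frame, per the post). A minimal sketch; the 1600 lux reading is an invented example value:

```python
def illuminance(e_at_1m: float, distance_m: float) -> float:
    """Inverse square law: illuminance at distance d, given a reading
    taken 1 m from the effective source (e.g. the diffusion frame)."""
    return e_at_1m / distance_m ** 2

# Hypothetical meter reading of 1600 lux at 1 m from the diffusion:
for d in (1, 2, 4):
    print(f"{d} m: {illuminance(1600, d):.0f} lux")
```

Doubling the distance quarters the light, i.e. a two-stop loss, which is why measuring from the wrong surface (the lamp instead of the diffusion) throws the falloff math off.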
  18. 2 points
    Ursa Mini Pro. Nikon Series E.
  19. 1 point
    Robin, the initial post has no reference to a film being "ruined" because it was shot digital. People have commented that the trailer looks video-ish and cheap but that is not a comment about the quality of the story. Nobody has dismissed the film. How can they, nobody has seen it yet. Forgive me but you often get very agitated about things that are not actually being said.
  20. 1 point
    The tech sheets seem to indicate that 250D is grainier, but also sharper than 50D. It also looks to have more of a toe in its characteristic curve, so more forgiving to underexposure while having less shadow separation. I sure miss the XTR Prod. What a handheld camera. Jarin
  21. 1 point
    Roger Deakins has said this when being interviewed a few times.. he couldn't tell if a film he was watching was film or digital .. he's also not afraid of using it.. and lights exactly the same .. thats what he says .. and really I do think he knows what he's talking about .. yes its a forum about camera work.. thats what Im saying .. people are not seeing the woods for the trees .. getting obsessed with digital meaning a film is ruined by using it.. its like saying the only "good" films are in BW.. Im sure this would have been a hot topic at the time if they had the interweb ..
  22. 1 point
    Malick is an acquired taste but I find that since he switched to digital, his films tend to feel quite cheap? I don't know, this one especially feels cheap to me.
  23. 1 point
    A few things...

    1. A "middle" grey card is not in the center of the optimum exposure range. At the standard rated ISO exposure, it will be about 1/2 stop below the middle, compared to the "X" crossover on a video greyscale chart.

    2. When viewing dynamic range on film vs. a digital camera, the DR refers to areas where detail is visible vs. not visible. But this does not mean that the quality of detail at the extremes is the same as the quality of detail in the middle of the range. While detail in the deep shadows can be distinguished, it is very grainy, which is disguised by the compression of the detail in the characteristic curve as rendered on a print. On the highlight end of the curve, it is not so grainy, but also not so detailed either, and there could be some color shifting. So it's best to think of the range of tones that you are capturing as about 6 to 8 stops, with everything above and below as "roll off" into shadows and highlights. This is especially true in 16mm, where you are enlarging the grain of the film much more than on 35mm film.

    3. When you "push" the film processing, you are gaining "exposure" in the middle by losing detail in the shadows. So, if when you expose normally you would get 2.5 to 3 stops of detail below your grey card exposure, when pushed 1 stop you will get 1.5 to 2 stops of detail below your grey card exposure. Also when pushing, instead of 3 to 5 stops of usable detail above the grey card, you will likely lose a stop there as well because of the over-development of the negative. So, when exposing film for push processing, instead of a perceived DR of 6 to 8 stops, it's more like 5 to 7 stops.

    Personally, I feel that push processing 16mm film is a pretty harsh look. And if you do, I would light and expose the film as if you were limited to 5 1/2 stops: 2.5 stops below the grey card and 3 stops above.
  24. 1 point
    I meant this model, NOT the M4/3 normal E2 model: http://www.z-cam.com/e2-s6/ They are developing raw options for their cameras. We just started to use the regular E2 model for documentary stuff and it manages very well in that environment, especially when you need the remote controls. The integrated monitor thing is not important for most users if they are really doing work-related stuff and not super low end vimeography as a hobby. A fixed non-orientable monitor on the back of the camera is just not usable at all in any real production environment, not even in low budget indie films. If you really want to do anything with the camera you will need a separate onboard monitor anyway. The built-in display on the Pocket is usable, though, if one is on such a low budget that one can't even afford proper lenses to put on the camera...

    It is nice to hear that the regular Pocket has been reliable in most uses. In my experience Blackmagic hardware tends to be cheaply built and unreliable at times, so it would be great if they had at least one product which does not release smoke and die in the middle of a production XD Probably the 6K Pocket will not be delivered on time, as has happened with all their camera models. Still better than the Nikon Z6 raw option, though; they NEVER deliver the promised features, not even one year late :P
  25. 1 point
    This camera is a game changer for many reasons, not just resolution, but also a "close to" 35mm sized imager and EF mount, two things that were not quite ready on the previous version. I really hope it will do 2.8k with my super 16mm lenses, that would be so hot if that worked. I was still going to buy a 4k version, but this new version really takes the cake. It also pulls Blackmagic's pants down on the URSA Mini Pro, because they will OBVIOUSLY be updating that very soon. My guess is with all the talk about 8k in the presentation, the new UMP will be 8k with a FF imager. That's kind of what the rumors have been steering towards. Now that the pocket is 6k with a 35mm imager size, it's the next best move for the company in my guestimation.
  26. 1 point
    Nah, your CP2 will become even more valuable because it has optical imperfections that narrative projects love.
  27. 1 point
  28. 1 point
    For big budget films, it seems much more common amongst DPs from Europe or Australia, where DPs more frequently operate the camera as well.
  29. 1 point
    Shooting green screen interviews today in a conference room at a certain software giant's building in Redmond, WA. What are you working on today? Upload a photo with your reply.
  30. 1 point
    I run a successful rental business here in Hollywood. I use ShareGrid and word of mouth to get my clients. I specialize in film cameras, so I have 35mm, 16mm and Super 8 cameras of various kinds. I also have some tools and training to do basic maintenance so I don't have to outsource much of it, so ownership isn't too bad. I also have a complete ecosystem where I can offer discounted film, processing and transfer for most of my non-commercial clients. I also have a complete post facility, so I can do synching, grading and editing for clients if they need it. I've been at it for 3 years now and I've learned A LOT about the business.

    1) I could not be in this business without low-cost equipment. If I bought new equipment on a loan, I would never be able to pay that loan on rentals alone. People who use ShareGrid or KitSplit are going to rent from the lowest-priced people, and there are so many out there with amazing deals that it makes no sense to try and compete unless you get a killer deal on equipment.

    2) No matter what, you need general theft and damage insurance for your own equipment. This is because, believe it or not, the insurance used through ShareGrid and KitSplit does not cover intentional theft. Meaning, if someone were to walk away with your package and disappear, the insurance would not pay out. So you have to pay quite a bit of money up front for special insurance to deal with that. The way I get around this is by researching my potential clients heavily, and if I don't feel right, I don't offer them the equipment. When you live in a media city this can be easy, but anywhere else it's a lot harder. So this is an added cost, on top of your loan payment.

    3) Most renters aren't very nice with your equipment. They throw things around, they drop it, they get it wet, they scratch the lenses, etc. 9 times out of 10, something I rent has damage upon return. Generally it's the viewfinder on the film cameras, because people use it as a handle or something. Sometimes I find a lens that needs a rebuild or a battery that won't charge. It really gets annoying because here you are offering a service and now you've got little issues that you can't charge their insurance for. Generally if something gets damaged, it's covered, but only if you catch it right away. Most of the time, damage isn't known until the next shoot, when you put all of the equipment to the challenge of working full time. This has bitten me so many times, and I just had to spend $1500 to get 3 of my lenses rebuilt because one of my renters months ago damaged them. I don't know who... so I'm unable to get anything for it.

    4) If you don't live in a media city, don't bother. Honestly, it's not worth dealing with the one guy who has never used your camera, is making a music video, and doesn't know much about it. Those people you do not want touching your equipment, and sadly, that's the common denominator for non-high-end digital equipment like the cameras you're describing. However, if you lived in a media city where there was high demand for digital cinema cameras, you could get a used Alexa XT Plus and a used set of Super Speeds and put together a $700/day rental for the kit, and probably do some decent business, even without accessories. You could also place your camera and body at a smaller rental house and let them deal with the insurance/storage. Whenever you want the camera, you simply book it through them and get it back. That sort of thing does work well and I know many people here in Los Angeles who do it. I personally don't think it's worth the risk financially, but it's for sure a possibility if you own equipment people want.

    In summary, I do think you need to specialize. So maybe you focus on a low-cost body, but you have great lenses. Maybe you focus on having a great body and no lenses. Maybe you focus on having a complete kit with lighting and all the accessories. You can't do the mid-range body and mid-range lens deal; nobody will want it.

    If I had this all to do again, I would probably not bother renting. I think over the long term I've made less money than it's worth for the wear and tear on the equipment. I also think I've been screwed by so many people that the time it takes to deal with those things is time I could be out making money. I think the cut-off for me is rapidly approaching $400-$500 a day for it to be worth renting, and I'm already raising my prices to reflect that. I have a pretty long client list of people who always come back to me, so I can literally just go with those people from now on and never rent on ShareGrid or KitSplit again. But it's taken me 3 years to get there and it's been a wild ride.

    Ohh and P.S. I use my equipment for shoots all the time; the conflicts are no big deal. Simply tell a would-be renter that the camera is booked that weekend.
  31. 1 point
    You have the right idea of counting the stops up and down from middle grey within the camera's native DR. It lets you go in and manually take care of things if you find the Rec709 conversion to be too punchy. On the other hand, I really only ever meter when shooting film. A color monitor in Rec709 has worked fine as my meter.
  32. 1 point
    Look for responses from others, too. I'm an amateur DP (work in post), but I know a bit about HDR/SDR workflows as a result, too. So take the above with a grain of salt.
  33. 1 point
    I personally wouldn't do it. Yes, it would be great to have your own high-end camera, but unless you have enough work of your own to justify the cost, you might be shooting yourself in the foot. How many people do you currently know of that are ready and willing to rent a camera from you, and how often would they realistically rent? I've been asking myself similar questions lately. I worked for a production company for years and then started freelancing about a year ago, so I had to start piecing together equipment of my own. Honestly, buying a camera is my last priority, and for the number of shoots I do, it doesn't quite make sense for me. I've been investing thousands of dollars in lights and grip gear. That's still added significant value for me, and when I need a camera, I borrow or rent one. Also, I personally wouldn't drop a bunch of money on a camera if you didn't have a decent lighting and grip package. So what if you had a Red? If you don't have lights your footage might not end up looking all that great anyway, and isn't that why you wanted to invest in a nice camera?
  34. 1 point
    Dunno. We used an early generation Alexa with a Panavision-modified 1960s Super Baltar at T/2. It turned out alright. The critical key is triple-wick candles and hiding additional tea-lights in the right places. Jarin
  35. 1 point
    Modern Inventors? As in still alive? Garrett Brown. The steadicam! Johan Hellsten. The easy rig! Howard Preston. Someone beat me to it, but nonetheless on the list! Mark Roberts Motion Control. The Bolt Arm! George Lucas. Digital editing. 3D animation. Digital cameras. Did he actually invent these? Not entirely, but he ushered these tools into existence the same way Steve Jobs ushered in the iPod, iPhone, etc.
  36. 1 point
    Archive viewing on the Steenbeck for a motorsport feature. The commute for this job was quite short and it was a 10am call as well. Jackie Stewart in his pomp, arriving for the German Grand Prix, Nürburgring, 1971
  37. 1 point
    Jaakko patented his optical printer in 1974 so it’s more like 45 years.
  38. 1 point
    Bill Tondreau - motion control pioneer. His Kuper Controls software is still used on set today, even though its last version was released in 2004. Retired (I think). Jaakko Kurhi - in business as JK Camera Engineering for 30 years (or more?). Maker of Bolex accessories for animation, low cost optical printers for students and professionals, and all kinds of other gizmos for analog animation and VFX. Retired (but still tinkering).
  39. 1 point
    XAVC Class 300 feels pretty comparable to standard ProRes 422 in terms of overall image quality. I haven’t compared them side by side, but I feel like I’ve encountered compression artefacts at similar levels from both. ProRes 422 HQ I’d put a bit above standard XAVC. Both are mastering codecs though, and far above a delivery codec like H.264. ProRes is self-contained and drag-and-drop though, which makes it VASTLY easier to deal with in post than XAVC.
  40. 1 point
    To add to the info above, for 16mm it's necessary to do A & B rolls to make invisible cuts on your print. You can also add additional rolls for titles or even optical FX (Mike Jittlov's films are a good example of this). I am in the process of doing a little all-analog film right now, but I plan on getting the workprint scanned so that the sound can be handled digitally. Once the mix is done, we'll send a digital file to the lab to get an optical sound negative shot. Fewer and fewer labs make prints any more. Fotokem, Colorlab, and Spectra come to mind. Cinelab just recently stopped making prints, sadly. ...another option is to skip the whole process and shoot on an optical sound camera like an Auricon, on reversal film. Your camera original would be your release print!
  41. 1 point
    I'm not sure it's even that important, from the perspective of what it does, to be able to put a brand name on that. It's a tungsten halogen sealed beam array. P
  42. 1 point
    It could be a project to replace the magnetic heads by an optical one and feed an LED with linear signals. You’d need to install a very narrow slit between the LED and the film, maybe a microphotograph of a black line on a piece of sound recording film. The line should be clear, surrounded by opaque black, four tenths by 0.032" or so. Then you can try out photographic sound recording on the available Double-Eight stocks.
  43. 1 point
    HDR is useful for all kinds of film, not just intermediates. To be clear, HDR in the context of film scanning and HDR in the context of viewing platforms (screens) are different things. Unfortunately they use the same name, but they're basically unrelated. HDR in scanning works by taking two or three exposures of a frame and combining them into a single image that includes shadow and highlight detail, rather than favoring one over the other. It is very much useful for just about any type of film (camera original, intermediates, etc). Print sees the least benefit, but it can still be useful there. There are three advantages to HDR scans:
1) Increased dynamic range. If the sensor in the scanner has a limited dynamic range, a 2- or 3-flash scan will increase this range, sometimes dramatically. We've seen shadow details come out of dense Kodachrome that clients didn't even know were there.
2) Improved signal-to-noise ratio. If the sensor has issues with dense film (similar to a low-light situation with a digital cinema camera) where you'd see the sensor noise, an HDR scan eliminates this problem by doing a separate exposure for the dense film, with either more light or more exposure time to overcome the sensor's noise floor.
3) Increased bit depth. This will depend on the scanner and the image processing, but I can tell you definitively that in the ScanStation the standard bit depth is 10-bit. With 2-pass HDR it's 14-bit. In the Director, with 3-pass HDR it's 16-bit.
In the end, the file that comes out of the scanner is the same file format in the same color space as a standard dynamic range scan. The difference is how much more you can push and pull the image in grading, because the extremes now become more workable. BTW, with print, the increased bit depth can be really useful, especially if the print has some color fading. It gives you a bit more flexibility when grading, and we've found that we've been able to pull color out of some prints with HDR that we couldn't with an SDR scan. When the color is that marginal, every bit counts. It's less useful with a good print, since prints have their dynamic range limits baked in from when they're timed at the lab.
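The multi-flash merge described above can be sketched in a few lines. This is purely an illustrative exposure-fusion toy, not Lasergraphics' actual (proprietary, calibrated) pipeline: each pass's pixels are down-weighted where they are clipped or near the noise floor, converted to a radiance estimate by dividing by exposure, and averaged.

```python
import numpy as np

def merge_exposures(scans, exposure_times, saturation=0.95):
    """Combine linear-light scans of the same frame into one HDR estimate.
    Pixels near sensor saturation (or near black) in a given pass are
    down-weighted, so each region of the frame is effectively taken from
    the pass that exposed it best. Illustrative only."""
    num = np.zeros_like(scans[0], dtype=np.float64)
    den = np.zeros_like(scans[0], dtype=np.float64)
    for scan, t in zip(scans, exposure_times):
        s = scan.astype(np.float64)
        # Triangle weight: trust mid-tones, distrust clipped/noisy extremes.
        w = 1.0 - np.abs(2.0 * np.clip(s, 0.0, saturation) / saturation - 1.0)
        w = np.maximum(w, 1e-4)   # avoid divide-by-zero where all passes clip
        num += w * (s / t)        # radiance estimate from this pass
        den += w
    return num / den              # merged linear radiance

# Two simulated passes of one frame: a short "highlight" pass and a
# 4x longer "shadow" pass, both clipping at 1.0.
truth = np.array([0.02, 0.2, 0.8])     # true scene radiance
short = np.clip(truth * 1.0, 0, 1.0)   # shadows sit near the noise floor
long_ = np.clip(truth * 4.0, 0, 1.0)   # shadows clean, highlights clip
merged = merge_exposures([short, long_], [1.0, 4.0])  # recovers ~truth
```

The merged result stays in a normal linear image format; as the post says, what changes is how far the extremes can be pushed in grading.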
  44. 1 point
    A big weekend is ahead of me!
  45. 1 point
    CRI uses a limited number of color patches to give a rating. The newer TLCI measurement is more precise. Take a look at this article. In general, good LED lights are now "good enough", and the usability, features, and efficiency more than make up for the few imperfections in color rendition. If a light has a CRI above, say, 95, that's fine by me unless it is some kind of critical application. Anything under that, however, and I would not use it as a key light but only as an accent or background light. Yes, absolutely! There will always be a need for big, powerful lights... regardless of the technology! One reason is lighting large areas, at night for example. Other times you need a high T-stop for depth, or high frame rates, or you need to compete against the sun. Another reason is that you need to work at a certain light level. Imagine that you are lighting a sound stage, so there is no ambient light, and your brightest light is a 50w LED. You then use it to simulate sunlight... but now this 50w light has got to be the absolute brightest light in the scene. This means that everything else has to be much darker, because you need the contrast. In this theoretical scenario, can you imagine how dark the studio will be? A computer screen at normal brightness would light up the room! That is why you always need big, powerful lights. LED does not alleviate that need, it just makes it more power-efficient to get a lot of firepower. All of them... Lower heat, higher efficiency... And with the fixtures that can dim without color shift, you can dial in any color temperature or white balance at the press of a button. This is just the beginning, as I think we will see some wild features in the coming years! Portable video walls? Moving lights? The cinema industry can take a lot of inspiration from stage lighting technology. As long as the CRI (or TLCI...) is good.
Tungsten now has fewer uses and is becoming a low-budget or specialty option, even though some DPs still like to use Maxi Brutes or arrays of tungsten lights. Personally I have rarely used tungsten lights over the past few years, except on sound stages where you need many units and it is still more cost-effective. On all other shoots, in my experience, tungsten lights have been almost completely phased out. They still remain (and should remain forever) an available option. One advantage of tungsten is that the color spectrum is excellent, and the same regardless of the brand of the bulb or whether it's a cheap bulb. The fixtures are much cheaper, too, and they do not become obsolete. I have heard of a new tungsten technology from MIT that would be more power-efficient than LED, but that was a while ago.
  46. 1 point
    Skylight is cooler than direct sunlight — you can see that with your own eyes: the sky is blue. You mean you've never noticed the colder shadows near sunset on a clear day? The French Plantation dinner scene in Apocalypse Now Redux is an example: the people not being hit by the direct setting sun or its bounced reflection are lit with a whiter, cooler light.
  47. 1 point
    If a camera has been converted to S35 (which should be noted on the camera or lens mount) it will have had the lens mount re-centred, but just because the gate is full aperture doesn't necessarily mean it's a S35 camera. At the last rental house I worked for, I think all of the 35-3s we had were fitted with full aperture gates, even though most of them were Normal 35. I was told this was because producers liked to have the ability to adjust framing when Cineon scanning and digital effects became more common in the 90s, and the soundtrack area could be masked out later if required. I think only one of the half-dozen 35-3 cameras in the fleet was converted to Super 35. It wasn't a good camera to convert because the viewfinder didn't quite cover the enlarged frame, even with modifications to the ground glass holder and prism baffles. I believe P&S Technik did our conversion, which basically involved fitting an offset PL mount, modifying the ground glass holder and opening the viewfinder as much as possible to cover the new frame. The gate was already full aperture. I have no idea about contact printers or whether they can mask the soundtrack area, but it wasn't a problem having full aperture gates in N35 cameras shooting anamorphic here in Australia. Every Scope ground glass I've ever seen is centred for N35, I don't see any advantage to shooting anamorphic in Super 35, just the hassle of having to mark a custom ground glass. The extracted 1.2:1 frame is no bigger, since with Scope the frame height is the restricting factor not the width, and a scanner will scan the full gate regardless of where your scope frame sits within that. So why use Super 35?
  48. 1 point
    I'm shooting an indie film called "Love Witch" for director Anna Biller, who has posted on this forum in the past about classic studio cinematography. We met back in film school at CalArts and I shot a short film for her in the mid-1990s in 16mm in the style of an old Technicolor movie. She asked me to shoot her latest feature in a similar hard-light style, modeled somewhat on 50's-60's color movies such as "Marnie". Anna is also doing the production design, costumes, and later, the editing. We are shooting in standard 4-perf 35mm 1.85 on an Arricam ST and plan on a photochemical finish, and then a transfer to digital from a timed IP. FotoKem is handling the processing and HD dailies (to ProRes 422 LT). I'm shooting most of the movie on the slowest tungsten stock available, Kodak Vision-3 200T, rated at 100 ASA in order to get the printer lights up higher for more saturation and contrast. This means I need to get up to 100 foot-candles of key light just to achieve an f/2.8. For day interior scenes on stage, that's a lot of light and a lot of heat. We are using mostly Zeiss Super Speeds and I'm averaging near an f/2.8 for everything inside. For a couple of interior locations where I have to balance to daylight and don't have enough light to use an 85 filter (and would thus end up with an effective 64 ASA), I'm switching to Vision-3 250D rated at 125 ASA. With HMI lighting, I can get to an f/4 a little more easily, but it's easier to do this old-school hard lighting style with the tungsten fresnels. We're about halfway through the shoot so far. We spent two weeks in a warehouse converted to stage space in North Hollywood shooting on sets, then a week outdoors in a park, and then spent last week at the old Herald Examiner building using some of their spaces. I won't be able to post any images for a couple of months at least, so when that happens, I can discuss the technical aspects more clearly.
Ideally, the best stock for this Technicolor look would have been the EXR 100T that Kodak made a decade ago, printed to Vision Premier 2393, also obsolete now. Or maybe the Fuji Vivid stocks, though I think they only made 250D and 500T in that line, and I would have needed a 100T or 250T version. Anyway, the choices have been reduced to Vision-3 negative and regular Vision print stock. I'm finding Vision-3 200T is pretty sharp, so I'm using a lot of diffusion filters to knock that back and get a more glamorous look, which is fun. I'm finding that I need a direct 2K (Mole-Richardson stage Junior) at full flood about 15'-20' away to get an f/2.8 key; for closer work, a 1K or 650w Tweenie is bright enough, again, all direct.
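The "100 foot-candles for f/2.8 at 100 ASA" figure above can be sanity-checked with the standard incident-meter relation, fc ≈ 25 × f² / (ISO × t). The constant 25 is a common meter calibration value (real meters vary slightly), so treat the numbers as approximate:

```python
# Incident-light exposure relation: fc ~ 25 * f^2 / (ISO * t).
# The constant 25 is a typical incident-meter calibration value;
# actual meters differ slightly, so results are approximate.

def required_footcandles(f_stop, iso, shutter_seconds):
    return 25.0 * f_stop ** 2 / (iso * shutter_seconds)

t = 1.0 / 50.0  # 24 fps with a 180-degree shutter

# Vision-3 200T rated at 100 ASA, aiming for f/2.8:
fc_rated = required_footcandles(2.8, 100, t)   # ~98 fc, i.e. about 100

# The same stop at the stock's native 200 ASA would need half the light:
fc_native = required_footcandles(2.8, 200, t)  # ~49 fc
```

That one-stop overexposure from rating the stock at 100 ASA is exactly what doubles the light requirement on set.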
  49. 1 point
    For each location he has to say the entire speech from far to near, and then the editor uses the section he wants. But first, pre-record the speech and have him sync himself to it by playing it back through an earwig on every take. In post you'd use the single pre-recorded take over the entire commercial so the sound is continuous. It takes some practice for an actor to get used to lip-syncing to sound playing back through his ear. And actually, if this is the only sound, you could just play it over a speaker, but I think an earwig would be better, especially in a noisy location.
  50. 1 point
    There are many pages on the internet that can tell you about interlaced-scan video -- here is one example: http://www.kenstone.net/fcp_homepage/24p_in_FCP_nattress.html But I think the "video" look is more related to the higher motion sampling rate -- 60 times per second -- combined with a lack of a shutter interval, so no temporal gaps in the motion. This gives the moving image a "live" look, like it is happening right now in front of the camera, sort of fluid-looking and a little bit smeary. 24 fps with a half-shutter creates more of a steppy, strobing motion. It's not a question of good or bad; obviously it could be argued that it would be better to have a higher sampling rate for motion. It's mainly a matter of conditioning. I have a theory that the more "hyper-real" the process becomes, the more it makes the fakery of fiction look obvious: sets look like sets, costumes look like costumes, and actors look like actors. This is one reason why these more immersive processes work well for IMAX-type nature documentaries, where everything IS real in front of the camera. It suggests that one solution to high frame rate / high resolution / high dynamic range / 3D processes will partly just be to be more perfectionist about what goes in front of the camera. But it does imply that 24 fps has the effect of enabling that "willing suspension of disbelief" that we talk about, by giving everything a certain motion cadence that moves it away from the strictly realistic, like a filter over reality.