
Stephen Baldassarre

Basic Member
  • Posts

    93
  • Joined

  • Last visited

Everything posted by Stephen Baldassarre

  1. I can't explain that either, especially GoPros. Their image looks like standard def and the lens distortion is nauseating. Sure, but that only defines digital output ranges. What actually gets captured can vary enormously, and you can't go directly from something with color/contrast ranges approaching Kodak 7217 to a Rec.709 output without doing something to rein in said image, or you wind up with a very fake-looking 8-stop palette like Mr. Purcell got (clipped Y and Cb/Cr channels, no knee). How is it a toy? Just to clarify, if it DIDN'T have professional audio capabilities, no HDMI or other options, it would have been a more serious camera? That's exactly what Joe Rubenstein (the creator) wanted. I pushed for that as well. It was the Kickstarter backers that insisted on adding in a bunch of other stuff, which delayed production and greatly increased the cost of the camera. They did not pass that added cost on to the customers; they just wound up selling the cameras for close to their cost. It beats the D16 in precisely one area: price. Having used both, I might accept a BMPCC if somebody gave it to me, but I wouldn't spend money on one. I'd like to see some of your D16 DNGs. I can only imagine whoever was shooting didn't know what they were doing and/or you have an issue with your workflow. True. While most of the camera was made from off-the-shelf parts, assembled in Canada, the FPGA and chassis were custom-built in China, which I believe is what was giving them grief. Switching manufacturers would mean almost starting from scratch. I know the prototype cost about $1,500, but that was before they added in all that other stuff. Seriously, just the HDMI port complicated things SO much! They originally weren't going to have one, but the backers griped. So they added HDMI, which required another daughter card and a bigger processor, which added power consumption and heat, required a re-designed FPGA, which all together required a fan, which added more power consumption. 
They weren't going to have audio capture originally, but the backers insisted. Then they were going to put in crap audio capabilities like all cheap cameras do, but I (along with a few others) said "I'd rather it not have audio because I'll never use that." They eventually decided to add pro quality, with good op-amps, phantom power and analogue gain. It has 24-bit, 96kHz-capable ADCs as well, near top of the line. Does ANY sub-$10,000 camera have that? The audio also added to the FPGA design and another daughter card. By the time they were done implementing all the features people wanted (they held firm on no compression, thankfully), the chassis went from a single board with a sensor daughter card to being jam-packed with stuff. Then they had to write the firmware, which also added delays and cost, but they never budged on the price. Got that right! I don't know WHAT they were thinking. See, I don't think somebody should have to change their project to suit the camera. I don't want to be forced to avoid all but subtle movement because the camera won't take it. I don't want to be forced to avoid fine details, to use lighting that doesn't emit IR etc. because the manufacturer wanted to keep the price under $1,000. A proper optical block costs almost $400 in that size, so they just skipped it.
  2. I have to agree with that. The downside is now everybody with $1,000 claims to be a professional "film" maker when most don't even deserve to call themselves video makers. Though if I were to do a run & gun documentary, I still don't think you can beat a 16mm camera in some respects. There's no boot/load time, no waiting for auto-focus etc. You just push the button the moment you see some action and know you have SOMETHING. I was on a motocross doc in the mountains on 2x 400' loads of 7201 with a crew of three (director, camera op, me on audio) and a CP-16. We never missed a thing; get a shot, run through the woods to the other side of the track, get them coming the other way, go back to the beginning for an interview, catch an accident as it happens, all without leaving the camera running (makes editing easy).
  3. You have to remember that while new CCDs were still being developed, efforts to improve the technology itself pretty much stopped 20 years ago to give CMOS tech more effort. By the time CMOS chips started appearing in things like the Alexa (which initially had a mechanical shutter), the differences in performance for large-pixel sensors were minimal. However, CCDs can still generally beat CMOS in small-pixel sensors, but price matters more in that category. Case in point, I hacked a CCD camera with 1.4µm pixels to capture raw data and got a 10-stop range out of it (8 without the hack). You'd be lucky to get an 8-stop range out of a CMOS camera of similar pixel size without HDR or DNR trickery. Did I mention almost all CMOS chips have DNR built into them (and often more in the camera processor)? It may hide noise, but it introduces other artifacts. I'm sorry to hear that. The camera itself does almost no processing. It pretty much just converts the 16-bit linear data to 12-bit Gamma-1 and stores it. The rest depends on who's using/grading it. It's no coincidence that the Kodak sensor in the D16 has similar performance to Kodak 7217. The gamut is way beyond what most software/equipment handles by default, so it needs to be converted to Rec.709 for video release, to XYZ G2.6 for DCP or back to 16-bit linear for film-out. It seems most D16 shooters are self-taught video shooters using bad lenses, bad lighting/exposure technique and don't understand the raw workflow. It took me very little effort to get a natural look using freeware. I know another guy (professional video technician) who independently developed a similar workflow and he also got great results. Not in my experience, though it doesn't have DNR built into it like CMOS cameras. If you want to crank up the gain, you have to apply the DNR yourself. As for the highlight clipping and latitude, that's an exposure/workflow issue. 
I found it has a 12-stop range and holds up to heavy grading quite well, though I really hate when people try to make the look of a project in the edit regardless of what camera they use. $$$. The BMPCC was designed for wannabe film shooters while the D16 was designed for actual film shooters. The Digital Bolex team said up front that they were making a camera for a very small niche market and actually got a better response than expected. They mentioned the color space up front and that the viewfinder (even an external one) should not be used to judge exposure or white balance, just like on a film camera. They can't be held responsible for people using the technology incorrectly. They went out of business not from a lack of sales but because manufacturing costs had more than doubled since they started, they had issues with one of the manufacturers, and they knew they wouldn't survive a price hike. They could either become like everybody else, which they didn't want to do, or they could quit while they were ahead. I also suspect they hurt themselves with the hipster/retro marketing. I was actually seriously considering getting a BMPCC, despite my gripes, just because of the price. After considering the rig I'd have to make to handle it, adding a proper optical block etc., it would have cost over $1,500 but still have worse color and rolling shutter than my $800 Canon. I can't stand rolling shutter. Even in multi-million dollar productions, that artifact just screams "cheap" to me.
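A rough sketch of why the 16-bit-linear to 12-bit-gamma conversion mentioned above loses so little: a power-law encode spends its codes where the eye needs them, so shadow steps that straight truncation would merge stay distinct. The `gamma=2.2` exponent and the function name here are illustrative assumptions, not the D16's actual documented "Gamma-1" curve:

```python
import numpy as np

def linear_to_gamma12(linear16, gamma=2.2):
    """Encode 16-bit linear sensor values into 12-bit gamma space.

    gamma=2.2 is an illustrative choice; the camera's real transfer
    curve is not documented here.
    """
    norm = linear16.astype(np.float64) / 65535.0       # 0..1 linear light
    encoded = norm ** (1.0 / gamma)                    # power-law encode
    return np.round(encoded * 4095).astype(np.uint16)  # quantize to 12 bits

# Two deep-shadow values one stop apart keep clearly separated codes
# after encoding; truncating to the top 12 bits would nearly merge them.
shadows = np.array([256, 512], dtype=np.uint16)
print(linear_to_gamma12(shadows))
```

The same idea, with a carefully measured curve instead of a plain power law, is how most raw workflows squeeze wide-latitude sensor data into smaller files.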
  4. Well, they haven't given resolution much priority because there's so much real estate on 35mm. The main priority has been reducing grain, which is basically done by making the emulsion extremely thick. That allows for more light-sensitive volume so you can get away with smaller grains for a given ISO, at the cost of more light scattering through said emulsion. Optimizing for 16mm would basically entail thinner emulsions to get a sharper image while sticking to lower ISOs to minimize grain. Like, 5203 is 50D but the MTF is 50% at 70 lp/mm. In 16mm with optimal lenses, the contrast is 50% at 700 lines. Now if you had a thinner 16D emulsion, you could get more like 160 lp/mm but with less grain at the same time. Plus, the lower speed would make it easier to get in the sweet spot of the lens without an ND filter. When Vision2 was current, 50D wasn't sharp enough, 200T was too grainy, so I used 7212 rated at 50 ISO for everything as a compromise. Even then, emulsions were getting fairly thick. Now I'm forced to use 7213, which has about the same grain as 7212 but lower res. Cost is the sole reason CMOS won that war. CMOS was vastly inferior but kept getting better because that's where all the R&D has been going. A basic S16mm-sized CCD costs about $400. Enlarge that to S35-sized and it's $2,200, not including the external amplifiers, ADC etc. Now if you have 3x of them (not to mention the prism)... Consider a 2/3" (regular 16mm sized) 3-CCD camera costs at least $23,000 without a lens. Compare that to CMOS, which is about $80-$100 for S16 and $600 (though some are more) or so in S35. The amplifiers and ADCs are built into the CMOS chip, as well as a lot of the image processing, which not only makes camera manufacturing cheaper but makes engineering easier as well. A 3-CCD S35 camera would be unbelievably large and expensive, larger than any 35mm film camera. So, single CMOS it is. 
The Digital Bolex, when fitted with a good quality lens (which nobody posting online examples seemed to use), was by far the most film-like video camera I've ever used, and it was CCD-based. Great color, global shutter, 11-12 stop latitude when handled properly. Water under the bridge at this point. Hopefully, when the current technology matures, cameras will go back to having global shutters again and color/dynamic range will take precedence over cheated ISOs and pixel count. In my experience, most video cameras are natively 160-400 ISO, Alexa being an exception due to its enormous pixels. Looking back at some of my earlier digital narratives, I did myself and my clients an enormous disservice by chasing a "film-like" look on video equipment. By just letting video be video, like I did later, I got much better images that ultimately look more professional even now. YES, and the most disturbing artifact, which seems to have only appeared in the last ten years or so, is when light sources of certain colors are within a scene, they make the areas around them DARKER. Police car and dance club strobes are a great example... a blue light flashes and everything reflective around it turns dark blue rather than bright blue. A major problem with single-chip design is that the green channel only has 1/2 the sensor resolution and red/blue are 1/4. You can interpolate the values on the fly at low quality, like most cameras do, or you can do it as an offline process with much greater quality, but you can't restore information that was never captured. That is the sole reason the Alexa has a 2.8K area: so it can capture more color information and then be resized to 1920. It still isn't a perfect solution. All single-chip cameras allow at least some alias distortion in order to get the highest resolution possible. They merely hope you won't notice, particularly with cheap cameras, which tend to lack OLPFs (which is why they look sharper than pro cameras). 
In order to avoid all artifacts, the optical resolution must be restricted to 1/4 the sensor's resolution. That means it would take an 8K sensor, optically limited to 2K, to avoid alias distortion. Of course, 8K on an S35 sensor means ~3µm pixels, or roughly 1/7th the native sensitivity and dynamic range of Alexa, as well as 49x the data (more processing, more storage, worse rolling shutter). A 3-chip design can capture almost its full resolution without alias distortion and no interpolation/resizing is necessary. While this conversation is purely academic, there is a lot of merit to 3-chip design, which is why it's so important for live production. The Thomson Viper was a raw 3-CCD camera and I was amazed by its quality, though still not a substitute for 35mm IMO.
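The pixel-pitch arithmetic above is easy to check. The exact figures assumed here (an S35 active width of ~24.9 mm and an Alexa photosite pitch of ~8.25 µm) are approximations for illustration:

```python
# Rough check of the pixel-pitch arithmetic in the post.
# Assumptions (approximate): S35 active width ~24.9 mm,
# ARRI Alexa photosite pitch ~8.25 um.
s35_width_mm = 24.9
pixels_across = 8192            # "8K"
alexa_pitch_um = 8.25

pitch_um = s35_width_mm * 1000 / pixels_across   # works out to ~3 um
area_ratio = (alexa_pitch_um / pitch_um) ** 2    # light gathered per photosite

print(f"8K pitch on S35: {pitch_um:.2f} um")
print(f"Alexa photosite gathers ~{area_ratio:.0f}x the light")
```

The area ratio lands near 7x, which is consistent with the "roughly 1/7th the native sensitivity" figure in the post.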
  5. To be fair, I said "hot-rodded" DSLR. The color is very strange, it has that overt sharpness to it and still has a rolling shutter. I wasn't intending on starting a war. I wouldn't even try, yet many people swear they are absolutely amazing. They have poor IR handling, no anti-aliasing filter, it's virtually impossible to get accurate color out of them (spending hours correcting in post doesn't count), rolling shutter etc., and when you rig up everything it takes to handle it comfortably, it isn't cheaper than the competition anymore. I actually seriously considered a BMPCC for a while, despite being really disappointed in the original BMCC, and found it unusable without major modifications and constraints on how to shoot with it. Plus, they don't have nearly the ISO or dynamic range they claim. I can only imagine they assume the user will apply noise reduction in post to rescue shadow information. On that note, I have not used a single piece of BM equipment that was reliable. Again, not trying to start a war, just going on my own experience. Compare that to, say, a run-of-the-mill Canon or Panasonic that does what it claims and works every time. I ultimately decided not to own a semi-pro video camera because there weren't any in my range that didn't irritate the snot out of me. That wasn't the case 15 years ago, strangely. Single-chip tech is every bit as capable of being film-like regardless of being CCD or CMOS. The two main problems are 1) the best dyes are patented and 2) lesser dyes allow increased ISO at the cost of color purity. As for dynamic range of CMOS exceeding CCD, that's done by artificial means: noise cancellation and other image-processing tricks. Now Sony has recently released a CMOS chip with analogue memory, like CCDs have, and IMO it is the best of both worlds. It has fairly good color science, native global shutter and larger pixel area like CCDs, but at the cost of CMOS. Sadly, there aren't any conventional cameras using this technology yet. 
There's also quantum film right around the corner, which looks really promising, but I suspect it will wind up being an excuse to cram even more pixels into an unreasonably small area. Check out the MTFs. Earlier T-grain films resolve quite well, but as emulsions kept getting thicker with each generation, the MTF drops more and more abruptly above 20 c/mm. I suspect a lot of that has to do with the sharpening effect of modern lens coatings and other processes. The resolution isn't greater, but there's a little "bump" in the MTF curve of a lot of film stocks before dropping like a rock that adds to the perceived sharpness. I agree. I spoke to a Kodak engineer a few years ago and she said they stopped all R&D. Also, I would love to see a film DESIGNED for 16mm rather than a stock designed for 35mm and merely slit for 16mm. When I spoke to the engineer, she said there were a few things they could do to make 16mm better, but since 35mm was the biggest seller, they couldn't justify the cost of a purpose-made 16mm stock.
  6. I wouldn't even go that far. They essentially changed the lens mount and modded a PAL chip set for 24p capture. 3-CCD cameras are designed to correct for chromatic aberrations via sensor placement, while cinema lenses do it internally. Lucas, of course, said that the new "superior" format showed off the "shortcomings" of cinema lenses. He also neglected to realize the image quality of a lens changes depending on the frame size. Like, a lens designed for 35mm frames will yield a better image on a 24mm-wide frame than it would on a 10mm-wide frame, while a similarly priced lens made for a 10mm-wide frame would be sharper but won't cover a 24mm-wide frame. I mean, it DOES require 6x as much glass to cover, say, S35 as it does 2/3". I think it's good people are willing to try new technology and techniques, but it shouldn't be reported by somebody that doesn't understand it (as mainstream news shows us time and time again)... or maybe he understood but was spreading propaganda. Yep. I remember the first feature on which I worked that was shot in HD video. It took special set design, special lighting, special clothing, art direction etc. to make sure everything fit within the limited palette of HD video. It probably cost MORE to shoot on video in the end. I also remember when the first P2-based camera hit the market and everybody was excited to not have to deal with tape any more... yeah, but now you have to have somebody follow you everywhere with a MacBook to off-load clips every few minutes. They solved all that by recording everything at 24Mbps, adding a lot more image processing and improving automatic lenses, so now everybody IS a Hollywood cinematographer at the low, low price of $800! Am I the only one that sees Red as hot-rodded DSLRs? While I'm here, I can't seem to convince anybody that Blackmagic cameras are cheap mirrorless cams with pro codecs and some added features. 
I can tell you, having worked with both film and REAL professional video cameras, BMs aren't even close. Anyway, I think Hollywood went primarily digital before 2016, but most IMAX, like conventional theaters, are 2K video, so who cares? I agree. I always thought the professional CCD-based cameras were quite good. My heart sank when we replaced our CCD field cameras with large, single-chip CMOS cameras (for that more "cinemaesque" look that's far less film-like in every regard except DoF IMO). Anyway, the cameras in Avatar and E2/E3 hardly had any screen time. What time they did have was heavily manipulated. They probably could have shot on DigiBeta and hardly anybody would know the difference. A lot less of Apocalypto was shot on Genesis than they admit (they used many cameras) and more of it was film than they admit. Nevertheless, I thought the Genesis was a good camera for some things, but most people seemed to use it as an excuse to not light scenes well, which leads to a very "video-like" image. I think the use of digital intermediates also really helped push things in that direction. Producers want to be able to manipulate stuff; CG is not the last resort it originally was but the first resort, second resort, third resort, last resort, and if it's completely implausible to use CG, they put the release on hold till it becomes possible to use CG. In the meantime, scanning film is expensive, and the optics in the scanners, regardless of capture resolution, leave their own imprint on the image, like an optical film printer degrades the image just by adding another lens to the path. Film is also much lower resolution than it used to be. Kodak became so obsessed with reducing grain (done by making emulsions very thick, allowing light to scatter more within them) that 35mm film went from about 5K res to maybe 3K.
  7. Oy, I had no intention of seeing Rogue One, but now I'm going to have to rent it just to get some kind of clue. I've heard it's way better than TFA and I've heard it's way worse. I thought TFA was well-done for the most part, visually and aurally speaking anyway. I agree that the script missed a lot of opportunities by trying too hard to recreate ANH, and despite what everybody says about it, there's a lot of CG for the sake of CG. I'm fine with effects when they serve a purpose, but it seems like about half the movie had some kind of "enhancement" that didn't really need to be there. I have no intention of watching the Han Solo movie either. That reminds me; I really need to get my laserdisc box set sold before Disney makes everybody sick of Star Wars.
  8. I wouldn't even try to do that with a motorized zoom. Smooth starts are always a problem. I prefer to dolly when possible rather than zoom (I don't even have a zoom lens for my 35mm camera and the zoom for my S16 camera stays at home) but if I must do a zoom (like for 2/3" doc video work), it's via purely mechanical control.
  9. See, that's what I feel (though I'm starting to find that not many people know what a "medium shot" etc. is anymore). I was an electrician on a feature a few years ago and the DP was really angry that zooms and crops were done in post (which he only discovered at the premiere) rather than being asked to shoot that way on-set. Just because there are more pixels than the output medium doesn't mean the glass has the resolving capability either, so even something like a 1.5:1 crop can yield a softer look than having used a 1.5x longer lens, despite being shot with 2x the output resolution. For the last few years, it's been 95% documentary work. Before that, it was narratives and commercials. Digital zooms/pans are usually fairly obvious, depending on the resizing algorithm used. You can see the pixels changing. Even with something like stabilization, you can get rid of the shake but not the motion blur and rolling-shutter artifacts from the shake. It's better to shoot a stable shot. I imagine a lot of stuff being shot on small, lightweight equipment is part of the problem there, because people do some amazing things with hand-held 35mm or 2/3" 3-CCD cameras that needed no tweaking in post. While we're at it, I know a lot of DPs are pissed about all the stupid things people do in grading, often to the point where it doesn't look real anymore, certainly not what the DP intended. I understand correcting minor exposure errors or differences in color between one lens and another, but not changing the entire look of the movie without the DP's involvement.
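The crop-vs-longer-lens point can be illustrated with made-up but plausible numbers (the 80 lp/mm lens limit and 24mm sensor width below are assumptions, not measurements): cropping discards sensor width, so the lens's line-pair limit spans fewer delivered lines.

```python
# Why a post crop can look softer than using a longer lens (illustrative).
# Assume the lens's usable limit is ~80 lp/mm on a 24 mm-wide sensor.
lens_lpmm = 80.0
sensor_width_mm = 24.0
crop = 1.5

full_frame_lines = 2 * lens_lpmm * sensor_width_mm        # line pairs -> lines
cropped_lines = 2 * lens_lpmm * (sensor_width_mm / crop)  # crop uses less glass-limited width

print(f"full frame: {full_frame_lines:.0f} lines across")
print(f"1.5:1 crop: {cropped_lines:.0f} lines across")
# A 1.5x longer lens keeps the full sensor width in play, so the delivered
# image retains the full line count regardless of output pixel dimensions.
```

Extra pixels beyond what the glass resolves don't help the cropped version, which is the post's point.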
  10. This is something most people seem to overlook. Most video cameras don't do deep blue or deep red well. Film is almost as good as the human eye in that regard, Alexa is almost as good as film, Kodak's CCDs are right with it and Sony is third IMO, while everybody else falls behind them. Many cameras have weak color filters to cheat up the ISO, so the saturation is boosted in the camera's processor, which in turn creates strange artifacts under certain conditions. On top of that, virtually ALL current cameras have rolling shutter, which puts them in the "not good enough" category. I really had high hopes for the Digital Bolex, which is the only video camera I was excited to try in a long time, but it got ignored because of the price tag. The uncompressed capture, global shutter and accurate color put it way ahead of the competition, even against higher-priced cameras. Everybody doing microcinema seems to be obsessed with high ISOs and pixel count, yet Alexa is lower res than a lot of the current "hot" items and it's on top in terms of popularity in Hollywood. The reason for that is it is almost as good as film in terms of contrast and color. I think that's also the reason 16mm is still alive. It allows that same great color and latitude even though it's lower res than 35mm, at significant cost savings of course. Got that right. They KNOW the major flicks are terrible, recycled schlock. Selling tickets is all that counts. P.S. I know my next film won't make back the money I spend on it, but it WILL be film. I don't care, I just want to do it, despite potentially being able to turn a profit by shooting on video.
  11. I keep reading that the main reason many people want to shoot UHD or 4K is for re-framing in post. Now, I would like to change SOMETHING about just about every video/film I produce, but framing is seldom one of them. Why is reframing so important that it warrants 4x the storage and processing power? I could only conclude that a lot of DPs have a hard time making decisions when put on the spot or that their viewfinders are inaccurate. Am I missing something? Not trying to be rude, just genuinely curious.
  12. I know this is an ancient post but most of you guys are still here. A well-maintained, professional 1/4" tape recorder is about 25Hz-25kHz at the standard 15 in/s and has about a 70dB dynamic range without noise reduction. Add Dolby SR and you get a perceived dynamic range of about 94dB, which is better than many CDs. Furthermore, you don't have ringing or sub-Nyquist alias distortion like brick-walled digital formats do, so even though CD technically has less inherent noise and distortion, it has other artifacts that can be hard on the ear. Many of the Mercury "Living Presence" classical albums were recorded on custom 35mm magnetic 3-track, which boasted an 80dB S/N ratio without Dolby! Many of those recordings from the 60s still sound amazing today. Anyway, to answer the original question, "it depends". Under ideal circumstances, 16mm optical can range from subsonic to about 6kHz, though I know a guy who made some customized optical heads that could do a little better. Most projectors, which were generally budget-oriented for the 16mm market, are about 200Hz-4kHz. That is largely restricted by cheap audio components and making a compromise between frequency response and noise. The narrower the slit, the better the frequency response (until you hit the diffraction limit) but also the more noise, since less light passes through the track at any given time. A brighter lamp can help make up the difference but only to an extent, as film grain becomes the limiting factor. Signal-to-noise ratio is 30-40dB, depending on who made the optical negative, quality of the positive etc. I just performed an experiment with using Dolby on 16mm, with mixed results. Dolby SR is totally useless, but Dolby A makes a substantial improvement to noise without sapping out too much high end. Another idea I've thought about was to put SMPTE timecode on the optical track and have a computer chase it. I'm not sure if a normal 16mm projector is stable enough to maintain good sync though. 
In the end, I have some 16mm prints that sound fairly good in my living room home theater while others are terrible. One of the things I've noticed is that people who try to compensate for the lack of top end wind up adding too much distortion. The mix should be bright but without excessive energy at any range.
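For reference, the dB figures in the post map onto digital bit depths via the standard ~6.02 dB-per-bit rule for ideal PCM quantization (a theoretical ceiling; real converters come in below it):

```python
def quantization_dr_db(bits):
    """Theoretical dynamic range of an ideal N-bit PCM quantizer, in dB."""
    return 6.02 * bits + 1.76

def db_to_bits(db):
    """Equivalent ideal bit depth for a given dynamic range in dB."""
    return (db - 1.76) / 6.02

print(f"16-bit CD:  {quantization_dr_db(16):.1f} dB")
print(f"24-bit ADC: {quantization_dr_db(24):.1f} dB")
# The ~94 dB perceived range of 15 in/s tape with Dolby SR is roughly
# equivalent to a 15-bit ideal converter:
print(f"tape + SR at ~94 dB ~= {db_to_bits(94):.1f} bits")
```

This is why 94 dB of perceived range genuinely rivals 16-bit CD playback in practice, as the post claims.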
  13. I think I know the answer to this, but I noticed the max density of Vision3 is notably greater than Vision2. Does that mean all that extra highlight latitude is lost in the print? Thank you.
  14. You can expect lower contrast but it should still render a usable image.
  15. Well, the LCL is about 4 cm and it DEFINITELY looks like a BA15D base, so it looks like there's a variety of options, including a 100-Watt ESR, which I think I'm going to try. Thank you, everybody.
  16. Front and rear projection are the same as far as spill is concerned. The advantage of rear projection is solely that you can put ND gels over the viewing side of the screen, which reduces light output but reduces spill by twice as much. The best thing to do is simply keep spill off of the screen as much as possible. Another option, if you use a glass-beaded (retroreflective) projection screen, is to project into a mirror (at a 45-degree angle) directly under the camera lens. That greatly increases the brightness of the image from the screen, but you are stuck with shooting from only one position.
  17. It's a 3" lens, but I can't say much else about it as I found it in a rubbish heap and there are no discernible markings. It's really old, dark brown, has a cotton-jacketed power cable and is made of cast aluminum. Anyway, I thought it would be perfect as an on-cam practical light for an upcoming shoot. ESP certainly LOOKS right, but I'll have to take some measurements. Thanks for the tip!
  18. DLPs tend to flicker more (especially single-chip projectors) since they work by rapidly oscillating tiny mirrors. Single-chip projectors have color wheels to filter the light, so colored strobing can occur. LED projectors, while often not as sharp, do better when rephotographed for that reason. Most projectors use tungsten/halogen lamps, so the image will look yellow if you are white-balanced to fluorescent overhead lighting. You may be able to electronically balance the projector to FL lighting, but most don't have sufficient wiggle room. Your best bet is to balance the FLs to match the projector via gels, or see if you can get a filter to match the projector's light to the FLs. Really though, your biggest problem is going to be getting the projected image bright enough and avoiding spill on the screen, which washes out the shadows. If you're using rear projection, you can put an ND filter over the screen to reduce the wash. What happens is, even though you may lose two stops of light from the projection, ambient light passes through the filter twice and is thus attenuated four stops.
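The two-stops-versus-four-stops claim is just double-pass geometry: the projected image crosses the filter once (from behind), while spill crosses it going in, reflects, and crosses it again coming out. Sketched with an assumed 2-stop (ND 0.6) filter:

```python
# Double-pass attenuation of an ND over a rear-projection screen (illustrative).
nd_stops = 2.0                 # assumed filter density, ~ND 0.6

projected_loss = nd_stops      # image from behind crosses the filter once
ambient_loss = 2 * nd_stops    # spill crosses in, reflects off the screen, crosses out

contrast_gain = ambient_loss - projected_loss
print(f"image down {projected_loss:.0f} stops, spill down {ambient_loss:.0f} stops")
print(f"net contrast improvement: {contrast_gain:.0f} stops "
      f"(~{2**contrast_gain:.0f}x less relative wash)")
```

So whatever density you choose, the spill always loses twice as many stops as the image does, and the difference is pure contrast gain.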
  19. Does anybody recognize this lamp? I'd like to get this ancient little inky working for a future project. It runs directly off of 110VAC. Thanks.
  20. I have been using a conventional projector with a telecine camera lens (which has a mirror and condenser) in place of the projector lens. Before I had the telecine lens, I was using a box I made between the projector and camera that had a 45-degree mirror and a 5" condenser lens. That did as good a job but was harder to line up the shot. http://www.gcmstudio.com/videoonly/wavemotioninterference_test.mp4 I found the rotating prism and screen on editors really degrade the image.
  21. Oh yes, I definitely plan to do that. If I can get away with JUST the 16.5 stop ND and use the iris for fine adjustments, so much the better. I have a Canon G20, which is natively about 50 ISO and I plan to shoot 7203, though film has WAY better latitude.
  22. You're not too late at all; the eclipse is in August. There's no practicing with a video camera though since there aren't any eclipses between now and then (and it's only a 2-minute event). I shot one on video a few years ago with multiple NDs and two polarizers. It was a disaster. I did get a 16.5 stop ND filter for this one, which will help at least.
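As a sanity check on that 16.5-stop solar ND (the stop count is from the posts above; the conversion to optical-density notation is standard, each stop halving the light):

```python
import math

stops = 16.5
attenuation = 2 ** stops               # linear light-reduction factor
density = math.log10(attenuation)      # same filter in optical-density notation

print(f"{stops} stops = {attenuation:,.0f}x attenuation (ND {density:.1f})")
```

That works out to roughly a 90,000x reduction, i.e. about an ND 5.0, which is the density conventionally used for direct solar photography.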
  23. Wow, that's probably one of the best cameras out there! They may be clunky and heavy but the build quality is unmatched IMO.
  24. Yeah, LCDs don't flicker like DLPs, plasma screens etc. so no worries. Getting the exposure and color right will be tricky. I suggest doing some test shoots with a video camera rated at your target EI to get an idea of what to expect in terms of lighting etc. Of course, keep spill off the screen as much as possible.
  25. Adjusting the shutter won't help and can actually make it worse. The camera shutter still only opens 24 times per second; narrowing it to 144 degrees just means it spends more of each cycle closed. Shooting LCDs is pretty easy as long as you can deal with the color balance, and you don't have to worry about flicker. Daylight film does well for that; orange gels are OK, but the shadows tend to take on an orange cast and you lose a lot of output. Not many screens have enough adjustment to match tungsten lighting, so probably the best thing to do is use daylight film and 5,600K lighting. DLPs can be another matter. I've never tried to reshoot DLP video off of a screen, but they DO flicker a bit. I'd use a spot meter to make sure the contrast is OK. Projection especially has a tendency to look murky and washed-out when reshot. Rear projection usually looks better because you can put an ND filter over the screen. The projection may be two stops darker, but the wash from ambient light is four stops darker because it passes through the filter twice.
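The shutter-angle arithmetic behind the first point, at 24 fps (angles chosen purely for illustration): the frame rate fixes how often the shutter opens, and the angle only sets how long each opening lasts.

```python
# Per-frame exposure time for a given shutter angle at 24 fps.
def exposure_s(angle_deg, fps=24.0):
    """Seconds the shutter is open per frame: (angle/360) of one frame period."""
    return (angle_deg / 360.0) / fps

for angle in (180, 144, 90):
    t = exposure_s(angle)
    print(f"{angle:>3} deg: 1/{1 / t:.0f} s open per frame")
# A shorter open window samples less of the display's refresh cycle each
# frame, so any beat/flicker against the screen gets worse, not better.
```

At 180 degrees that is the familiar 1/48 s; narrowing to 144 degrees shortens it to 1/60 s without changing how often the shutter fires.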