
Stephen Baldassarre

Everything posted by Stephen Baldassarre

  1. This is sort of off-topic to cinema but related. I was hired to set up a church with a video streaming system. Since they were in a hurry to get up and running, I set up temporary lighting on their balcony going purely by instinct (and a light meter, of course). I have the entire stage lit within 1/2 stop using six Fresnels and it looks good on camera. I want to be a bit more "scientific" for the final version, where the lighting will be mounted to the ceiling and there will be eight fixtures total.
I did an experiment where I replicated my temp lighting in EASE Focus, which is speaker coverage software. I chose a speaker model that radiated (in the high frequencies) similarly to a Fresnel and adjusted the virtual test frequency till I found the one that most closely matched the lights' beams. See attached. The results are consistent with my practical observations, with measurements taken at eye level. The map is set to cover a 1-stop range, where bright red (none showing) is the equivalent of +1/2 stop, deep blue (none showing) is -1/2 stop, and green/yellow is on target. The 2nd pic is a quickie experiment for a possible lighting scenario; much better than the current temp lighting, with everything within 1/4 stop (but I probably can't fully trust it). Sound does not behave the same way as light, as similar as it may be in some respects.
Is there software out there that is DESIGNED for lighting? I feel like that would be a lot easier to build than speaker-placement software: choose a fixture type, beam width, power, etc., place it in the room, adjust the angles and barn doors, done.
BTW, the grid is 5' spacing. It's a pretty big stage. The 2nd "zone" in the 2nd pic accounts for the difference in floor elevation, as they sometimes have people talk in front of the stage. Thank you!
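Since I couldn't find dedicated lighting software, here's roughly what the math would look like if somebody wrote one. This is a minimal Python sketch, assuming inverse-square falloff and a crude Gaussian beam profile; the fixture positions, beam angle and candela figure are made-up placeholders, not my actual rig:

```python
import math

# Illustrative fixture list: ((x, y, z) position, aim point), in feet.
# Positions, beam angle and intensity are placeholders, not my real rig.
fixtures = [
    ((-10.0, -15.0, 18.0), (-10.0, 0.0, 5.0)),
    ((0.0, -15.0, 18.0), (0.0, 0.0, 5.0)),
    ((10.0, -15.0, 18.0), (10.0, 0.0, 5.0)),
]
HALF_ANGLE = math.radians(25.0)  # assumed Fresnel beam half-angle at flood
PEAK_CD = 20000.0                # assumed peak intensity, candela

def illuminance(p):
    """Footcandles at point p: inverse-square law times a Gaussian
    approximation of the beam's angular falloff, summed over fixtures."""
    total = 0.0
    for pos, aim in fixtures:
        v = [p[i] - pos[i] for i in range(3)]    # fixture -> sample point
        a = [aim[i] - pos[i] for i in range(3)]  # fixture -> aim point
        d = math.sqrt(sum(c * c for c in v))
        cosang = sum(x * y for x, y in zip(v, a)) / (
            d * math.sqrt(sum(c * c for c in a)))
        ang = math.acos(max(-1.0, min(1.0, cosang)))
        total += PEAK_CD * math.exp(-0.5 * (ang / HALF_ANGLE) ** 2) / (d * d)
    return total

def stops_off_target(p, target_fc):
    """Deviation from the target level in stops (what the color map shows)."""
    return math.log2(illuminance(p) / target_fc)

# Sample eye level (5 ft) on the same 5-foot grid as the EASE map:
target = illuminance((0.0, 0.0, 5.0))
for x in range(-10, 15, 5):
    print(x, round(stops_off_target((float(x), 0.0, 5.0), target), 2))
```

Swap in real photometric data (most fixture makers publish candela tables) and a finer grid, and you'd have a poor man's version of exactly what I was asking for.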
  2. I look forward to seeing some shots! Music videos have always been my favorite projects to shoot because you can get away with just about anything you want. If you can pull off some effect, great! If it doesn't work, it's part of the art form, so no big deal. It's like when I shot a music vid and luma-keyed in some ethereal junk in the background just because it was a plain black background otherwise. The luma key ate into some of the foreground but that just made it cooler!
  3. That looks cool! I love practical effects. One of my favorites for a music video was a shot that waved and distorted. Nobody believed I didn't use some computer plugin (and this was in the 90s), I just shot downward through a Pyrex pan with water in it.
  4. I found this explanation on a site that's largely dedicated to matte paintings https://nzpetesmatteshot.blogspot.com/2016/01/mattes-maketh-musical.html "any color that had more green than blue could not be reproduced. Green and yellow, for example, were missing in action. Yellow turned white, green turned blue-green (cyan). That's because, by definition, green and yellow have more green content than blue. (Yellow = green plus red.) Excess blue was poisoning those colors. Vlahos realized that if he could add just a little density to the green separation in those areas only when it was being used as a replacement for the blue separation, he could get those colors back. He needed a film record of the difference between blue and green, thus the Color Difference System's name. In a pencil-and-paper thought experiment, Vlahos tried every combination of the color separations, the original negative, and colored light to see if he could achieve that difference. The magic combination was the original negative sandwiched with the green separation, printed with blue light onto a third film. The positive and negative grey scales cancelled out, but where there was yellow in the foreground (for example) the negative was blue and the green separation was clear. The blue light passed through both films without being absorbed and exposed the third film where there was yellow in the original scene. The same logic produced a density in green and in all colors where there was more green than blue. This was the Color Difference Mask, which in combination with the green separation, became the Synthetic Blue Separation." I've learned so much about Hollywood effects from that site over the years. It's really worth a look if you haven't seen it already. It actually inspired me to do background (oil) paintings for a recent series of promos I shot instead of finding real backgrounds or making them with CGI. 
They weren't exactly 100% realistic but the cartoonish action of the video made the paintings work well in this case. Plus, it gave a unique look to the final product.
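Out of curiosity, I tried restating Vlahos's recipe digitally. Here's a rough numpy sketch of the idea, my own translation rather than the actual optical printing chain; the function returns just the synthetic blue record:

```python
import numpy as np

def synthetic_blue(rgb):
    """Digital restatement of the Color Difference System: wherever green
    exceeds blue, pull the blue record up to green so yellows and greens
    stop reading as 'screen'. Returns only the synthetic blue channel."""
    g, b = rgb[..., 1], rgb[..., 2]
    diff_mask = np.clip(g - b, 0.0, 1.0)  # density only where green > blue
    # b + max(0, g - b) is exactly max(b, g) for values in [0, 1]
    return np.clip(b + diff_mask, 0.0, 1.0)

# Pure yellow (1, 1, 0): the real blue record is 0 (it would turn white);
# the synthetic blue record comes back as 1, preserving the color.
yellow = np.array([[[1.0, 1.0, 0.0]]])
print(synthetic_blue(yellow))  # → [[1.]]
```

The grey-scale cancellation in the optical sandwich does the same job as that clip-to-zero step: only the blue/green difference survives onto the third film.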
  5. It sounds like you have the ideal rig already. It may take some experimentation, but you might try making a frame that bolts to a tripod and holds a high-grade front-surface mirror (so you can get away with a much smaller, thinner mirror), letting you put it almost right up to the camera.
  6. Why not do a foreground painting? Put a sheet of glass between the camera and shore, then paint what you want onto the glass. If you paint them so some real elements show through gaps in some cookies, it will have more "life" to it. There doesn't need to be a ton of detail so you can get it done very quickly. You could probably comp in a second pass of the painting with a little bit of motion, for the cookies right at the water line, bobbing with the current.
  7. Thanks for the links. If I was smart, I would have downloaded the entire page when I found it. There's at least enough information between the links you guys provided to refresh my memory on how it was done.
  8. Excellent, thank you! While I was hoping to find the static web page explaining the process in detail, that video goes into a lot better detail than most, including the often overlooked cheat for the blue separation. I thought there was another trick where a blue/green difference separation was created to use in place of the blue separation...or something like that. I recently regained interest in this process because I was doing a green screen comp where spill was really obvious on a VERY fuzzy, mostly gray sheep when it moved quickly. The green spill suppression wasn't as effective as simply replacing the green channel in this case and the change in color was barely noticeable. That got me to thinking about how to change the channel matrix to get better color in other situations, should the need arise. Thanks again!
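For what it's worth, the channel trick I used on the sheep looks something like this in numpy (my own quick version; the mix control is a refinement I'd add next time, not something I used on that job):

```python
import numpy as np

def replace_green(rgb, mix=1.0):
    """Spill fix by channel replacement: clamp green to blue wherever
    green exceeds it. mix=1.0 is full replacement; lower values blend
    some of the original green back in."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    new_g = mix * np.minimum(g, b) + (1.0 - mix) * g
    return np.stack([r, new_g, b], axis=-1)

# Gray fur with green spill (0.5, 0.7, 0.5) comes back neutral (0.5, 0.5, 0.5)
spilled = np.array([[[0.5, 0.7, 0.5]]])
print(replace_green(spilled))
```

On a mostly neutral subject like that sheep, the color shift is barely visible, which is why this beat the keyer's spill suppression in that shot.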
  9. Hello all; I found a web page years ago that explained the traditional photo-chemical blue screen process in brief, easily understood, but fairly complete detail. It talked about how a basic composite required something like 14 film elements, the issue with blue spill, and how a green/blue difference element needed to be created in place of a conventional blue separation for the foreground. I can't seem to find the article now (which had great illustrations as well). Does anybody know where it is, or if there is another resource that goes into that level of detail? There are hundreds of sites out there that talk about the process, but they're dumbed down to the point of not really representing how it was done (many say the green separation is simply used in place of the blue separation, which was rarely true). Thanks!
  10. Well, when you compare it to "Keylight", it's really very similar, though Keylight has the extra processes built into it. I just have a template set up for the process and I tweak it for different situations.
  11. Sorry if this is in the wrong forum or against the rules. If so, feel free to let me know and/or remove it. I noticed that there are tons of videos of people using either the built-in chroma keyer or 3rd-party plugins for green screen comps and the like, and none using the method I use, which I feel works much better and, best of all, is free. Let me know if you have any questions.
  12. *Cough* Is it $1,000? Does it have a proper optical block? Probably not.
  13. Ah, so, 2K on a 2/3" sensor (regular 16mm) is about as dense as you can get without sacrificing performance, this is true. As for the OLPF, you have to figure out where your compromises need to be. With a conventional single-sensor camera doing its demosaic process internally (usually bilinear, or nearest-neighbor in really cheap cameras), you'd have to limit the resolution to about 500 lines to avoid ALL aliasing (and that's what the first prosumer HD cameras did). However, you can "get away" with a less aggressive OLPF to allow for somewhat higher resolution while allowing a little aliasing that most people won't notice.
Now, move the demosaic process to a computer, where you have much more power and no longer need to be real-time. You can use a more sophisticated algorithm that can figure out where true fine details are by looking at contrast changes across all three color channels and adjust its averaging from one pixel to the next. In the attached image, we have an image that (I believe) was shot on film, scanned to digital and converted to a Bayer pattern (left) to show what the processor has to do. The middle image is a bilinear interpolation, which fills in missing color information by taking the average of adjacent pixels. Despite the source image being free of aliases, there's pronounced aliasing in the center image because the Bayer pattern essentially sub-samples the image. On the right is a more processor-intensive algorithm that can be performed on a computer.
So for a single-chip camera, one could use an OLPF that restricts the resolution to, say, 900 lines and have almost no aliasing upon output. A camera with no OLPF at all may resolve about 1000 lines with a really sharp lens, but any details above that get "folded" down into the sensor's resolution as aliases. The "sharpness" people perceive is not so much the extra 10% resolution, but false details that didn't exist in the real world.
One can also increase perceived sharpness by increasing the contrast in the 800-line range. Vision3 film is actually lower resolution than any of its recent predecessors, but people say it's sharper because the contrast increases around the 40 ln/mm range, or roughly 600-800 lines if you're shooting on 35mm. Vision1 could easily resolve 5K (as opposed to V3's 4K), but it had a more gradual roll-off in contrast leading up to that 5K, so it looked less sharp. Of course, if you push the contrast of the fine details too far, it looks like cheap video. Sorry, I'm sure that's way more complicated an explanation than you were hoping for.
My prototype will have an HDMI port, which will initially feed an inexpensive LCD panel. The preview will not be full resolution and possibly not even color, depending on how much processing overhead I have, but it will be enough to accurately frame the shot, and that's better than I can say about 90% of the optical viewfinders out there. One might use the same port to feed an EVF, but that is a later experiment.
Betacam is a true component system, roughly equivalent to 4:2:2. Almost any switcher could be converted to component by replacing the I/O cards. Betacam's main advantage is its component recording, so if you're going to use composite infrastructure, you would be in better shape sticking with 1". Considering the best practical performance you'd get out of a single-chip camera is 4:2:0, I still don't understand why this is important. You said yourself, we aren't producing DCPs, we're producing ATSC, Netflix and YouTube video.
That's the same time that I switched to digital, but I was using Y/C component before that. Less expensive than RGB or Y/Pb/Pr, but sharper than composite and no dot crawl. I couldn't afford "professional" preview monitors, so I got a pair of C64 monitors for $20! I still have a slightly modded one and use it with my 16mm HD film chain. :p
OK, so use +6dB gain then.
Either way, you have to live with some extra noise, or add DNR and live with lost details. 500 ISO is not even a 1/4-stop difference. Unless you're shooting on something like an Alexa, you're not going to get TRUE 800 ISO performance without compromise (I know what BM advertises; they're 400 ISO cameras). I find 400 is already difficult to shoot outdoors as the base ISO. You have to have a 3-stop ND just to get F16. If you want to open up to F8 or wider to get a sharper image, you'd better have a top-quality ND filter to avoid more IR contamination. Yet your main camera is a BMPCC?
There's nothing wrong with S16. You can get shallow DoF if you want, or long DoF, depending on your lenses. The image plane is large enough to get sharp images out of mediocre lenses, and really good lenses can be cheaper/more compact than 35mm lenses. Don't forget the "guard band": the actual film apertures are slightly smaller than the available area to avoid problems (like seeing the sound track or perfs) due to gate weave on that $5,000 counterfeit Indian projector head they used at the $1 theater.
A very common trend these days. I'm not putting words in your mouth, but I suspect most people do it as backlash against the 1/4" and 1/3" sensors previously found in prosumer devices. My budget-minded contemporaries complained endlessly about not having selective focus in those days, and we (me included) fought tooth and nail to get our lenses up to F2, which made for a pretty crappy image. In retrospect, I should have been aiming for optimal image quality rather than "film-like". Getting back on track, it's gotten to the point of being ridiculous, with product reviews being shot on full-frame DSLRs with long lenses at F1.4, ensuring only 1mm of said product is visible at a time. I suspect people will consider shallow DoF a very dated look in the future, just like how the advent of cheap digital reverbs and keyboards makes a lot of 80s songs sound dated now.
I admit ENG cameras can be a bit cumbersome, particularly the support. However, I don't think I've ever heard somebody say "Aw man, I can't fit my camera in my pocket, better cancel the shoot!" My prototype camera should be about the size of a GL1, so it should fit in a decent-sized bag and not draw too much attention from authorities on location.
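To make the sub-sampling point concrete, here's a toy numpy version of the bilinear green-channel fill (the cheap in-camera method from the middle image); the neighbor averaging is exactly where the aliasing comes from:

```python
import numpy as np

def bilinear_green(mosaic):
    """Fill in the green channel of an RGGB Bayer mosaic by averaging the
    green neighbors at red/blue sites -- the cheap in-camera method. The
    neighbor averaging is exactly what lets fine detail alias."""
    h, w = mosaic.shape
    gmask = np.zeros((h, w), dtype=bool)
    gmask[0::2, 1::2] = True  # green sites on red rows
    gmask[1::2, 0::2] = True  # green sites on blue rows
    green = np.where(gmask, mosaic, 0.0)
    padded = np.pad(green, 1)
    pmask = np.pad(gmask, 1).astype(float)
    # Sum and count of the up/down/left/right green neighbors at each site
    nsum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:])
    ncnt = (pmask[:-2, 1:-1] + pmask[2:, 1:-1] +
            pmask[1:-1, :-2] + pmask[1:-1, 2:])
    return np.where(gmask, mosaic, nsum / np.maximum(ncnt, 1.0))

# A flat field survives intact; a one-pixel checkerboard (detail right at
# the sensor's Nyquist limit) would not -- that's what the OLPF prevents.
flat = np.full((4, 4), 0.5)
print(bilinear_green(flat))  # every output value is 0.5
```

The smarter off-line algorithms replace that fixed average with one weighted by local gradients across all three channels, which is why they can hold almost twice the resolution before aliasing.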
  14. That's what we've been discussing. Software solutions take time and don't do a perfect job. It's better to make a camera that doesn't have the problem. As for the OLPF issues, the solution is to use an OLPF. It's the only way to get optimal image quality.
Sure they can. There used to be several cameras on the market that had global shutter and were relatively free of aliasing for around $1,000. That was back in the CCD days, and CCDs cost about 5x as much as CMOS for equal size, so sensors were generally fairly small. With the new crop of global shutter CMOS sensors, none of which are being used in camcorders, there's no reason it can't be done with a somewhat larger sensor now.
The roadblock in the current market is that a lot of processing power (and licensing fees) is added by capturing in a conventional video CODEC. Three color channels must be interpolated, gamma transformed, gamma corrected, white balanced, then converted to YCbCr, saturation boosted, knee adjusted, etc. If it's a better camera, a "black frame" average image is subtracted from the source to remove fixed pattern noise. Then it has to be converted into whatever CODEC they want. ProRes is not especially processor-intensive, but there are fees involved. H.264 is an open standard but requires massive processing power. All that has to be done internally and in real time, so corners have to be cut elsewhere. It's actually CHEAPER to have a minimal processor running fairly little code to capture the raw signal, even if storage requirements are a lot higher. That doesn't bother me; I already have a couple of 240GB SSDs that could give me about an hour each of record time of raw HD. It's really no different from back in the film days where you had multiple magazines, except now we can dump an SSD to a laptop and reuse it on-site if need be.
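The back-of-envelope math behind "about an hour each" (my assumptions: 1920x1080 single-channel Bayer samples, 12-bit packed, 24fps, no compression):

```python
# Raw Bayer capture: one 12-bit sample per photosite, no on-board demosaic.
width, height, bits, fps = 1920, 1080, 12, 24

bytes_per_frame = width * height * bits / 8        # ~3.11 MB per frame
mb_per_sec = bytes_per_frame * fps / 1e6           # sustained write rate
hours_per_240gb = 240e9 / (bytes_per_frame * fps) / 3600

print(f"{mb_per_sec:.1f} MB/s, {hours_per_240gb:.2f} h per 240 GB")
# → 74.6 MB/s, 0.89 h per 240 GB
```

So just under an hour of true raw per drive; a little lossless packing closes the gap, and the write rate is comfortably within SATA SSD territory, which is why the minimal-processor approach works.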
  15. We've had viable ways to avoid composite video issues since the early 80s for production. Delivery may still be composite, but delivery media have always sucked and probably always will, though I find Blu-ray can be quite good when done right. Even my first paid video productions in the 90s stayed component till the print.
I don't make them out to be "magical", just that they don't share all the problems of CMOS like you say. As for "efficiency" in ISO, CMOS does it by artificial means. Weak color dyes (not always, but in many situations) are used to let more light hit each pixel. Since every pixel has an amplifier next to it, it can't possibly gather light as efficiently as CCD, but it makes up for it with internal gain, whereas CCDs are passive components, containing no gain of their own.
I find the current trend of making the look in post tedious, and it produces a very alien image that can be outright nauseating to me. So, I might make slight tweaks to white balance or level, but that's it. I suspect "cinematic" is one of those meaningless buzzwords like "warm" in the audio field. Many iconic movies were not crisp at all. Many DPs went out of their way to soften the image, especially on close-up shots, to the point where specialized diffusion filters were created to gradually soften the image while dollying into a tight shot of an actor.
400 ISO isn't enough? Almost all of them have gain, you just don't have baked-in DNR like CMOS cameras do. You're right in that there aren't any UHD CCD cameras (as far as I know), but 12-bit and 444 is somewhat common for industrial cameras. While single-chip cameras may have a 444 option, that's not what the sensor is outputting, so I'm not sure why it's that important. The HIGHEST-density color on a single-chip camera is green, which is only every other sample. The other colors are 1/4 resolution. These colors are merely averaged together to get complete color channels, then converted to luma/chroma channels.
If you are using RAW capture, you have much better interpolation algorithms available on your computer, but you're still dealing with "educated guesses" based on localized and non-localized spatial analysis of the frame. If you think so, I trust that you have become numb to the issue.
Bull crap. Marketing mooks may have the brainless masses convinced that VistaVision-sized imagers are the only way to get a "cinematic" look (there's that word again), but the fact of the matter is that the vast majority of film material has been produced in flat 1.85:1 35mm (21mm x 11.33mm) and Academy (21mm x 15.24mm). On top of that, most cinematographers fought to get longer DoF, not shallower. Indoor shoots were routinely done between F5.6 and F8 unless there was a reason to do otherwise. I don't have the exact percentage, but very few movies originated on VistaVision or 65mm. The standard in Hollywood these days is S35. Bear in mind, almost all broadcast is done with 2/3" sensors (regular 16mm sized), and you're personally pushing the BMPCC, which is S16 sized.
Once again, that's your fault if you don't know how to grade the image. It's truly as clean and unprocessed as it gets. Do you not own lights? Why do you need 1,600 ISO for professional work? Have you not paid any attention to my remarks about CMOS cameras having DNR built into them? If you want a noiseless image at absurd ISOs, you need DNR, and the D16 doesn't do that automatically. If size is all that matters, keep your BMPCC.
OK. I've shot plenty of documentaries on conventional video cameras and even a CP-16 without issues.
  16. I don't expect perfection by any means, but the fact that nobody cares about easily rectified problems that would have been considered inexcusable in pro video 15 years ago really tells you how standards have changed. How is it we live in a time where obvious banding and macro-blocking are worse than ever, yet we insist on having more than double the pixels the eye can see? How is it that nauseating, gelatinous distortion caused by movement is acceptable in cameras that cost as much as my house? Why do people insist on capturing with the "best" 10-bit or 12-bit CODECs to guarantee optimal image quality, but it's OK to leave out a simple part that prevents strange colors/patterns? Maybe I'm in the minority for being bothered by those things, but the fact that there are currently ZERO camcorders in the sub-$20,000 market free of those problems shows me that there's a niche market not being tapped.
You just got me more charged than ever to try to make my prototype, and if even one person is willing to buy it, I'll consider it worthwhile. Imagine: an HD camera with no rolling shutter, no alias/moire, raw capture, S16-sized sensor, pure color, all for $1,000. That sounds pretty much the same as your BMPCC, except free of all the annoying problems. Sure, it will be larger, but I'd rather have a camera I can hand-hold if needed. Is "pocket-sized" really worth all the problems that come with it? Maybe to you, but not me; I can have that for free on my cell phone. I've got most of the parts I need on order now. Some of them are coming from England, and the programming side is rather foreign to me, so it may be a while before I get to testing. I'll be sure to post results if I'm successful though.
  17. In rereading my last post, I see I made some errors. It's been a really long weekend! For instance, if I remember correctly, the "shadows" of a BMPCC are handled by 30dB high-gain channels in the sensor, in parallel with the unity-gain channels, which equates to +5 stops. Using the global shutter mode disables the high-gain channels because of bandwidth/processing limitations, so you are left with 7 stops, not 6. Anyway, I made a few other goofs like that, but the principles are correct.
  18. You are correct, we have gotten rather off-topic. I really have considered buying a used BMPCC and after-market OLPF several times, but there are still other things about it that bother me.
CCDs are natively global shutter. It has nothing to do with the camera's processor. All photosites on the sensor simultaneously become photo-sensitive and simultaneously dump their charges into neighboring capacitors, where they are then sequentially sent to the output. With CMOS sensors, each row sequentially becomes photo-sensitive and sequentially dumps directly from the amplifiers. I am aware of several software solutions to rolling shutter, but the best way to fix a problem is to not have the problem in the first place. You can still see jello-cam shots even on high-budget productions, especially where there's high action or flashing lights. I've used one such plugin, and if you don't have it set up perfectly, it can even make the problem worse.
The fact that there are no on-board amplifiers on CCDs also allows the holes to be larger, allowing slightly higher native sensitivity and lower aliasing. A 3-CCD camera can almost get away with not having an OLPF because there's virtually no space between the holes, and I have in fact gotten decent images out of a 3-CCD camera that was missing its OLPF (mix-up at the repair facility). With a 3-CMOS camera of otherwise equal specs, aliasing is more noticeable because more detail falls between the holes, almost like having a UHD camera but only reading out half the pixels.
Very true; I was just hoping there would be ONE HD camera out there for $1,000 that had global (or close to it) shutter and a proper OLPF. My requirements are pretty minimal, but it seems everybody has become blind to the issues that bother me the most. See, I thought Digital Bolex was the one manufacturer that DIDN'T make the same compromises as everybody else. Global shutter, proper OLPF, raw capture, great audio, no frills.
It does have a viewfinder: that screen on top. It isn't very good, but it's good enough to frame the shot, which is all you need. I shot with it for two days, both indoors and outdoors, and never used an EVF, which would have been easy enough to add. Incidentally, the viewfinder was originally going to be a black & white LCD that swiveled to be 90 degrees to the ground. It was Kickstarter backers that insisted it be color, which required a more powerful processor to demosaic and white balance the preview. That, along with other added features, meant there wasn't enough room in the compartment to allow it to swivel.
That's your fault, sorry. The CCD's color filters are so pure, the natural output is well beyond the gamut of most video formats. You have to pull back the saturation to fit within the color space of your environment. This is exactly what I meant when I said most people don't know how to handle it. You're used to BM's sensors, which have very weak color dyes and thus fairly low saturation. If you simply apply a demosaic algorithm and gamma correction to BM's raw files, you get a very flat, gray image. The D16's output also has no contrast "curve", just a gamma transform, so you have to adjust the head/toe curves yourself to get soft clipping like what's baked into the BMs' sensors.
See, 200 is as high as it gets for me. The studio cameras I used to use were natively 400, which was OK because we lit the room specifically for those cameras, but they really sucked when we took them outdoors. My favorite thing to do when I was shooting film regularly was to load in Vision2 100T for entire shoots, exposed as 50T. Outdoors, it would be about 25 ISO with an 85A filter, which put me at F11 if I wasn't using an ND filter. The D16 has true analogue gain. Before the last firmware update, they let you crank it up to 800 ISO.
For some reason, they switched to a "BMPCC" mentality with the last update, so when you set it to "800", you were merely changing metadata rather than actual gain. Luckily, the owner of the D16 I borrowed ignored the last update. Anyway, it looked good at 400 ISO, a little noisy at 800, but a little DNR applied to the shadows cleaned it up nicely. Remember, all modern CMOS cameras have DNR built into the sensor itself. Some cameras have more DNR built into the processor, which you may or may not be able to bypass, but the sensor always has it, unless shooting in global shutter mode. I think I said elsewhere that the BMPCC's sensor has a global shutter mode, but using it would mean about a 6-stop dynamic range!
OK, so smaller holes have less volume for collecting light, but the noise floor stays the same. Thus, there's a lower signal-to-noise ratio on the output. The eye can pick out details below the noise floor, but most cameras automatically clip the output right above the noise floor, hence the loss in latitude. In more technical terms, let's say you have a sensor found in the typical broadcast camera. Its 5uM pixels would have a 65dB S/N ratio, or roughly a 12-stop range, and a native ISO of about 400. A UHD sensor of the same size has 2.2uM pixels, which is about 1/4 the light-gathering volume per pixel, and a native 100 ISO. The noise output is the same as that of the HD sensor, so you now only have a 42dB S/N ratio and lost two stops of latitude in the process. On that note, my phone has 1.5uM pixels and is natively 50 ISO. HDR mode is a MUST in order to keep any shadow information.
That comes with costs of its own. Price increases with the square of the size. So, a 2/3" HD CCD may cost $300. If one were to use UHD, it would need to be M 4/3 sized to avoid loss in dynamic range, which would be $900 each. You also have much shallower DoF, more limitations on lenses, etc.
I think M4/3 is a good format, very close to flat 1.85:1 35mm, but you would HAVE to use CMOS to keep the price under control, and we're back to our rolling shutter problem. You can add a mechanical shutter like some of the Alexas, but that's also very expensive. Note that in order to meet demands for a higher-resolution camera, ARRI introduced their 6K 65mm Alexa to avoid loss of latitude. That's a difficult format to shoot. Good to know. IMO, it shouldn't be optional on such expensive cameras.
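The pixel-size arithmetic I keep leaning on, made explicit (an idealization: full-well capacity scales with photosite area, and the read-noise floor stays put):

```python
import math

def stops_lost(pitch_a_um, pitch_b_um):
    """Stops of dynamic range given up going from pixel pitch a to pitch b,
    assuming full-well capacity scales with photosite area while the
    noise floor stays constant (an idealization, not a sensor datasheet)."""
    area_ratio = (pitch_a_um / pitch_b_um) ** 2
    return math.log2(area_ratio)

# 5 um HD photosite vs 2.2 um UHD photosite on the same-size sensor:
print(round(stops_lost(5.0, 2.2), 2))  # → 2.37
```

Which is the "lost two stops or so" I quoted: same glass, same sensor size, a quarter to a fifth of the light per photosite.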
  19. I was originally going to reply directly to comments in quotes, but it was getting too long and complicated, so I'm just going to touch on some of the ideas addressed in the replies.
The BM cameras do alias, badly, and using "soft" glass is not an acceptable solution. I don't see rolling shutter or alias distortion on film or professional video. Film's color rendition is much closer to the human eye as well. Most digital cameras don't pick up deep blue or red well, and the green band tends to be broader, due to having higher resolution. The dynamic range of a sensor does matter, even in Rec709. A 10-stop sensor, when properly encoded, will look much better than an 8-stop sensor. That said, the BMs are not suitable substitutes for film, nor professional video, despite claims to the contrary. Yes, the story is more important, but if that's all that mattered, we'd all be shooting on cell phones.
It's all about compromise in the lower price range. It's just a matter of WHAT compromises you want to make. The sensor the BMPCC uses costs a lot of money, and I do believe they used the best CMOS sensor possible at the time. They easily could have eliminated various features to improve other things, though. While I find the compromises in low-end cameras unacceptable, I understand why they made those compromises. People value features over quality. Not me; I'd gladly accept a simple passive lens mount with a 2/3" 1920x1080 global shutter sensor fixed at 200 ISO, 24fps, recorded in MJPEG at 200Mbps with a black & white LCD viewfinder and no audio. There's no reason that can't exist in the $1,000 price range and have a proper OLPF.
As for the variable latitude of the BMPCC, it doesn't actually have gain or adjustable ISO. It's natively about 400 ISO, and everything else is metadata telling the software what to do with it.
Treating it as if it's 800 may add a stop in the highlights, but you lose a stop in the toe, while shooting at 200 gives you a stop in the shadows but you lose highlights. When using actual gain, this isn't the case. You get more noise using higher ISOs, but no latitude is lost in the shadows. It's cheaper to use fixed-gain architecture though.
As for color space, etc., I've shot green screen with 8-bit 4:2:0, 4:1:1 and 4:2:2, as well as 10-bit 4:2:2 uncompressed. 4:1:1 is extremely difficult, but it can be done. There isn't that much difference between 4:2:0 and 4:2:2, and I didn't notice any difference with 10-bit vs 8-bit. The optical path makes a HUGE difference though. Soft lenses are almost impossible to composite without matte lines, and if they have chromatic aberrations, forget it.
The D16 was a very expensive camera to make. The CCD and OLPF alone cost THEM about $700. Then you have to add in the amplifiers, the ADCs, enterprise-class SSD, custom FPGA, HDMI board, etc. Don't forget it had audio capabilities akin to a $500 stand-alone field recorder, with high-quality preamps, XLR inputs and true analogue gain control like you would find on a professional system (not a $200 Zoom). Unfortunately, they listened to their Kickstarter backers too much and had to redesign the camera almost from scratch to cram all the extra features into it. I estimate the redesign added about a year and $500 per unit to the cost of the D16.
The D16 really doesn't have a look of its own. The CCD's output is digitized at 16-bit linear, converted to 12-bit Gamma-1 and stored. The rest is up to the user. The BMPCC, on the other hand, has image processing built right into the sensor itself, including noise reduction and FPN cancellation. If you want that in a D16 image, you have to do it yourself. Anyway, the D16 sold better than they expected, but issues with several parts manufacturers in China and a price hike on other parts made continuation impractical.
Their margin was already low, and they couldn't raise the price because people already claimed it was too expensive compared to the BMCC. So, they decided to get out of the business. I tried to convince them to go a different direction in the year leading up to that decision (use one of Sony's new global shutter sensors with a simplified FPGA), but it fell on deaf ears.
People want "4K" because marketing told them to want it. These are the same people that insist on phone sensors being 18MP when the lens can't resolve more than 6MP or so. Marketing people have the masses convinced that UHD and 4K are the same thing (4K is a theatrical format) and that an LCD screen can really have a 10,000:1 contrast ratio. Professional HD cameras will produce sharper, cleaner results than a semi-pro UHD camera. You can easily bump well-done 1080 up to 2160 and nobody will know it didn't originate as 2160. Video streams are so heavily compressed that the difference in bandwidth has more effect on the image than the actual number of pixels. That said, you can very well bump 1080 up to 2160 and see a noticeable improvement in image quality simply because more data reaches the screen.
The lens is the limiting factor in most cases, and I've seen plenty of HD and UHD video that resolved 600 lines, especially with folks that like to open up their lenses. 35mm can easily resolve 4K, but if you're capturing on video, 4K costs dynamic range. That's why the Alexa is 3K S35. The .0083mm pixel pitch gives *almost* the same latitude as Vision3 stock, about 14 stops. 3K resized to 2K allows better luma resolution from a single-chip sensor than capturing at 2K. More pixels mean smaller pixels, which means lower dynamic range. There are all sorts of claims from marketing people about this camera or that camera having 12-13 stops, but the fact of the matter is, dynamic range is directly tied to hole size. You can cheat higher numbers with DNR, but you lose detail.
Engineers across the board will tell you pixels smaller than .005mm cause noticeable degradation in the image, which is why pro video cameras have 1920 pixels on a 10mm-wide sensor, or .0052mm, which is still 2 stops less than the Alexa. Now, some low-end DSLRs do have OLPFs, but they're optimized for stills. In order to avoid ALL aliasing in HD video, the resolution needs to be less than 1000 lines or so for a 3-CCD camera, or 500 for a single-chip camera. If you use a superior off-line demosaic algorithm, you can get away with 750 lines. Since most DSLRs are in the 5K range... I don't think Reds have OLPFs, as they have this edginess to them that annoys me. I suspect Alexas do.

I have to stay with Windows; I can't afford to pay double for "shiny". I met with a guy who was working on a video game that includes some composited live action. He commented up front, when showing me his workflow, that "Mac is so much better for this than Windows". It turns out he doesn't own a Windows machine and hadn't even tried video composite work till a few weeks ago. I can say without a doubt, though, that the biggest issue he has is poor lighting in his studio. I got better composites than him in about five minutes using Vegas, but estimated he needed at least two stops more light to get clean composites. I think maybe 10 years ago Mac may have had an edge. 20 years ago, Commodore did. Things change.
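The pitch arithmetic in this post is easy to sanity-check with a rough sketch. One caveat: photosite area alone accounts for only about 1.35 stops between the two pitches quoted; read noise, processing and sensor design would have to make up the rest of any real-world gap, so treat `area_stops` as a deliberate simplification:

```python
from math import log2

def pixel_pitch_mm(sensor_width_mm, h_pixels):
    """Photosite pitch, ignoring gaps between microlenses."""
    return sensor_width_mm / h_pixels

def area_stops(pitch_a_mm, pitch_b_mm):
    """Full-well advantage of the larger pitch in stops, assuming well
    capacity scales purely with photosite area (a simplification)."""
    return log2((pitch_a_mm / pitch_b_mm) ** 2)

print(round(pixel_pitch_mm(10.0, 1920), 4))  # the 2/3" HD figure quoted above
print(round(area_stops(0.0083, 0.0052), 2))  # area alone: about 1.35 stops
```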
  20. Oh, I didn't think the price would fall that quickly. I'll take a look. I'd consider it a DSLR; it's internally no different from a consumer-grade mirrorless camera except it is optimized for video rather than stills.

Interesting point. In viewing tests, most people can't even tell the difference between 720 and 1080, and Hollywood doesn't seem to be particularly fast at jumping on the UHD/4K bandwagon. I guess the power of consumer marketing shouldn't be underestimated. I'm not inherently against UHD or 4K, but there are far more important factors IMO.

Sure it aliases; it has no anti-aliasing filter at all and fairly weak IR filtering unless you use an after-market optical block (for $300). So in my book, the base price for a *new* BMPCC is $1300, and then one must add a lens and some way of handling it. The word length has nothing to do with aliasing, and the only way 4:2:0 subsampling can cause it is if you used a bad encoder that didn't low-pass the chroma before resampling. On-the-fly window resizing used on YouTube etc. can cause aliasing, but I base my opinion on hands-on usage viewed at its native 1920x1080p24.

Nice shoot by the way, very classy approach. I watched it on my computer and my 50" plasma screen. I did notice some chromatic aberration, which, in conjunction with your low F-stops, may be softening potential alias distortion. Rolling shutter is ever present, though not as nauseating as with many cameras.

Well, no. Rec709 is designed for CRT displays and their contemporaries, which have about a 6-7 stop output range, but input range does not necessarily equal output range. Even my Canon G20 can give about nine stops under the right conditions. The studio where I used to work used $25,000 CCD cameras that have a good 10-11 stops or so despite being Rec709 compliant. True, but there's really no reason to expand the color space, screw with it and re-condense it back to Rec709.
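The point about encoders low-passing chroma before 4:2:0 decimation can be shown with a toy 1-D example (illustrative values, not any real codec's filter). Worst-case chroma detail that alternates every sample simply disappears into a wrong flat color if you drop every other sample, but survives as the correct average if you filter first:

```python
# A 1-D chroma row with worst-case full-rate alternation.
chroma = [100, 0] * 8

# Bad encoder: decimate with no filtering. Every kept sample happens to be
# 100, so the detail aliases into a solid wrong color.
naive = chroma[::2]

# Sane encoder: 2-tap box filter (average each pair), then decimate.
filtered = [(chroma[i] + chroma[i + 1]) // 2 for i in range(0, len(chroma), 2)]

print(naive)     # flat 100s: aliasing
print(filtered)  # flat 50s: the true average of the detail
```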
You can do color correction in the Rec709 space as long as it's not too heavy-handed. I should say that I do probably 95% of my work without any manipulation in post, aside from maybe suppressing some green spill; I do my best to light/expose for the look I want.

I do love the idea of raw recording, which is why I was initially looking at a Digital Bolex (and was willing to pay the extra price for global shutter, no aliasing etc.). Those guys really did their best to make a suitable digital replacement for Super-16. It's a shame most D16 shooters use crap lenses and don't know how to grade the image properly. I got a very natural image very quickly with it while others were complaining about strange tints, over-saturation and limited latitude. When they announced they were discontinuing the D16, I revisited the BMPCC again (and again just recently). I forget which sensor the BMPCC uses (I sourced them at one point; they cost almost $1,000 by themselves in quantities under 10, BTW), but the color purity is nowhere near that of the KAI-04050 used in the D16, so I suspect people were failing to pull back the saturation and high/low knees to fit into a Rec709 output format. Kodak's (now On-Semi) CCDs have a spectral response very similar to film and the human eye, while most CMOS sensors have weaker filters to "cheat" up the ISO and thus require a saturation boost in processing.

I've never been able to get DaVinci to work right on any of my computers. One of my clients had the same problem, despite meeting the system requirements. Anyway, I use Vegas 14 to do my compositing, which allows multiple points of control over luma, hue and saturation. Somebody can even wear a pale green shirt on a green screen and it will look correct upon output. I can set up the composites so a shadow cast on a green floor will carry over to the composited image, though it requires a little more tweaking. I am not a fan of green screen either, but it's a necessity these days.
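For anyone curious what "suppressing green spill" amounts to, one common trick (a generic sketch, not how Vegas does it internally) is to clamp the green channel so it never exceeds the average of red and blue. Neutral and magenta-leaning pixels pass through untouched; green contamination on skin or clothing gets pulled back:

```python
def suppress_spill(r, g, b):
    """Clamp green to the average of red and blue (8-bit integer channels)."""
    return r, min(g, (r + b) // 2), b

print(suppress_spill(200, 180, 160))  # neutral-ish pixel: untouched
print(suppress_spill(120, 200, 110))  # green-contaminated pixel: clamped
```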
It may sound like I'm an indiscriminate hater, but I do keep revisiting it as an option. However, every time I work with BMs, I am quickly reminded of why I don't like them. I know some people can get good results with them, but I haven't the patience for their shortcomings. I've worked on many BM shoots, and the issues become especially obvious when they're used in multi-cam shoots against professional systems. I did a 5-cam concert a couple of years ago with one, and I couldn't for the life of me get the same color as the 3-chip cameras. I know, apples vs. oranges, but I wound up having to dumb down the 3-chip cameras' video to match the BM camera. While it has less rolling shutter than many other DSLRs, it's still bad enough that one must stick to fairly graceful moves (on a fluid-head tripod or stabilizer). It doesn't have nearly the latitude they claim except under absolutely ideal conditions AND with DNR used as a matter of course.

Anyway, I think this is a great discussion. I am actually thinking about building my own modular camera system based around the Sony IMX249. It's a CMOS sensor, but it uses analogue memory like CCDs do for native global shutter operation without loss of dynamic range (many CMOS sensors, like what's used in the BMPCC, have global shutter modes but lose their internal noise reduction and half their dynamic range with it). The color purity is close to that of the Kodak CCDs as well, so that's a bonus. If I can get this system working the way I want, I can potentially get a 1080p camera with raw output, recorded on swappable SSDs, for about $1,000, and maybe I can make a "prettier" version to sell. There would be no automatic anything and probably no audio though. Maybe one day, Sony will have a UHD sensor like that, but it would have to be S35-sized to avoid loss of dynamic range.
  21. I know this sort of question gets asked all the time ("what's the best DSLR under $xxx"), but I know what compromises I'm willing to make vs. not, and I simply lack knowledge of all the different makes/models out there. I keep looking at this market because everybody seems to swear by it, but I have yet to be impressed by the normal "go-to" models compared to conventional video cameras in the same price range. This will not be my only camera. I already have a Super-16 camera with a collection of prime lenses, as well as a 1/3" HD video camera with a built-in zoom lens. I would like another camera somewhere in between that can be used for semi-pro video productions.

I care most about:
- Minimum rolling shutter: this drives me absolutely nuts and is why most DSLRs wind up in the "useless" list for me.
- Minimum aliasing: this also drives me nuts and is why I can't consider cams like the GH3 or BMPCC.
- Good dynamic range: not so much low noise as the ability to preserve highlight and shadow detail. Why I can't choose the GH2 (as well as rolling shutter).
- Natural color: I don't want to make the look in post, another reason I can't consider Blackmagic anything. If I can get cleaner green screen composites than with my Canon G20, so much the better. I actually do pretty well with this, but it's harder than with, say, a $25K studio camera.

I DON'T care about:
- UHD or 4K
- High frame rates: 24p and 30p are all I need.
- High bit rates (I have an Atomos Ninja 2)
- High ISOs: I light my scenes.
- Audio: I'll be recording on an external device.

I don't understand how people can talk about this or that camera and how "cinematic" it is when it's full of horrible artifacts that no true cinema camera has. Perceived sharpness and shallow DoF matter less to me than avoiding typical DSLR artifacts. This will mostly be for weddings, YouTube videos and the occasional commercial. I am hoping to keep the price under $1,000, as I hire cameras/crew or just shoot S16mm on bigger shoots.
I imagine I'd most likely be looking at the Micro 4/3 range but am open to ideas. I'm close to considering an industrial USB 3 camera and building some kind of portable recording computer. I've found a few with native HD raw DNG capture and global shutter, like the BFLY-U3-23S6C-C. Thanks for any insight.
  22. Yeah, most of the Spaghetti Westerns were Techniscope as far as I know. It saved money not only in production but also in licensing fees. The downside is that not only are they starting with half the real estate, but using an optical printer to enlarge the image further reduces resolution. They often used unsharp masks in post, so people complaining that the Blu-rays have edge enhancement can shove it. That brings me to "Super-35", which combines the expense of 4-perf with the low quality of 2-perf.
  23. This is a common myth. The shutter is always running at 24Hz (or 23.976), so reducing the shutter angle simply means it spends more time closed, reducing the open/closed ratio and making flicker worse. Luckily, most lights don't flicker noticeably, save maybe bad fluorescents.

If you want to get into the math of it, let's assume you're using a lamp that puts out no light at all at minimum voltage (virtually none do this, but...) and 100% output at maximum voltage. That means it will be at full brightness 120x per second (peak AND trough both give 100% output). At 180 degrees and 24fps, the shutter will be open for an average of 2.5 peaks (flashes) and closed for 2.5. I say "on average" because if you're shooting for television, it's actually 23.976fps while the power runs at 60Hz; the slight difference means sometimes the shutter will be open for 3 peaks and sometimes it only catches two. That equates to about a 1/2-stop max variation over several minutes. By going to 172.8 degrees, it's open for 2.4 peaks on average and closed for 2.6. There will still be times when the camera sees three peaks, but it has a greater likelihood of seeing two. However, the wider the shutter angle, the longer the exposure (motion blur applies to light too). A 360-degree shutter will always get 5 peaks, while a 288-degree shutter will get 4-5 peaks, which is a 1/3-stop max variation over several minutes.

In reality, the difference in flicker between 180 and 172.8 is almost nonexistent, because even fluorescent lights only dim by a small percentage between peak and trough, assuming the ballasts are decent. The only film productions I know of running at 29.97fps are ones with a lot of practical CRT TVs, like Max Headroom or The Wonder Years (3-perf 35mm and 16mm respectively). In countries using 50Hz power, flicker is worse because the lamps have more time to dim between peaks, so it's a good idea to only use 25fps if you aren't using flicker-free fixtures.
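The peak-counting above can be reproduced in a few lines, assuming the same idealized lamp that goes fully dark between peaks (real fixtures dim far less, as noted):

```python
def exposure_time(fps, shutter_angle):
    """Seconds the rotary shutter is open each frame."""
    return (shutter_angle / 360.0) / fps

def peaks_per_exposure(fps, shutter_angle, mains_hz):
    """Average light-output peaks seen per exposure.
    Mains-powered lamps peak twice per AC cycle (peak AND trough)."""
    return 2 * mains_hz * exposure_time(fps, shutter_angle)

for angle in (360, 288, 180, 172.8):
    print(angle, round(peaks_per_exposure(24, angle, 60), 2))

# In 50Hz countries, 25fps at 180 degrees always catches exactly 2 peaks,
# which is why 25fps is the safe choice there.
print(round(peaks_per_exposure(25, 180, 50), 2))
```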
  24. I don't mind hand-held when it's done tastefully, though it totally bit me in the butt once. I shot a project (S-8, 16mm and some 35mm with standard-def digital post) for TV a few years ago, but then it got played in a big theater and I felt sick. To add insult to injury, I think I was using a 90-degree shutter angle. It was also rather soft because I was shooting around F2. On a side note, I was using my trusty 7212 exposed at 50 ISO. It looked really cool on TV! Had I known it would be on a 10m-tall screen, I would have produced it with an entirely different style. For starters, I would have done it all on S16 and 35mm, used a 180-degree shutter angle and shot at F4. I would have stuck with my 28mm lens instead of using a zoom, and would also have used a matte box to cut down lens flares. Most important, I would have put forth some kind of effort to keep the camera steady!

Thank goodness; though the only IMAX theater here is running video now. Yeah, DCP's track record is nowhere near as good, even over a 1-year span. When everything is a computer in disguise... NICE! I think the only way to see film in my area is at my house, but I sold most of my collection recently, so it's slim pickings. I'm not a Tarantino fan in general, but I AM curious about that one.
  25. I'm glad you crossed out "the story is good" because, hey, Michael Bay movies. :) There's a lot of truth to this. I dabble in painting and I definitely have my favorite brushes. I prefer oil on canvas when everybody wants to save $0.35 by using acrylic. Yes, it's a tiny bit more expensive and takes a little longer, but it's also much more flexible and behaves the way I expect.

... and camera moves need to stay slow. A quick pan is no big deal when the screen is 30 degrees of your field of vision, but when it's 60 degrees, the entire image can jump several feet from one frame to the next. The last authentic, made-for-IMAX film I saw was "Dinosaurs Alive"... what a disappointment that was! It started like a documentary and was interesting, used the 3D process well etc., but the rest of it was that stupid acid-trip excuse to show off their CG. On the other hand, the first one I saw was "The Dream is Alive", which was perfect for me because I've always been big on space exploration. My college roomie had it on laserdisc, but I thought "what's the point?"

What are they doing? I know more stuff is being shot on film now, and they are at least THINKING about bringing back Kodachrome (though I'm under the impression that all their manufacturing is outsourced). Well, I think it was an important step in the right direction, and I still follow its standards for setting up my studio and home theater, for mixing etc. Anyway, the audience seems to have thought it was just another sound format, like Dolby stereo. How were they supposed to know it was a quality-control thing?

Agreed, though modern mixing/mastering techniques are by far the most harmful aspect. Hyper-editing, pitch correction, sample replacement, hard limiters and clipping for the sake of "loud" have all combined to turn a decent medium into a wash of eye-watering noise. Sadly, Hollywood (both visually and sonically) has gone the same direction. Yeah, specs aren't everything.
Even the first generation of digital recording systems was "better" on paper but is really hard on the ears. I did a series of blind tests where several dozen people listened to material I mastered. Everybody loved my 1/4" material, hated the digital U-Matic, and the stuff I did at 24-bit/48K was somewhere in between. They were all the same masters, just captured via different media. One guy, who said 1/4" tape was "good enough in the 60s but doesn't cut it now", unwittingly chose my 1/4" master as his favorite. Sadly, most vinyl these days is made from the same uber-crushed masters as the CDs, so I gave up on trying to find good vinyl.

Totally. That movie is still on my "to do" list though. Did you see it in 70mm? I agree that a lot is lost in conversion, but there's still an advantage in color and motion when shooting and scanning film. What a lot of people fail to realize is that it's not just about resolution and dynamic range; there's also another set of optics that alters the image.

On a related note, I publicly said that any mix engineer who throws away what the band sent him in order to use canned drum samples is a lazy jackass with no respect for his clients. Somebody responded with something like "The greatest mix engineer, Chris Lord-Alge, is a jackass?" I told him I don't care if it's common practice or if popular engineers do it; he isn't doing his job. He's supposed to MIX the band, not replace them just because he feels like it. It would be like an editor replacing a character in a movie with CG without any input from the director.

Anyway, I actually appreciate the Alexa. It's the only video camera that can *almost* fool me. Still, I prefer 5213, or better yet, 5212 (discontinued :( ).