
Looking at Used DSLR for Video


Stephen Baldassarre

Recommended Posts

  • Premium Member

I don't expect perfection by any means, but the fact that nobody cares about easily rectified problems that would have been considered inexcusable in pro video 15 years ago really tells how standards have changed.

Wait, what about composite video and demodulating issues and dot crawl that was "the standard" until component digital which still had lots of issues? I could go on all night about how shitty technology has been until the last 10 years.

 

There have ALWAYS been problems, it's just most people are willing to ignore the problems in favor of storytelling. Where you're correct is that modern 3-chip CCD cameras are great, but they can't do 15,000 ISO, they can't give you an S35mm imager look (without developing an all-new imager block), and how that data is captured and stored is still an issue.

 

Single CCDs have HUGE problems. You make them out to be magical, but they still use a "color filter" to assign certain pixels to certain colors. They're also nowhere near as efficient as CMOS in terms of ISO.

 

So we choose which "issues" we want to live with and since I don't see the problems you see, AT ALL... it doesn't bother me.

 

How is it we live in a time where obvious banding and macro-blocking are worse than ever, yet we insist on having more than double the pixels the eye can see?

I very much dislike banding and compression noise, it drives me crazy. At the same time, if you're going to watch something online, you have NO CHOICE! Heck, it's really bad on satellite, cable and fiber. Really the only home media that's any good is HDR UHD BluRay. It's 10-bit 4:2:0, and that's the best it gets for home viewing!

 

Why do people insist on capturing with the "best" 10-bit or 12-bit CODECs to guarantee optimal image quality but it's OK to leave out a simple part that prevents strange colors/patterns?

Well, higher bit depth codecs and better chroma sampling make it a lot easier to color in post. You're still distributing in 8-bit 4:2:0 at 19Mbps if you're LUCKY.

 

Maybe I'm in the minority for being bothered by those things, but the fact that there are currently ZERO camcorders in the sub-$20,000 market that are free of those problems shows me that there's a niche market not being tapped.

I think you are, honestly; nobody ever mentions or talks about it. Probably because modern cameras look so crisp and look so cinematic, they kinda ignore the issues.

 

I don't know of any "industrial" 3 CCD camera that is 12 bit 444 color space and shoots 4k or UHD and captures RAW to allow full imager bandwidth into the post production bay. What about the large imager look? Can't do that with 3 CCDs. What about high ISO? Can't do that with 3 CCDs.

 

Imagine, an HD camera with no rolling shutter, no alias/moire, raw capture, S16-sized sensor, pure color, all for $1,000.

But it's not "pure" color, single imager cameras still have to divide up the imager into green, blue and red pixels. So you're pretty much dealing with a 4:2:2 signal no matter what.

 

Rolling shutter isn't an issue thanks to the software fixes and modern cameras having very fast imagers and processors.

 

Nobody wants an S16-sized imager, everyone wants S35 minimum, and MOST PEOPLE want a full frame imager to get that super "cinematic" look.

 

Again, I don't think the D16 looked good at all. Yes it may technically be better, but the look of the imager isn't something I'd ever want to shoot with. Also, good luck getting a decent noise floor at over 1600 ISO, which is basically the minimum one needs these days.

 

That sounds pretty much the same as your BMPCC except free of all the annoying problems.

But it adds a bunch of NEW issues that the Pocket doesn't have, including size.

 

Is "pocket-sized" really worth all the problems that come with it?

I'd say the top 3 reasons I own the pockets vs something else are: size, price, codecs. Size is pretty much everything for me because I shoot documentaries and I can't afford to have a big camera case to lug around.

 

I got most of the parts I need on order now. Some of them are coming from England and the programming side is rather foreign to me, so it may be a while before I get to testing. I'll be sure to post results if I'm successful though.

I'd love to see the results! :D

Link to comment
Share on other sites

 

MOST PEOPLE want a full frame imager to get that super "cinematic" look.

 

I wouldn't go that far. Most cinematographers would be happy with a Super35-sized chip, the same format that films have been shot on for over a hundred years. The only people vying for full-frame sensors as the standard are these modern kids-turned-filmmakers who were raised during the DSLR age, when the full-frame Canons were the 'go-to' for low-budget cinema work. They assume shallow depth of field means cinematic - and therefore everything they shoot is out of focus, because they think it looks 'filmic'. I think it looks like trash in most cases. Shallow depth of field should be used as a story element, not for the purpose of film look. And let's not kid ourselves, motion pictures have never been shot (at least in real numbers) on a full-frame format, so those thinking that they need a full-frame camera to get the classic film look clearly don't understand the whole situation.

  • Upvote 2
Link to comment
Share on other sites

 

Imagine, an HD camera with no rolling shutter, no alias/moire, raw capture, S16-sized sensor, pure color, all for $1,000.

Rolling shutter can be partially compensated for in software, or by increasing the read/refresh rate of the sensor, which requires more powerful processors. Aliasing and moire can be fixed, either by an OLPF or by increasing the resolution of the sensor. Trying to fit high resolutions onto a s16 sized sensor is going to mean small pixels, which in turn means relatively low native ISO. Raw capture means shifting huge amounts of data off the sensor and onto a card suitable for high data rates, such as CFast.
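To put a rough number on why a faster readout helps, here is a quick back-of-the-envelope sketch in Python (my own illustration with assumed figures, not data from any specific camera):

```python
# Rough sketch of rolling-shutter skew, with assumed numbers.
readout_time_s = 1 / 60       # assumed time to read the whole frame (~16.7 ms)
pan_speed_px_s = 2000         # assumed horizontal motion across the frame, in pixels/s

# The bottom row is exposed later than the top row, so a vertical edge leans by:
skew_px = pan_speed_px_s * readout_time_s
print(f"Top-to-bottom skew: {skew_px:.0f} px")   # ~33 px of lean

# Halving the readout time (a faster imager/processor) halves the skew,
# which is why fast-readout CMOS cameras look nearly global-shutter in practice.
```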

 

The point is that all of these issues can be dealt with, just not for less than $1000. Honestly, don't you think someone would have done it already if it was remotely economically feasible?

Link to comment
Share on other sites

Wait, what about composite video and demodulating issues and dot crawl that was "the standard" until component digital which still had lots of issues?

We've had viable ways to avoid composite video issues since the early 80s for production. Delivery may still be composite, but delivery media have always sucked and probably always will, though I find Blu-Ray can be quite good when done right. Even my first paid video productions in the 90s stayed component till the print.

 

 

Single CCDs have HUGE problems. You make them out to be magical, but they still use a "color filter" to assign certain pixels to certain colors. They're also nowhere near as efficient as CMOS in terms of ISO.

I don't make them out to be "magical", just that they don't share all the problems of CMOS like you say. As for "efficiency" in ISO, CMOS does it by artificial means. Weak color dyes (not always, but in many situations) are used to let more light hit each pixel. Since every pixel has an amplifier next to it, it can't possibly gather light as efficiently as CCD, but makes up for it with internal gain whereas CCDs are passive components, containing no gain of their own.

 

 

Well, higher bit depth codecs and better chroma sampling make it a lot easier to color in post.

I find the current trend of making the look in post tedious, and it produces a very alien image that can be outright nauseating to me. So, I might make slight tweaks to white balance or level but that's it.

 

 

Probably because modern cameras look so crisp and look so cinematic, they kinda ignore the issues.

I suspect "cinematic" is one of those meaningless buzz words like "warm" in the audio field. Many iconic movies were not crisp at all. Many DPs went out of their way to soften the image, especially on close-up shots, to the point where specialized diffusion filters were created to gradually soften the image while dollying into a tight shot of an actor.

 

 

I don't know of any "industrial" 3 CCD camera that is 12 bit 444 color space and shoots 4k or UHD and captures RAW to allow full imager bandwidth into the post production bay. What about the large imager look? Can't do that with 3 CCDs. What about high ISO? Can't do that with 3 CCDs.

400 ISO isn't enough? Almost all of them have gain, you just don't have baked-in DNR like CMOS cameras do. You're right in that there aren't any UHD CCD cameras (as far as I know) but 12-bit and 444 is somewhat common for industrial cameras. While single-chip cameras may have a 444 option, that's not what the sensor is outputting, so I'm not sure why it's that important. The HIGHEST density color on a single-chip camera is green, which is only every other sample. The other colors are 1/4 resolution. These colors are merely averaged together to get complete color channels, then converted to luma/chroma channels. If you are using RAW capture, you have much better interpolation algorithms available on your computer, but you're still dealing with "educated guesses" based on localized and non-localized spatial analysis of the frame.
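As a rough illustration of those sampling densities (my own sketch in Python, not anything from an actual camera pipeline):

```python
import numpy as np

# Lay out an RGGB Bayer mosaic the size of an HD frame and count photosites per colour.
h, w = 1080, 1920
cfa = np.empty((h, w), dtype="<U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"
cfa[1::2, 1::2] = "B"

for ch in "RGB":
    share = np.count_nonzero(cfa == ch) / cfa.size
    print(f"{ch}: {share:.0%} of photosites")   # R 25%, G 50%, B 25%
```

Green really is only every other sample and red/blue only a quarter, so any "444" output from a single chip is interpolated rather than measured.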

 

 

Rolling shutter isn't an issue thanks to the software fixes and modern cameras having very fast imagers and processors.

If you think so, I trust that you have become numb to the issue.

 

Nobody wants an S16-sized imager, everyone wants S35 minimum, and MOST PEOPLE want a full frame imager to get that super "cinematic" look.

Bull crap. Marketing mooks may have the brainless masses convinced that VistaVision sized imagers are the only way to get a "cinematic" look (there's that word again), but the fact of the matter is that the vast majority of film material has been produced in flat 1.85:1 35mm (21mm x 11.33mm) and Academy (21mm x 15.24mm). On top of that, most cinematographers fought to get longer DoF, not shallower. Indoor shoots were routinely done between F5.6 and F8 unless there was a reason to do otherwise. I don't have the exact percentage, but very few movies originated on VistaVision or 65mm. The standard in Hollywood these days is S35. Bear in mind, almost all broadcast is done with 2/3" sensors (regular 16mm sized) and you're personally pushing the BMPCC, which is S16 sized.

 

 

Again, I don't think the D16 looked good at all. Yes it may technically be better, but the look of the imager isn't something I'd ever want to shoot with. Also, good luck getting a decent noise floor at over 1600 ISO, which is basically the minimum one needs these days.

Once again, that's your fault for not knowing how to grade the image. It's truly as clean and unprocessed as it gets. Do you not own lights? Why do you need 1,600 ISO for professional work? Have you not paid any attention to my remarks about CMOS cameras having DNR built into them? If you want a noiseless image at absurd ISOs, you need DNR, and the D16 doesn't do that automatically.

 

 

But it adds a bunch of NEW issues that the Pocket doesn't have, including size.

If size is all that matters, keep your BMPCC.

 

 

I'd say the top 3 reasons I own the pockets vs something else are: size, price, codecs. Size is pretty much everything for me because I shoot documentaries and I can't afford to have a big camera case to lug around.

OK. I've shot plenty of documentaries on conventional video cameras and even CP-16 without issues.

  • Upvote 1
Link to comment
Share on other sites

Rolling shutter can be partially compensated for in software, or by increasing the read/refresh rate of the sensor, which requires more powerful processors. Aliasing and moire can be fixed, either by an OLPF or by increasing the resolution of the sensor. Trying to fit high resolutions onto a s16 sized sensor is going to mean small pixels, which in turn means relatively low native ISO.

That's what we've been discussing. Software solutions take time and don't do a perfect job. It's better to make a camera that doesn't have the problem. As for the aliasing issues, the solution is to use an OLPF. It's the only way to get optimal image quality.

 

 

The point is that all of these issues can be dealt with, just not for less than $1000. Honestly, don't you think someone would have done it already if it was remotely economically feasible?

Sure they can. There used to be several cameras on the market that had global shutter and were relatively free of aliasing for around $1,000. That was back in the CCD days, and CCDs cost about 5x as much as CMOS for equal size, so sensors were generally fairly small. With the new crop of global shutter CMOS sensors, none of which are being used in camcorders, there's no reason it can't be done with a somewhat larger sensor now.

 

The road block in the current market is that a lot of processing power (and licensing fees) are added by capturing in a conventional video CODEC. Three color channels must be interpolated, gamma transformed, gamma corrected, white balanced, then converted to YCbCr, saturation boosted, knee adjusted etc. If it's a better camera, a "black frame" average image is subtracted from the source to remove fixed pattern noise. Then it has to be converted into whatever CODEC they want. ProRes is not especially processor intensive, but there's fees involved. H.264 is an open standard but requires massive processing power. All that has to be done internally and in real time, so corners have to be cut elsewhere. It's actually CHEAPER to have a minimal processor running fairly little code to capture the raw signal, even if storage requirements are a lot higher. That doesn't bother me, I already have a couple 240GB SSDs that could give me about an hour each of record time of raw HD. It's really no different from back in the film days where you had multiple magazines, except now we can dump an SSD to a laptop and reuse it on-site if need be.
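As a rough sanity check on that record-time claim, here is my own arithmetic, assuming uncompressed 10-bit packed raw (the actual packing and file overhead will vary):

```python
# Back-of-the-envelope storage math for uncompressed raw HD.
width, height, fps = 1920, 1080, 24
bits_per_photosite = 10                        # assuming 10-bit packed raw

bytes_per_frame = width * height * bits_per_photosite / 8
gb_per_hour = bytes_per_frame * fps * 3600 / 1e9
print(f"{gb_per_hour:.0f} GB per hour")        # ~224 GB/hour

# So a 240 GB SSD holds roughly an hour of 10-bit raw HD; 12-bit packed raw
# works out to about 269 GB/hour, i.e. a bit under an hour per drive.
```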

Edited by Stephen Baldassarre
Link to comment
Share on other sites

In my humble opinion, on a professional film set - if you're shooting above 800 ISO, you should have a very good reason to do so. I understand it can't always be avoided, but very few cameras actually look good at such an ISO. Ultimately, a lot of low-light performance comes down to pixel size relative to sensor size - and there are very few cameras that do that well. Even the BMPCC suffers, probably more than most - given they tried to stuff so many pixels on such a small sensor. ISO 1600 can look good on many cameras, but anything above that and it's going to start displaying all kinds of digital noise, fixed-pattern noise, and become more muddled.

Edited by Landon D. Parks
Link to comment
Share on other sites

That's what we've been discussing. Software solutions take time and don't do a perfect job. It's better to make a camera that doesn't have the problem. As for the aliasing issues, the solution is to use an OLPF. It's the only way to get optimal image quality.

 

I'm far from being an expert, but it seems like 2k is about as far as you can go with a s16 sized sensor, while still retaining a reasonable native ISO. At that resolution, wouldn't an OLPF need to be fairly strong to counter aliasing and moire? Wouldn't that in turn lead to a level of softness that was unacceptable by today's (very sharp) standards?

 

 

The road block in the current market is that a lot of processing power (and licensing fees) are added by capturing in a conventional video CODEC. Three color channels must be interpolated, gamma transformed, gamma corrected, white balanced, then converted to YCbCr, saturation boosted, knee adjusted etc. If it's a better camera, a "black frame" average image is subtracted from the source to remove fixed pattern noise. Then it has to be converted into whatever CODEC they want. ProRes is not especially processor intensive, but there's fees involved. H.264 is an open standard but requires massive processing power. All that has to be done internally and in real time, so corners have to be cut elsewhere. It's actually CHEAPER to have a minimal processor running fairly little code to capture the raw signal, even if storage requirements are a lot higher. That doesn't bother me, I already have a couple 240GB SSDs that could give me about an hour each of record time of raw HD. It's really no different from back in the film days where you had multiple magazines, except now we can dump an SSD to a laptop and reuse it on-site if need be.

I'm sure you're right in this, but it sounds like you are describing a camera that is nothing but a sensor, a lens mount, and a recorder. What about all the software and circuitry necessary for EVFs, monitor outputs etc? Can all this be achieved for your sub $1k target?

Link to comment
Share on other sites

  • Premium Member

We've had viable ways to avoid composite video issues since the early 80s for production.

Really, I'd love to see them. All of the acquisition formats that I know of were composite. 1", 3/4", even Betacam was pretty much composite, even though it did separate the chrominance from the luminance. It didn't matter; outside of the very top post production facilities, everything was composite. Yes, component switchers, monitors and such did exist in the very late 80s, but most people switched to serial digital. It wasn't until Digibeta that we had true portable ENG component capture and, as you may remember, those camera heads sucked! The analog heads were always so much better, but nobody used them. Still, Digibeta and HDCAM were both chroma subsampled capture devices, nowhere near the amount of color information we have today. The mere concept of 12 bit 444 wasn't even around until more recently.

 

I switched to digital in 1998/1999 ish and that was the first time I used component.

 

I find the current trend of making the look in post tedious, and it produces a very alien image that can be outright nauseating to me. So, I might make slight tweaks to white balance or level but that's it.

I agree, but this is the case with all digital mediums. I was doing color grading back in the Betacam days, just like I do today, only using TBCs instead of digital tools. When I switched to NLEs, I started grading really no differently than I do today. So that's 20 years of doing the same thing I do today.

 

I suspect "cinematic" is one of those meaningless buzz words like "warm" in the audio field. Many iconic movies were not crisp at all. Many DPs went out of their way to soften the image, especially on close-up shots, to the point where specialized diffusion filters were created to gradually soften the image while dollying into a tight shot of an actor.

I don't like crisp, I like "cinematic", a softer and more filmic image. I know I can't get "film" colors out of a CMOS camera; it's nearly impossible. However, I can still create a filmic look.

 

400 ISO isn't enough? Almost all of them have gain, you just don't have baked-in DNR like CMOS cameras do.

Nah, 400 ain’t enough as the only ISO. There are times you need 800, sometimes due to location issues, sometimes due to a look. I mean I push 5219 (500T) a stop when I need to and it comes out great. I think 500 - 600 would be fine as a base ISO, but 400 is just a bit too weak. 1600ISO isn’t “necessary”, but I think it’s important these days to have.

 

You're right in that there aren't any UHD CCD cameras (as far as I know) but 12-bit and 444 is somewhat common for industrial cameras.

Yea I guess some of the super high end 1080p ENG style cameras do, but I don't know if they have double stream HDSDI outputs.

 

While single-chip cameras may have a 444 option, that's not what the sensor is outputting, so I'm not sure why it's that important.

Well, it goes to the fact that 3 chip cameras have one up on single chip. They are the only true way of getting 444.

 

The problem with ALL single chips is the fact the colors are at best “half” the resolution. 4:2:2 is basically the best you can get out of any single imager.

 

Marketing mooks may have the brainless masses convinced that VistaVision sized imagers are the only way to get a "cinematic" look (there's that word again), but the fact of the matter is that the vast majority of film material has been produced in flat 1.85:1 35mm (21mm x 11.33mm) and Academy (21mm x 15.24mm).

Meh, I love the look of large imagers.

 

S16 is 7.41x12.52 and Academy 35mm is 16x22.

S35mm 3 perf is what I shoot these days and it’s 14x25 which is exactly how it’s presented digitally.

 

The difference is night and day, you can use longer glass to get wider shots which means less lens distortion over-all and a different less-flat look. With longer glass the differences are negligible, it’s only with wide stuff you really notice it and frankly, 16mm - 50mm is the range I generally use the most anyway on both formats.

 

Indoor shoots were routinely done between F5.6 and F8 unless there was a reason to do otherwise.

Not me, I’m usually on the borderline of wide open. I love shallow depth of field.

 

It appears the standard is 3 perf 35mm these days, with the occasional anamorphic show. (this includes digital imager size)

 

I’d accept a Super 35mm 3 perf sized imager on a digital cinema camera. Ohh wait, they make it and I’m gonna go buy one! LOL :P

 

Bear in mind, almost all broadcast is done with 2/3" sensors (regular 16mm sized) and you're personally pushing the BMPCC, which is S16 sized.

Yea, but that’s me and I can make S16 look good. Ya can’t do that to the general public.

 

OK. I've shot plenty of documentaries on conventional video cameras and even CP-16 without issues.

Issues? We're talking about people who "own" equipment here and I'm mentioning traveling with it. Put it to you another way, I couldn't afford to make movies if I needed to travel with an old school ENG camera everywhere I went. Right now I get off the plane at my destination with a backpack, tripod and a little roller. I can literally walk anywhere I want and it doesn't faze me. Show up to a city with cases of equipment, it slows everything down AND costs a lot more. It's $50 PER CASE to fly with camera equipment and even with my best packing, it's still around 3 cases per camera.

Link to comment
Share on other sites

The difference is night and day, you can use longer glass to get wider shots which means less lens distortion over-all and a different less-flat look.

Without wishing to derail the discussion, I must point out that there is zero proof for this. There is geometrically no difference between matching FoV on different formats, and any slight differences in distortion are likely down to the individual characteristics of the lenses used.

 

Here's a link to a forum post I made with some example images demonstrating this on different formats, and a link to another similar test made at ProVideoCoalition.

 

http://www.cinematography.com/index.php?showtopic=76028&p=489075
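To put numbers on the field-of-view point, here is a quick sketch of my own, using approximate gate widths rather than any particular camera's spec sheet:

```python
import math

# Match the horizontal field of view of a 25 mm lens on S16 against S35.
def hfov_deg(focal_mm, sensor_width_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

s16_width, s35_width = 12.52, 24.9        # approximate horizontal gate widths in mm

f_s16 = 25.0
f_s35 = f_s16 * (s35_width / s16_width)   # ~49.7 mm gives the same framing

print(f"S16 @ {f_s16:.1f} mm: {hfov_deg(f_s16, s16_width):.1f} deg")
print(f"S35 @ {f_s35:.1f} mm: {hfov_deg(f_s35, s35_width):.1f} deg")
# Both come out to ~28 degrees: matched FoV means identical geometry and perspective,
# so any remaining distortion differences come from the specific lenses, not the format.
```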

Link to comment
Share on other sites

I'm far from being an expert, but it seems like 2k is about as far as you can go with a s16 sized sensor, while still retaining a reasonable native ISO. At that resolution, wouldn't an OLPF need to be fairly strong to counter aliasing and moire? Wouldn't that in turn lead to a level of softness that was unacceptable by today's (very sharp) standards?

Ah, so, 2K on a 2/3" sensor (regular 16mm) is about as dense as you can get without sacrificing performance, this is true.

 

As for the OLPF, you have to figure out where your compromises need to be. With a conventional single-sensor camera doing its demosaic process internally (usually bilinear, nearest-neighbor in really cheap cameras), you'd have to limit the resolution to about 500 lines to avoid ALL aliasing (and that's what the first prosumer HD cameras did). However, you can "get away" with a less aggressive OLPF to allow for somewhat higher resolution while allowing a little aliasing that most people won't notice. Now, move the demosaic process to a computer, where you have much more power and no longer need to be real-time. You can use a more sophisticated algorithm that can figure out where true fine details are by looking at contrast changes across all three color channels and adjust its averaging from one pixel to the next. In the attached image, we have an image that (I believe) was shot on film, scanned to digital and converted to a Bayer pattern (left) to show what the processor has to do. The middle image is a bilinear interpolation, which fills in missing color information by taking the average of adjacent pixels. Despite the source image being free of aliases, there's pronounced aliasing in the center image because the Bayer pattern essentially sub-samples the image. On the right is a more processor-intensive algorithm that can be performed on a computer.

[Attached image: li_vs_dfpd.jpg — Bayer-mosaiced source (left), bilinear interpolation (middle), more advanced demosaic (right)]

So for a single-chip camera, one could use an OLPF that restricts the resolution to, say, 900 lines and have almost no aliasing upon output. A camera with no OLPF at all may resolve about 1000 lines with a really sharp lens, but any details above that get "folded" down into the sensor's resolution as aliases. The "sharpness" people perceive is not so much the extra 10% resolution, but false details that didn't exist in the real world. One can also increase perceived sharpness by increasing the contrast in the 800 line range. Vision3 film is actually lower resolution than any of its recent predecessors, but people say it's sharper because the contrast increases around the 40 ln/mm range, or roughly 600-800 lines if you're shooting on 35mm. Vision1 could easily resolve 5K (as opposed to V3's 4K) but it had a more gradual roll-off in contrast leading up to that 5K, so it looked less sharp. Of course, if you push the contrast of the fine details too far, it looks like cheap video.
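Purely to make the bilinear step above concrete, here is a minimal sketch of my own (assuming NumPy/SciPy; not the D16's or any camera's actual firmware):

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """raw: 2-D float array holding an RGGB Bayer mosaic."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3))
    # RGGB layout: R at (even,even), G at (even,odd) and (odd,even), B at (odd,odd)
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; known[0::2, 0::2, 0] = 1
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; known[0::2, 1::2, 1] = 1
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; known[1::2, 0::2, 1] = 1
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; known[1::2, 1::2, 2] = 1
    kernel = np.ones((3, 3))
    out = np.empty_like(rgb)
    for c in range(3):
        total = convolve2d(rgb[..., c], kernel, mode="same")
        count = convolve2d(known[..., c], kernel, mode="same")
        # Keep the measured sample where one exists; average neighbours elsewhere.
        out[..., c] = np.where(known[..., c] > 0, rgb[..., c],
                               total / np.maximum(count, 1))
    return out
```

Because the missing samples are just neighbour averages, anything finer than the mosaic's sampling grid folds back as the false detail visible in the middle panel of the attached image; the smarter, non-real-time algorithms mentioned above avoid much of that by following edges across all three channels.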

Sorry, I'm sure that's way more complicated of an explanation than you were hoping.

 

I'm sure you're right in this, but it sounds like you are describing a camera that is nothing but a sensor, a lens mount, and a recorder. What about all the software and circuitry necessary for EVFs, monitor outputs etc? Can all this be achieved for your sub $1k target?

My prototype will have an HDMI port, which will initially feed an inexpensive LCD panel. The preview will not be full resolution and possibly not even color, depending on how much processing overhead I have, but it will be enough to accurately frame the shot and that's better than I can say about 90% of the optical viewfinders out there. One might use the same port to feed an EVF, but that is a later experiment.

 

 

Really, I'd love to see them. All of the acquisition formats that I know of were composite. 1", 3/4", even Betacam was pretty much composite

Betacam is a true component system, roughly equivalent to 4:2:2. Almost any switcher could be converted to component by replacing the I/O cards. Betacam's main advantage is its component recording, so if you're going to use composite infrastructure, you would be in better shape sticking with 1".

 

 

Still, Digibeta and HDCAM were both chroma subsampled capture devices, nowhere near the amount of color information we have today. The mere concept of 12 bit 444 wasn’t even around until more recently.

Considering the best practical performance you'd get out of a single-chip camera is 4:2:0, I still don't understand why this is important. You said yourself, we aren't producing DCPs, we're producing ATSC, Netflix and YouTube video.

 

 

I switched to digital in 1998/1999 ish and that was the first time I used component.

That's the same time that I switched to digital, but I was using Y/C component before that. Less expensive than RGB or Y/Pb/Pr but sharper than composite and no dot crawl. I couldn't afford "professional" preview monitors, so I got a pair of C64 monitors for $20! I still have a slightly modded one and use it with my 16mm HD film chain. :p

 

Nah, 400 ain’t enough as the only ISO. There are times you need 800, sometimes due to location issues, sometimes due to a look. I mean I push 5219 (500T) a stop when I need to and it comes out great. I think 500 - 600 would be fine as a base ISO, but 400 is just a bit too weak.

OK, so use +6dB gain then. Either way, you have to live with some extra noise or add DNR and live with lost details. 500 ISO is not even 1/4 stop difference. Unless you're shooting on something like an Alexa, you're not going to get TRUE 800 ISO performance without compromise (I know what BM advertises, they're 400 ISO cameras). I find 400 is already difficult to shoot outdoors as the base ISO. You have to have a 3-stop ND just to get F16. If you want to open up to F8 or higher to get a sharper image, you better have a top quality ND filter to avoid more IR contamination.
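For what it's worth, the 3-stop figure checks out with a quick sunny-16 estimate (my own arithmetic, assuming 24 fps with a 180-degree shutter; real scenes obviously vary):

```python
import math

# Sunny-16: in full sun at ISO 400, proper exposure is f/16 at a 1/400 s shutter.
# A cine camera at 24 fps with a 180-degree shutter is stuck at roughly 1/48 s.
sunny16_shutter = 1 / 400
cine_shutter = 1 / 48

stops_over = math.log2(cine_shutter / sunny16_shutter)
print(f"{stops_over:.1f} stops of extra light")   # ~3.1 stops -> a 3-stop (0.9) ND to hold f/16
```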

 

 

Meh, I love the look of large imagers.

Yet your main camera is a BMPCC? There's nothing wrong with S16. You can get shallow DoF if you want or long DoF, depending on your lenses. The image plane is large enough to get sharp images out of mediocre lenses and really good lenses can be cheaper/more compact than 35mm lenses.

 

 

S16 is 7.41x12.52 and Academy 35mm is 16x22.

S35mm 3 perf is what I shoot these days and it’s 14x25 which is exactly how it’s presented digitally.

Don't forget the "guard band". The actual film apertures are slightly smaller than the available area to avoid problems (like seeing the sound track or perfs) due to gate weave on that $5,000 counterfeit Indian projector head they used at the $1 theater.

 

 

Not me, I’m usually on the borderline of wide open. I love shallow depth of field.

A very common trend these days. I'm not putting words in your mouth but I suspect most people do it as backlash against the 1/4" and 1/3" sensors previously found in prosumer devices. My budget minded contemporaries complained endlessly about not having selective focus in those days and we (me included) fought tooth and nail to get our lenses up to F2, which made for a pretty crappy image. In retrospect, I should have been aiming for optimal image quality rather than "film-like". Getting back on track, it's gotten to the point of being ridiculous, with product reviews being shot on full-frame DSLRs with long lenses at F1.4, ensuring only 1mm of said production is visible at a time. I suspect people will consider shallow DoF a very dated look in the future, just like how the advent of cheap digital reverbs and keyboards makes a lot of 80s songs sound dated now.
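For a sense of scale, here is a standard depth-of-field estimate with assumed numbers of my own (not a measurement of any particular review setup):

```python
# Full-frame, 85 mm lens at f/1.4, subject at 0.9 m, 0.03 mm circle of confusion.
f, N, s, coc = 85.0, 1.4, 900.0, 0.03            # all distances in mm

hyperfocal = f * f / (N * coc) + f               # ~172 m
near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
far = s * (hyperfocal - f) / (hyperfocal - s)
print(f"Total depth of field: {far - near:.1f} mm")   # on the order of 8-9 mm
```

A handful of millimetres in focus, so the "1mm" quip above is barely an exaggeration.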

 

Put it to you another way, I couldn’t afford to make movies if I needed to travel with an old school ENG camera everywhere I went.

I admit ENG cameras can be a bit cumbersome, particularly the support. However, I don't think I've ever heard somebody say "Awe man, I can't fit my camera in my pocket, better cancel the shoot!". My prototype camera should be about the size of a GL1, so it should fit in a decent sized bag and not draw too much attention from authorities on-location.

  • Upvote 2
Link to comment
Share on other sites

  • Premium Member

Cough.

 

Blackmagic 4K.

 

No rolling shutter.

 

Reasonably pocket-sized.

 

Micro four-thirds mount compatible with effectively anything.

 

Shoot it 4K and blow it down to HD for extra quality, or shoot it cropped for lens compatibility.

 

Not the biggest dynamic range numbers but perfectly usable.

 

P

Link to comment
Share on other sites

  • Premium Member

Either. They both look pretty similar to me (they may be identical from an imaging perspective, but I haven't tested.)

 

Our correspondent wanted something unobtrusive, though, so the little one may be the right choice.

Link to comment
Share on other sites

Getting back on track, it's gotten to the point of being ridiculous, with product reviews being shot on full-frame DSLRs with long lenses at F1.4, ensuring only 1mm of said production is visible at a time. I suspect people will consider shallow DoF a very dated look in the future, just like how the advent of cheap digital reverbs and keyboards makes a lot of 80s songs sound dated now.

 

A million times this.

Link to comment
Share on other sites

Cough

 

Is it $1000?

 

Does it have a proper optical block?

 

Probably not

 

I read your several walls of text and I can tell you're not going to be happy with your results. You're chasing the digital dragon.

But, you can still make money out of it. Sounds like you can make a heck of a Kickstarter pitch.

Link to comment
Share on other sites

The 4K original body is going for around $1,800 used - it's no longer in production. The URSA with the 4K sensor is going for around $3,000 - the original 'new' price-point of the original 4K. So in a sense, you could say the URSA with the 4K sensor has 'replaced' the 4K Production Camera. Just know that the original 4K sensor has some major issues with fixed pattern noise that were never truly fixed by Blackmagic. Yes, it can be avoided by avoiding high ISO and avoiding dark areas, and even some post work, but that is still a glaring issue that would prevent me from purchasing.

 

Honestly, if you can still find one, the original 'Blackmagic Cinema Camera 2.5K' is still a gem - with some of the best color reproduction of any of the Blackmagics - and it has enough resolution to get a real 2K image rather than just 1080p.

Edited by Landon D. Parks
Link to comment
Share on other sites
