Everything posted by Ted Johanson

  1. Looking at the comparison images in post #7, I'm surprised nobody has mentioned that the HVX200 actually appears to be holding highlights better than the RED. :huh:
  2. I noticed that some person on this forum (who will remain unnamed) made a ridiculous statement about Bayer sensors. I intend to put that lie to death with real-world examples. Please visit the site below to see resolution tests comparing a Bayer-based sensor to a true full-color-sampling sensor: http://www.ddisoftware.com/sd14-5d/

These resolution tests involved a Canon 20D (8.2 megapixels) and a Sigma SD14 (4.6 megapixels). The difference between these two cameras is that the SD14 uses a Foveon sensor, which is capable of full-color sampling at each pixel site. The 20D can only sample one color at each pixel site.

Mathematics would suggest that the resolution of a Bayer-filtered image is only worth half of the advertised pixel count, because half of the "pixels" on the chip are green. Why are there more green than red or blue? Because green is more important to the human visual system when perceiving detail; that's why Bryce Bayer chose to give more resolving power to the green component. However, the green pixels aren't always "stimulated", leaving the red or blue "pixels" to fend for themselves. In that case, the resolution can be as low as 1/4 of the advertised pixel count.

Can the human visual system tell the difference in that case? Yes! And even if it couldn't tell the difference between a two megapixel red image and a four megapixel green image, that still doesn't negate the fact that such poor sampling resolution is a farce for something that is supposedly so perfect. What would happen if you wanted to use the red channel alone for a black-and-white conversion? Or what if you wanted to enlarge the image, whether digitally or by physically stepping closer to it? The decreased resolution would stick out like a sore thumb. It doesn't matter whether a human notices the difference, because certain other (computer) processes need all the resolution they can get.

So, according to the methodology outlined above, an 8 "megapixel" Bayer sensor really only has 4 megapixels of true resolution at best. Furthermore, Bayer pattern sensors require low-pass (blur) filters to reduce color artifacts on edges, which further reduces the sensor's ability to resolve fine detail.

This all seems to ring true judging by the test images. At the site linked above, scroll down to the resolution pinwheels. Note that the Canon 20D, which supposedly has almost twice as many pixels, really only matches the SD14's resolution in the white and green quadrants of the pinwheel. Meanwhile, the Sigma SD14 - in spite of its lower pixel count - clearly outresolves the 20D in the red and blue quadrants! Also of note: the 20D's images are significantly softer across the board.

With all that said, anyone with a decent pair of eyes should be able to see that an image from a Bayer-based digital camera - when viewed at 1:1 pixel magnification - is nowhere near as sharp as the same image viewed at 1:2 pixel magnification (50% zoom). Doesn't that tell them something? It should be obvious that real pixels can do a lot better.
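If you want to sanity-check the "half at best, quarter at worst" arithmetic yourself, here's a trivial sketch (my own toy code, nothing to do with any camera maker's software):

```python
# Split an advertised "megapixel" count into per-color Bayer photosite counts.
def bayer_site_counts(total_photosites: int) -> dict:
    return {
        "green": total_photosites // 2,  # half of the sites sample green
        "red": total_photosites // 4,    # one quarter sample red
        "blue": total_photosites // 4,   # one quarter sample blue
    }

print(bayer_site_counts(8_200_000))  # e.g. the Canon 20D's 8.2 "megapixels"
# {'green': 4100000, 'red': 2050000, 'blue': 2050000}
# i.e. 1/2 the advertised count at best (green), 1/4 at worst (red or blue)
```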
  3. I suppose it is obvious...to some extent. But as you mentioned, it sometimes goes overboard. In my opinion, when it does go overboard, it goes too far overboard (maybe even taking a few innocent people with it :lol: ). It's a distracting look. The show's already hard enough to watch because of the seemingly hopeless outlook for the good side. With all due respect, it seems silly to intentionally go to the other extreme in lighting and contrast just because a video camera is being used. That's almost like stepping on the accelerator when you realize there's no hope of avoiding the brick wall straight ahead. Or it's like a person with poor vision deciding they might as well go all the way and get glasses that blur their vision further. Is that contrasty look actually recorded that way onto the original camera tape?
  4. Jan, please tell me why BG's particular shooting style and image aesthetics were chosen.
  5. I've heard so many people express my exact thoughts: that show has a repulsive look to it. There seems to be no special consideration for lighting, which is made all the more obvious by the exaggerated, burned-out video camera appearance. The jerky, always-zooming shooting style is akin to (very) amateur home videos. Ooooh, I'm impressed (NOT!). Is that supposed to make me think they don't make mistakes? Is that supposed to mean they're right and the VAST MAJORITY of people are wrong? Seriously, do you think that show has a good look to it? Do you really think all of that jerky camera work is necessary? Do you think the ultra-contrasty look is impressing and even attracting viewers? These guys probably thought they'd try something new and take the entertainment world by storm; once they had that horrible look going, they couldn't stop it for consistency reasons. It's funny, that show all too often has scenes that aren't even supposed to be dramatic, and yet the camera is whipping all over the place. If that's supposed to make you feel like you're there, since when can you zoom your vision? Why would you swing your head around as if excited to look at someone when nothing exciting is happening? If it's supposed to look like ENG footage, I've never seen an ENG camera operator who so carelessly swings the camera, zooms far too often during the take, and even has trouble deciding what zoom position to settle on WHILE shooting. "The Peabody Awards are generally regarded as the most prestigious awards honoring distinction..." Battlestar Galactica is distinct all right! It's no wonder they got a Peabody Award.
  6. It's funny you should mention that show! It has to be one of the worst-shot television series ever! What's up with that stupid exaggerated video camera look anyway? And what about the water-hose-style shooting; did the camera operator have a little too much caffeine? Producer: "We bought that new $15,000 zoom lens and I don't want it going to waste. I want to see it used more! Now, now, NOW!" DOP: "Yes, sir. Right away, sir. I'll use it in every shot." Producer: "And make sure you ZOOM it during each take! I want the audience to notice we have a zoom lens. Otherwise, we might as well have bought a prime lens!" As Richard mentions, they had their reasons for shooting it in HD. They probably figured there was no point in using the quality of 35mm film when their intention was to destroy the image anyway. Also, it's a series produced for the low-budget Sci-Fi Channel (which seems to have had a run of F900-shot productions lately). How can I tell on my SD TV? By looking at the skin tones and contrast range. It's either very poorly graded film material or it's the typical result from the F900. Battlestar Galactica...probably one of the last series that Sony ever wants credited to their video cameras! -Ted Johanson
  7. I'll just assume you meant to say "contrast range" or "dynamic range" rather than "latitude". Latitude is dependent upon - but not equal to - contrast range; there is a big difference between the two. -Ted Johanson
  8. You make a good point (quite funny, too). But it's not so much the lie (or the marketing, if you prefer) that bothers me; it's that it comes on top of the other things I've noticed. If this camera turns out to be as great as claimed, they won't need any marketing anyway - this thing would spread by word of mouth like wildfire. Adding misleading statements to their marketing doesn't help their credibility at all. Anyway, I'll take your advice and just silently wait this one out. -Ted Johanson
  9. That was to say that there are cameras out there that can make images "tack sharp" at 100%. You said, "If you're looking at a 100% crop on a computer monitor, it's never going to look razor-sharp." I simply pointed out that you are wrong: there are cameras capable of using (very nearly) the full potential of every pixel in the output image. Actually, they were posted four months ago, and there haven't been any new ones posted since. I have to wonder what's up with that. There are cameras out there that can parallel it in terms of image fidelity. Call it marketing hype if you wish, but that doesn't change the fact that their claim (of unparalleled fidelity) is a lie. -Ted Johanson
  10. Let's just put it this way...I've never seen a digital SLR produce results that soft. Have you ever viewed a non-Bayer-captured image at 100%? Take the Sigma SD10, for example: its images are very sharp at 100%. These images are quite obviously nowhere near the claimed 4520-pixel resolution. Naturally, they wouldn't be, due to the use of Bayer pattern filtering; but there seems to be more to it than that. Maybe they're using an anti-alias filter which is too strong. I agree that it's not hugely smaller, but it is still significantly smaller. Pixel size is generally accepted as a good way for the end user to compare the contrast handling capabilities of one electronic sensor against another. The fact is that Red has pixels smaller than those of the typical digital SLR (which aren't too impressive in the contrast range department to begin with). The Origin at least exists in a "production" version that has been tested by numerous people. Its specs are very nearly identical to Red's proposed specs, and therefore it should be capable of paralleling Red. In fact, I know of one test which shows the Origin's contrast range being much greater than Red's. Of course, the Red camera was a prototype. Even at that, the Red team has a long way to go if they ever intend to get up to speed with the Origin. Hmmm...I guess I already listed some of them in my first post. -Ted Johanson
  11. Nope. I'm just someone asking questions about some obvious problems with Red. -Ted Johanson
  12. Has anyone else noticed how soft the so-called 4k image samples are on the Red website? "Ultra High Definition" they are not! It is claimed that 29 sq. micron pixels are used. That's significantly smaller than the pixel size employed in all typical digital SLRs. Has anyone noticed how ergonomically incorrect the camera appears to be? "Mysterium™ puts pure digital Ultra-High Def in the palm of your hand." Ha! Why did they choose to make a 300mm prime lens their first lens? Hasn't anyone started to wonder yet why the Red website still doesn't have any real pictures of an actually existing camera? The Red website claims the camera provides "unparalleled fidelity"; I know the Dalsa Origin can easily parallel Red. What's up with all of these management mistakes (lies, poor choice of lenses, more lies, misleading statements, immature promises, etc., etc.)? -Ted Johanson
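For anyone who wants to compare those 29 sq. micron pixels to a typical DSLR, the arithmetic is simple (a quick sketch of my own; the 20D figures are assumed from its published sensor specs):

```python
import math

red_area_um2 = 29.0                  # Red's claimed per-pixel area
red_pitch = math.sqrt(red_area_um2)  # pitch is the square root of the area

# Canon 20D for comparison: 22.5 mm wide sensor, 3504 photosites across
dslr_pitch = 22.5 * 1000 / 3504      # microns per photosite
dslr_area = dslr_pitch ** 2

print(f"Red: {red_pitch:.1f} um pitch ({red_area_um2:.0f} sq. microns)")  # ~5.4 um
print(f"20D: {dslr_pitch:.1f} um pitch ({dslr_area:.0f} sq. microns)")    # ~6.4 um, ~41 sq. microns
```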
  13. You must be joking! Four years?! I see consistently equal or better results from Canon digital SLRs on a regular basis. Most notably, I don't see "color sparklies" as often as I see them in the Origin samples, in spite of its obviously heavy use of low-pass filtering. Perhaps that's what's causing the problem: excessive blurring of the image is giving the edge detection a hard time.

How good do you think this stuff (Bayer reconstruction) can get anyway? You've got a limited amount of data to work from, and that's the way it will always be. No amount of mathematical ingenuity can truly replace lost detail. If it can, then you might as well get to work on an algorithm that can also restore lost highlight detail.

I've seen images from many brands of cameras...mostly Canon, Kodak, Nikon, Sony, Minolta. They're all generally the same. Some of the older cameras seem to have used a very inaccurate interpolation in which an overly complicated method was used to determine the direction of an edge and fill in the missing "line". I tried that interpolation method once, and it produced the most idiotic-looking results I have ever seen. It really messes up random, high-frequency details.

That's obvious...we've all seen spectral response curves. The point is that they are all too often not overlapped enough to provide sufficient detail to the green channel. This happens a lot more than many people would like to admit: e.g. red signs or letters against a dark background. The darker the background, the worse the effect. The end result is that the red on the object looks as if it had been sub-sampled by JPEG compression.

Nobody ever said the technology is useless. While it may look acceptable, it is a far cry from the accuracy of film or 3CCD systems (please don't mention all of the technical problems with 3CCDs).

True, but at least it happens in a more natural and pleasing way. There are no bluntly obvious monotone edge halos, because edges aren't deliberately copied from one channel to another in a desperate attempt to hide color sparklies and low resolution.

Again, true. But this all happens in a truly natural process. It is the effect of optical softening and is most prominent in the red channel. The difference in resolution between the green and red layers in film is nowhere near as pronounced as the problems induced by wildly unnatural Bayer reconstruction algorithms. I don't think it is anywhere near as noticeable as you make it out to be. The simple fact remains that edge copying has a FAR more pronounced effect on the saturation of high-frequency details; far more so than color crosstalk and layer softness combined.

Really? Have you never seen a Kodachrome or Velvia slide projected? And what about this image... It is a crop from an image shot on FujiFilm Superia 1600. I didn't increase the saturation; all I did was scan and color balance it. I think this high-speed film has surprisingly good separation. Would you really want the red to be any more saturated than that? It looks quite accurate compared to the original scene.

No, I don't know that. Your statement seems to imply that Bayer reconstruction algorithms will improve forever. How can that be possible? Does that mean that someday we'll be using 1 PIXEL cameras which use an extremely advanced alien algorithm to construct a 10 million pixel image filled with incredible details and colors from the original scene?

I know that. That's why I said "one of the major causes" or something to that effect.

What do you expect? A 33% increase in resolution isn't going to work wonders; it will provide "little improvement in resolution", nothing more. -Ted Johanson
  14. Well, if a Super35 frame is capable of resolving 4k, then vertically run 65mm negative is capable of resolving 8k, and IMAX is capable of resolving 12k. This of course doesn't take into account things like focus, depth of field, lens quality, etc. Just a bit of trivia here: if one scans IMAX at 12k, 10-bit, then IMAX film uses the equivalent of roughly 420 megabytes per frame, 10 gigabytes per second (at 24 fps), and about 24 terabytes for a typical-length IMAX feature (40 minutes). WOW! -Ted Johanson
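Here's the back-of-the-envelope math behind those figures (my own sketch; I'm assuming a 12,000 x 8,700 pixel scan of the 15/70 frame and 3 color channels at 10 bits each, so adjust the assumptions as you see fit):

```python
width, height = 12_000, 8_700   # assumed 12k scan dimensions for 15/70 IMAX
channels, bits = 3, 10          # RGB at 10 bits per channel

bytes_per_frame = width * height * channels * bits / 8
bytes_per_second = bytes_per_frame * 24          # 24 fps
bytes_per_feature = bytes_per_second * 40 * 60   # 40-minute feature

print(f"{bytes_per_frame / 1e6:.0f} MB/frame")     # ~392 MB
print(f"{bytes_per_second / 1e9:.1f} GB/s")        # ~9.4 GB/s
print(f"{bytes_per_feature / 1e12:.1f} TB total")  # ~22.6 TB
```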
  15. Well, obviously not the one Dalsa is using for the Origin, judging by the examples I have seen. They appear to be using some more generic algorithm which simply fills in the voids using neighboring "pixels". Take a look at the red channel in an image from (almost) any digital camera. How do you think it could be such high resolution even though only 1/4 of the photodiodes were actually red? Did you ever notice that where there isn't an edge in the green channel, the red channel is left to fend for itself and therefore looks extra pixelated? Did you ever notice that some edges in the red and blue channels have a luminance that is exactly the same as that of the same edge in the green channel? Ultimately this process causes high-frequency details to become desaturated. I wouldn't call this method "simplistic"; I found it much harder to duplicate than the regular Bayer reconstruction algorithms. That's right, I have actual experience in building these algorithms. Haven't you ever noticed all of those edge artifacts in digital camera images - do you think they're simply caused by over-sharpening or something? If you have any experience at duplicating the results of Bayer demosaicing algorithms and have a better explanation for the edge artifacts, I'd love to hear your side of the story. True; but that also applies to the digital back as well. I'm sure there are lenses out there capable of pushing MF film to the limit. 30 megapixels is quite a conservative figure for medium format - especially considering the capability of the sensor itself, without the limitations imposed upon it by the rest of the system. -Ted Johanson
  16. Very simple. I regularly shoot 135 negative film and scan it at 10 megapixels. Since a film scanner is used, it obtains a true full-color sample for every pixel location. This means the scanner produces true 10 megapixel resolution for each scan. The scanner can actually do 40 megapixel scans, and at that resolution I find it possible to reveal much more detail on the film than the TRUE 10 megapixel scan can. By the way, a 10 megapixel scan of 135 format is about the equivalent of a 2.5k scan of Super35 film - and I know everyone here agrees that Super35 is easily worth more than 2.5k! Anyway, assuming 135 format is only worth 10 TRUE megapixels, then by simple math, a negative with ~4 times as much area can produce ~4 times as many "pixels".

As for the digital camera's resolution not being the full 33 megapixels, that simply has to do with Bayer pattern sampling. It is the green photodiodes which give a digital camera most of its detail. In a Bayer pattern, the green photodiodes number half the total "pixel" count of the camera; the remaining half of the photodiodes are evenly divided between red and blue. Typical Bayer demosaicing algorithms will actually copy edges from the green channel of the image into the red and blue channels. The green channel's edges are used because the green channel has more resolution (after all, more photodiodes were used for it). This helps remove some of the unwanted "color sparkle" edge artifacts that are inherent in Bayer-captured images, and it helps the red and blue channels appear to have greater resolution. This process is a major cause of edge halos, because all too often an edge with one luminance is copied into another channel where the same edge has a conflicting luminance. This is why edge halos are usually gray. Under the worst conditions (e.g. red objects on a black background), the green channel will not contain any detail. Therefore, there are no edges to copy from the green channel into the red channel, and the image will only have a resolution (pixel count) as great as the number of red photodiodes: 1/4 of the advertised megapixels. -Ted Johanson
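To see why the red and blue channels need so much help in the first place, here is a toy bilinear demosaic (my own sketch; real cameras use fancier edge-directed methods, and I'm certainly not claiming this is Dalsa's or Canon's actual algorithm). Red and blue each start from only a quarter of the photosites and have to be interpolated from their neighbors:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """raw: 2-D sensor mosaic in an RGGB Bayer layout. Returns H x W x 3 RGB."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # red photosites: 1/4 of the chip
    masks[0::2, 1::2, 1] = 1   # green photosites...
    masks[1::2, 0::2, 1] = 1   # ...together 1/2 of the chip
    masks[1::2, 1::2, 2] = 1   # blue photosites: 1/4 of the chip

    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        # Normalized convolution: average whatever same-color samples
        # fall inside the 3x3 window around each output pixel.
        num = convolve2d(raw * masks[..., c], kernel, mode="same")
        den = convolve2d(masks[..., c], kernel, mode="same")
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb

# For a red detail on a black background, the green and blue sites record
# nothing useful, so red must be rebuilt from 1/4 of the sites -- exactly
# the worst-case resolution loss described above.
```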
  17. Do you seriously think the resolution of MF film is less than that of a 22 or even 33 mega"pixel" Bayer-based sensor? They may be equal to or better than MF film in terms of grain, but that's as far as it goes. The resolution is not better, and neither is the dynamic range. The same can be said for a typical digital SLR compared to 35mm film. A 33MP Bayer-based back has a resolution no greater than 16.5 true MP under ideal conditions and no less than 8.25 true MP under the very worst conditions, while MF film has at the very least 30 true MP of resolution (no one would dare dispute that figure as being too high). Saying you are a film guy doesn't impress me much. Many people I know who said the same thing all too often churn out excessively grainy film scans from a flatbed scanner; some have even managed to make MF scans look far grainier than my typical everyday 35mm scans. Let's get to the heart of the matter...just how much does that back (it's not even a full camera!) cost? Just because you can get an insanely expensive still camera with a medium format sensor to capture a single 22MP frame every second or so doesn't mean you can get a Super35-sized sensor to capture 22MP frames at 24 frames per second, let alone 150+ frames per second, for minutes on end. You would not only need to deal with the massive bandwidth requirements, but also with the inherent thermal issues of such a high-performance sensor. Question for one of the Dalsa team: how exactly does one go about archiving Origin-originated material?
  18. Who can we believe? I seem to recall John Pytlak saying that Kodak sold more film recently than ever before. It seems to me that massive film sales are the only thing keeping Kodak afloat; if they had to rely solely on their digital camera and sensor sales, they'd be far worse off than they are now. But naturally the crAP articles never mention that. Instead, they are always VERY obviously biased against film. What's dragging Kodak under? Not their film sales, but rather their insane expenditures to "go digital".

Besides, even if Kodak is going under, I'm not surprised. I completely stopped using Kodak still film more than a year ago and chose FujiFilm's still film products instead. In my opinion, their still film is superior and significantly less expensive. For the price reasons alone, I'm surprised anybody buys Kodak still film anymore. FujiFilm also cares enough about their still film market to keep it updated with their latest technologies from motion picture stocks. They also don't discontinue great films left and right (which Kodak has always done, even before the "digital revolution"). FujiFilm continues to beat Kodak still films in every respect, whether it be image quality, speed, a full product lineup, etc.

Anyway, back to the Dalsa sensor. As Jim Murdoch has mentioned a few times, you're not going to get all of the supposed "4k" image size off of that sensor, due to the oversized chip. Second, because it uses Bayer sampling, it is NOT going to give true 4k resolution anyway, even if you could use the whole chip. Simple physics and math dictate that the sensor can provide ABSOLUTELY no more than approximately 3k resolution under the most ideal conditions and no less than 2k under the worst conditions.

Get your terms straight. It's ignorant people like you who created the megapixel myth that led people to believe they'd get more detail from their digital cameras than they actually end up with. And now, thanks to the blind acceptance of the term "megapixels" for Bayer-sensor-produced images, we seem to be permanently stuck with a misleading term. And now the "digital revolution" is trying to drag that misleading term into the motion picture industry. A color-blind photodiode is NOT a pixel, no matter what kind of color interpolation algorithm you try to use on it.

Next up, let's consider the effect of that high-strength low-pass filter being used on the Origin. That has a far greater effect on sharpness and image quality than a film scanner lens ever will. Oh, but that's right, you MUST have that low-pass filter to quite literally blur the Bayer-pattern-induced artifacts out of the image. Even in spite of that, I can still quite easily see Bayer demosaicing artifacts. And that brings up another point. Has anyone else noticed how inferior Dalsa's demosaicing algorithm is compared to Canon's, for example? I can quite easily see Bayer demosaicing artifacts in spite of their heavy low-pass filtering. At least Canon's images have edge artifacts that are more natural, such as edge halos rather than "color sparklies". I cannot believe how readily some people accept Bayer-based image sensing. Is that really the best electronic technology can provide after all this time?! "Dalsa: Fidelity Beyond Film"... What a joke! -Ted Johanson

PS. How convenient that Dalsa could only post 2k latitude test images on their website. Their excuse is probably "4k images would be too big to download, and we need to maintain full image quality by not compressing the images". JPEG compression is not a concern when all you are doing is displaying pre-corrected latitude tests. In reality, their "4k" latitude test images would really show off how noisy that sensor gets when underexposed. It's already blatantly obvious in the 2k samples.
  19. I have to agree with Karl. Some professionals may be using XDCAM now, but that doesn't make XDCAM a truly professional format. All that shows is that Sony will release a thousand different video formats every year AND that some professionals have a nasty habit of remaining loyal to Sony by buying their products until the day they die. Using optical discs for professional work is driven by nothing more than peer pressure, the "cool factor", or even simple ignorance. The future of professional video recording, as I see it, is solid-state devices such as memory cards. Panasonic has the right idea! Solid-state recorders are much more reliable under adverse conditions such as roller coaster rides. There is never any worry about dust (or another surface defect) on the disc or lens causing a recording error. The bandwidth capability is far greater. They consume less power. There is no such thing as "spin-up time". There is no motor whirring to be picked up by a camera-mounted mic. Etc., etc., etc. It's kind of like all of those people who buy DVD camcorders. They don't care that MiniDV provides better image quality, longer recording time, less expense, and more reliable recording! Instead, all they can think is, "My peers sure would hold me in high esteem if I had a camcorder that recorded to DVD! Imagine how cool I'd look having the latest technology!" Yep, XDCAM is very likely to be another one of Sony's (typically very) temporary video formats. I could list all of Sony's failed formats...but I'd probably exceed the post length limit. :D -Ted Johanson
  20. Hi Jim, Are you sure "Click" was shot using the Genesis? I downloaded the 1080p version of the trailer for "Click" from Apple's website. The first thing I noticed (in a technical sense) was that the resolution didn't look at all like true 1080p; it looked more like 720p interpolated to 1080p. I also noticed that some of the scenes appeared quite grainy, with what looked like grain reduction applied. Maybe Apple got their hands on a cheap copy for re-encoding, hence the relatively poor resolution. Maybe that grain was introduced when the prints were made - perhaps the prints were scanned into a digital format which was used by Apple. And perhaps that grain-reduction look is caused solely by H.264 compression artifacts. Who knows? Can anyone shed more light on this subject? :-) -Ted Johanson
  21. How do all of you Nikon users feel about the mechanical diaphragm linkage between the camera body and lens? Have you ever had problems with it being bent or anything like that? -Ted Johanson
  22. I checked the technical info for the Portra stocks and it appears that something in that lineup was updated (judging by the date in the top right corner). However, I can't see any mention of "two-electron sensitization". The "Print Grain Index" values for all Portra stocks remain the same as well. -Ted Johanson
  23. Mr. Pytlak, Has Kodak finally managed to apply their "two-electron sensitization" technology to any of their still films yet? Over in FujiFilm world, people are enjoying improved technology in the Pro160S/C stocks. FujiFilm uses the same names for their technologies regardless of whether they are applied to still picture films or not. The new Pro160S/C stocks both use "Super Nano-Structured Sigma Grain", "Super Efficient DIR Coupler", and "Super Efficient Coupler" technologies; the same three technologies which are used in the new Eterna 500 motion picture stock. Those three technologies are what place the Eterna stocks on par with the Vision2 stocks. By the way, those Pro160S/C stocks also use 4th Color Layer technology. So, did Kodak already employ "two-electron sensitization" technology in a still film but foolishly call it something else? OR...are they just a little slow in the still picture film department? -Ted Johanson
  24. Hi Jim, As I see from one of your earlier posts in this thread, you remembered my post from a year ago in which I gave my explanation for the Genesis using that particular sensor design. For everyone who didn't see that post, here's my explanation...

Every first row of pixels overexposes to capture shadow detail, while every second row of pixels underexposes to capture highlight detail (see attachment). So, if you have 12,440,800 "pixels" (really photodiodes) to begin with, and you average every two rows of photodiodes into one row, you are left with 6,220,400 photodiodes. Then, because you need a red, a green, and a blue photodiode to produce a single pixel, you divide 6,220,400 by three, arriving at an approximately 2k, full-color image. It's as simple as that.

This seems to be the best explanation for using 12 million photodiodes to produce a 2 million pixel image. It would be the most advantageous use of having multiple photodiodes for each primary color of every pixel, and it accounts for the increased contrast handling capabilities of the Genesis. And now that I think more about it, Jim is right in saying it would be too complex to build a chip in which the actual sensitivity of each row of pixels differs from that of the adjacent row. So, instead, they most likely place strips of ND filter over every other row of pixels so those rows can capture highlights better. However, after averaging the two rows together, the image would appear underexposed. The simple solution to that problem would be to recalibrate the sensitivity of the entire sensor to counteract the darkening caused by the ND filters. I think those are the (former) "secrets" John Galt couldn't reveal. -Ted Johanson
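The arithmetic, spelled out (just my own back-of-the-envelope check of the explanation above, nothing official from Panavision):

```python
photodiodes = 12_440_800

rows_averaged = photodiodes // 2        # pair each over/underexposed row
full_color_pixels = rows_averaged // 3  # one R, one G, one B per output pixel

print(full_color_pixels)                # 2,073,466
print(1920 * 1080)                      # 2,073,600 -- almost exactly 1080p
```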
  25. A 13-stop range?!!! Where did that specification come from? What are we getting now with the typical prosumer video cameras; 7 stops, maybe 8 stops, tops?

Let's consider one of the options for increasing dynamic range on an electronic chip: increasing the electron well capacity. Because one extra stop of light has twice as many photons, and capturing twice as many photons requires twice the electron capacity, increasing the dynamic range of each "pixel" by 5 stops(!) means the electron capacity needs to be 32 times greater. If it's 6 stops, it needs to be 64 times greater, and so on. Perhaps those electron wells are akin to the TARDIS from the British television series "Doctor Who". Small on the outside, big on the inside, eh?

Let's consider another option. It is possible to multi-sample each pixel for every frame. The first sample is an underexposure, the second is an overexposure. The first captures the highlight detail, the second captures the shadow detail. The two exposures are then blended together for an increased dynamic range effect. There are three ways to achieve the under/overexposure. The first option is to increase or decrease the sensitivity of each pixel accordingly. The second is to alter the exposure time accordingly. The third, and most idiotic, option is to install microscopic irises over each pixel. All three options require the two exposures to be temporally offset, which would result in highlights being misaligned with shadows whenever there is motion. The second option (altering exposure time) would also result in shadows having more motion blur than highlights. And all three options remove the possibility of having a 360 degree shutter.

Another option is to use multiple sensors for one pixel. This approach degrades the accuracy of detail, compounds the already existent detail sampling inaccuracies of Bayer pattern CFA sensors, and reduces the maximum resolution.

So exactly what is it about CMOS sensors that makes them so superior to CCDs? Oh, wait...I can answer this one! They are superior simply because they are used in the best digital SLRs. A 13-stop range? I don't think so! The biggest reason for using CMOS sensors is their reduced cost. -Ted Johanson
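The well-capacity math above is easy to verify (a trivial sketch of my own; each added stop doubles the photons a pixel must hold before clipping):

```python
def extra_well_capacity(extra_stops: int) -> int:
    """Factor by which full-well capacity must grow to gain N more stops."""
    return 2 ** extra_stops

for stops in (5, 6):
    print(f"+{stops} stops -> {extra_well_capacity(stops)}x electron capacity")
# +5 stops -> 32x electron capacity
# +6 stops -> 64x electron capacity
```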