
Ted Johanson

Basic Member
  • Posts: 46
  • Joined
  • Last visited

About Ted Johanson

  • Birthday 07/29/1984

Profile Information

  • Occupation: Student
  • Location: Wisconsin
  • Specialties: Still photography, cinematography, computer programming

  1. Looking at the comparison images in post #7, I'm surprised nobody has mentioned that the HVX200 actually appears to be holding the highlights better than the RED. :huh:
  2. I noticed that some person on this forum (who will remain unnamed) made a ridiculous statement, and I intend to put that lie to death with the truth of real-world examples. Please visit the site below to see resolution tests comparing a Bayer-based sensor to a true full-color sensor: http://www.ddisoftware.com/sd14-5d/

     These resolution tests involved a Canon 20D (8.2 megapixels) and a Sigma SD14 (4.6 megapixels). The difference between the two cameras is that the SD14 uses a Foveon sensor, which samples full color at each pixel site, while the 20D can only sample one color at each pixel site.

     Mathematics would suggest that the resolution of a Bayer-filtered image is only worth half of the advertised pixel count, because half of the "pixels" on the chip are green. Why are there more green sites than red or blue? Because green is more important to the human visual system when perceiving detail; that's why Bryce Bayer chose to give more resolving power to the green component. However, the green pixels aren't always "stimulated", leaving the red or blue "pixels" to fend for themselves. In that case, the resolution can be as low as 1/4 of the advertised pixel count. Can the human visual system tell the difference in that case? Yes!

     Even if the human visual system couldn't tell the difference in resolution between a two-megapixel red image and a four-megapixel green image, that still doesn't negate the fact that such poor sampling resolution is a farce for something that is supposedly so perfect. What would happen if you wanted to use the red channel alone for a black-and-white conversion? Or what if you wanted to enlarge the image, whether digitally or by physically stepping closer to it? The decreased resolution would stick out like a sore thumb. It doesn't matter whether a human can notice the difference or not, because certain other (computer) processes need all the resolution they can get.

     So, by the reasoning above, an 8 "megapixel" Bayer sensor really has only 4 megapixels of true resolution at best. Furthermore, Bayer-pattern sensors require low-pass (blur) filters to reduce color artifacts on edges, which further reduces the sensor's ability to resolve fine detail.

     This all seems to ring true judging by the test images. At the site linked above, scroll down to the resolution pinwheels. Note that the Canon 20D, which supposedly has almost twice as many pixels, really only matches the SD14 in the white and green quadrants of the pinwheel. Meanwhile, the Sigma SD14 - in spite of its lower pixel count - clearly out-resolves the 20D in the red and blue quadrants! Also of note: the 20D's images are significantly softer across the board!

     With all that said, anyone with a decent pair of eyes should be able to see that an image from a Bayer-based digital camera, viewed at 1:1 pixel magnification, is nowhere near as sharp as the same image viewed at 1:2 magnification (50% zoom). Doesn't that tell them something? It should be obvious that real pixels can do a lot better.
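     To put rough numbers on the sampling argument above, here is a minimal sketch in Python. The 8-megapixel figure is taken from the post; the per-channel split simply follows from the 2x2 RGGB tile.

        # Per-channel sample counts on a Bayer (RGGB) sensor:
        # half the sites are green, and red and blue get a quarter each.
        advertised = 8_000_000
        green = advertised // 2
        red = advertised // 4
        blue = advertised // 4
        print(f"green: {green:,}  red: {red:,}  blue: {blue:,}")
        # -> green: 4,000,000  red: 2,000,000  blue: 2,000,000
        # A subject that stimulates only the red channel is therefore
        # sampled at a quarter of the advertised count; the other three
        # quarters of its "pixels" must be interpolated.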
  3. I suppose it is obvious...to some extent. But as you mentioned, it sometimes goes overboard. In my opinion, when it does go overboard, it goes too far overboard (maybe even taking a few innocent people with it :lol: ). It's a distracting look. The show is already hard enough to watch because of the seemingly hopeless outlook for the good side. With all due respect, it seems silly to intentionally go to the other extreme in lighting and contrast just because a video camera is being used. That's almost like stepping on the accelerator when you realize there's no hope of avoiding the brick wall straight ahead. Or it's like a person with poor vision deciding they might as well go all the way and get glasses that blur their vision further. Is that contrasty look actually recorded that way onto the original camera tape?
  4. Jan, please tell me why BG's particular shooting style and image aesthetics were chosen.
  5. I've heard so many people express my exact thoughts: that show has a repulsive look to it. There seems to be no special consideration for lighting, which is made all the more obvious by the exaggerated, burned-out video-camera appearance. The jerky, always-zooming shooting style is akin to (very) amateur home videos. Ooooh, I'm impressed (NOT!). Is that supposed to make me think that they don't make mistakes? Is that supposed to mean that they're right and the VAST MAJORITY of people are wrong? Seriously, do you think that show has a good look to it? Do you really think all of that jerky camera work is necessary? Do you think the ultra-contrasty look is impressing and even attracting viewers? These guys probably thought they'd try something new and take the entertainment world by storm, and once they had that horrible look going, they couldn't stop it for consistency reasons. It's funny: that show all too often has scenes that aren't even supposed to be dramatic, and yet the camera is whipping all over the place. If that's supposed to make you feel like you're there, since when can you zoom your vision? Why would you swing your head around as if excited to look at someone, even though nothing exciting is happening? If it's supposed to look like ENG footage, I've never seen an ENG camera operator who so carelessly swings the camera, zooms far too often during the take, and even has trouble deciding what zoom position to be at WHILE shooting. "The Peabody Awards are generally regarded as the most prestigious awards honoring distinction..." Battlestar Galactica is distinct, all right! It's no wonder they got a Peabody Award.
  6. It's funny you should mention that show! It has to be one of the worst-shot television series ever! What's up with that stupid exaggerated video-camera look anyway? And what about the water-hose-style shooting; did the camera operator have a little too much caffeine?

     Producer: "We bought that new $15,000 zoom lens and I don't want it going to waste. I want to see it used more! Now, now, NOW!"
     DOP: "Yes, sir. Right away, sir. I'll use it in every shot."
     Producer: "And make sure you ZOOM it during each take! I want the audience to notice we have a zoom lens. Otherwise, we might as well have bought a prime lens!"

     As Richard mentions, they had their reasons for shooting it in HD. They probably figured there was no point in using the quality of 35mm film when their intention was to destroy the image anyway. Also, it's a series produced for the low-budget Sci-Fi Channel (which seems to have had a run of F900-shot productions lately). How can I tell on my SD TV? By looking at the skin tones and contrast range. It's either very poorly graded film material or the typical result from the F900. Battlestar Galactica...probably one of the last series Sony ever wants credited to their video cameras! -Ted Johanson
  7. I'll just assume you meant to say "contrast range" or "dynamic range" rather than "latitude". Latitude is dependent upon - but not equal to - contrast range; there is a big difference. -Ted Johanson
  8. You make a good point (quite funny, too). But it's not so much the lie (or marketing, if you prefer) that bothers me; it's that it comes on top of other things I've noticed. If this camera turns out to be as great as claimed, they won't even need any marketing; this thing would spread by word of mouth like wildfire. Adding misleading statements to their marketing doesn't help their credibility at all. Anyway, I'll take your advice and just silently wait this one out. -Ted Johanson
  9. That was to say that there are cameras out there that can make images "tack sharp" at 100%. You said, "If you're looking at a 100% crop on a computer monitor, it's never going to look razor-sharp". I simply pointed out that you are wrong: there are cameras capable of using (very nearly) the full potential of every pixel in the output image. Actually, they were posted four months ago, and there haven't been any new ones posted since then. I have to wonder what's up with that. There are cameras out there that can parallel it in terms of image fidelity. Call it marketing hype if you wish, but that doesn't change the fact that their claim (of unparalleled fidelity) is a lie. -Ted Johanson
  10. Let's just put it this way: I've never seen a digital SLR produce results that soft. Have you ever viewed a non-Bayer-captured image at 100%? Take the Sigma SD10, for example; its images are very sharp at 100%. These images quite obviously are nowhere near the claimed 4520 resolution. Naturally, they wouldn't be, due to the use of Bayer-pattern filtering; but there seems to be more to it than that. Maybe they're using an anti-alias filter which is too strong. I agree that it's not hugely smaller, but it is still significantly smaller. It is generally accepted that pixel size is a good way for the end user to compare the contrast-handling capabilities of one electronic sensor to another. The fact is that Red has pixels smaller than those of the typical digital SLR (which aren't too impressive in the contrast-range department to begin with). The Origin at least exists in a "production" version that has been tested by numerous people. Its specs are very nearly identical to Red's proposed specs, and it should therefore be capable of paralleling Red. In fact, I know of one test which shows the Origin's contrast range being much greater than Red's. Of course, the Red camera was a prototype; even so, the Red team has a long way to go if they ever intend to get up to speed with the Origin. Hmmm...I guess I already listed some of them in my first post. -Ted Johanson
  11. Nope. I'm just someone asking questions about some obvious problems with Red. -Ted Johanson
  12. Has anyone else noticed how soft the so-called 4k image samples are from the Red website? "Ultra High Definition" they are not! It is claimed that 29 sq. micron pixels are used; that's significantly smaller than the pixel size employed in all typical digital SLRs. Has anyone noticed how ergonomically incorrect the camera appears to be? "Mysterium™ puts pure digital Ultra-High Def in the palm of your hand." Ha! Why did they choose to make a 300mm prime lens their first lens? Hasn't anyone started to wonder yet why the Red website still doesn't have any real pictures of an actually existing camera? The Red website claims the camera provides "unparalleled fidelity"; I know the Dalsa Origin can easily parallel Red. What's up with all of these management mistakes (lies, poor choice of lenses, more lies, misleading statements, immature promises, etc., etc.)? -Ted Johanson
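     For scale on the pixel-size point: 29 sq. microns works out to roughly a 5.4-micron pitch. Here is a quick comparison; the DSLR pitches are my own assumed figures derived from published sensor dimensions and pixel counts of that era, not anything from Red's site.

        import math

        red_area = 29.0                                 # sq. microns, as claimed
        red_pitch = math.sqrt(red_area)                 # ~5.4 microns
        dslrs = {"Canon 20D (8.2 MP, APS-C)": 6.4,      # pitches in microns, assumed
                 "Canon 5D (12.8 MP, full frame)": 8.2}
        print(f"Red claim: {red_pitch:.1f} um pitch, {red_area:.0f} sq. um")
        for name, pitch in dslrs.items():
            print(f"{name}: {pitch:.1f} um pitch, {pitch ** 2:.0f} sq. um")
        # The claimed Red photosite is roughly 30-60% smaller in area
        # than these typical digital SLR photosites.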
  13. You must be joking! Four years?! I see consistently equal or better results from Canon digital SLRs on a regular basis. Most notably, I don't see "color sparklies" as often as I see them in the Origin samples, in spite of its obviously heavy use of low-pass filtering. Perhaps that's what's causing the problem: excessive blurring of the image is giving the edge detection a hard time.

     How good do you think this stuff (Bayer reconstruction) can get, anyway? You've got a limited amount of data to work from, and that's the way it will always be. No amount of mathematical ingenuity can truly replace lost detail. If it can, then you might as well get to work on an algorithm that can also restore lost highlight detail.

     I've seen images from many brands of cameras - mostly Canon, Kodak, Nikon, Sony, Minolta - and they're all generally the same. Some of the older cameras seem to have used a very inaccurate interpolation, in which an overly complicated method was used to try to determine the direction of an edge in order to fill in the missing "line". I tried that interpolation method once, and it produced the most idiotic-looking results I have ever seen. It really messes up random, high-frequency details.

     That's obvious; we've all seen spectral response curves. The point is that the responses are all too often not overlapped enough to provide sufficient detail to the green channel. This happens a lot more than many people would like to admit, e.g. red signs or letters against a dark background. The darker the background, the worse the effect. The end result is that the red on the object looks as if it had been sub-sampled by JPEG compression.

     Nobody ever said the technology is useless. While it may look acceptable, it is a far cry from the accuracy of film or 3CCD systems (please don't mention all of the technical problems with 3CCDs).

     True, but at least it happens in a more natural and pleasing way. There are no bluntly obvious monotone edge halos, because edges aren't deliberately copied from one channel to another in a desperate attempt to hide color sparklies and low resolution.

     Again, true. But this all happens in a truly natural process: it is the effect of optical softening and is most prominent in the red channel. The difference in resolution between the green and red layers of film is nowhere near as pronounced as the problems induced by wildly unnatural Bayer reconstruction algorithms.

     I don't think it is anywhere near as noticeable as you make it out to be. The simple fact remains that edge copying has a FAR more pronounced effect on the saturation of high-frequency details; far more so than color crosstalk and layer softness combined.

     Really? Have you never seen a Kodachrome or Velvia slide projected? And what about this image... It is a crop from an image shot on Fujifilm Superia 1600. I didn't increase the saturation; all I did was scan and color-balance it. I think this high-speed film has surprisingly good separation. Would you really want the red to be any more saturated than that? It looks quite accurate compared to the original scene.

     No, I don't know that. Your statement seems to imply that Bayer reconstruction algorithms will improve forever. How can that be possible? Does that mean that someday we'll be using 1-PIXEL cameras which use an extremely advanced alien algorithm to construct a 10-million-pixel image filled with incredible details and colors from the original scene?

     I know that. That's why I said "one of the major causes" or something to that effect. What do you expect? A 33% increase in resolution isn't going to work wonders; it will provide "little improvement in resolution", nothing more. -Ted Johanson
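     For reference, the "determine the direction of an edge" interpolation described above looks something like the following minimal sketch. This is my own illustrative reconstruction of the general technique, not any particular camera's algorithm.

        # Edge-directed green interpolation at a site where green is missing:
        # compare horizontal and vertical gradients and average along the
        # direction that changes least.
        def green_at(mosaic, y, x):
            dh = abs(mosaic[y][x - 1] - mosaic[y][x + 1])  # horizontal change
            dv = abs(mosaic[y - 1][x] - mosaic[y + 1][x])  # vertical change
            if dh <= dv:
                # little horizontal change: safe to average left/right
                return (mosaic[y][x - 1] + mosaic[y][x + 1]) / 2
            return (mosaic[y - 1][x] + mosaic[y + 1][x]) / 2

     On random, high-frequency detail the gradient test picks the wrong direction about as often as the right one, which is consistent with the messed-up results described above.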
  14. Well, if a Super35 frame is capable of resolving 4k, then vertically run 65mm negative is capable of resolving 8k, and IMAX is capable of resolving 12k. This of course doesn't take into account things like focus, depth of field, lens quality, etc. Just a bit of trivia here: if one scans IMAX at 12k, 10-bit, then IMAX film uses the equivalent of about 420 megabytes per frame, 10 gigabytes per second (at 24 fps), and about 24 terabytes for a typical-length IMAX feature (40 minutes). WOW! -Ted Johanson
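     Those figures check out, roughly. A back-of-the-envelope sketch: the frame dimensions are an assumption based on the ~1.34:1 IMAX 15/70 aperture, with 10-bit RGB packed at 3.75 bytes per pixel.

        # Data-rate check for a 12k, 10-bit RGB scan of IMAX 15/70.
        width, height = 12000, 8970                     # assumed ~1.34:1 frame
        bytes_per_frame = width * height * 3 * 10 / 8   # 10 bits per channel
        per_second = bytes_per_frame * 24               # at 24 fps
        per_feature = per_second * 40 * 60              # 40-minute feature
        print(f"{bytes_per_frame / 1e6:.0f} MB/frame")  # ~404 MB
        print(f"{per_second / 1e9:.1f} GB/s")           # ~9.7 GB/s
        print(f"{per_feature / 1e12:.1f} TB")           # ~23.3 TB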
  15. Well, obviously not the one which Dalsa is using for the Origin, judging by the examples I have seen. They appear to be using some more generic algorithm which simply fills in the voids using neighboring "pixels". Take a look at the red channel in an image from (almost) any digital camera. How do you think it could have such high resolution even though only 1/4 of the photodiodes were actually red? Did you ever notice that where there isn't an edge in the green channel, the red channel is left to fend for itself and therefore looks extra pixelated? Did you ever notice that some edges in the red and blue channels have a luminance exactly the same as that of the same edge in the green channel? Ultimately this process causes high-frequency details to become desaturated. I wouldn't call this method "simplistic"; I found it much harder to duplicate than the regular Bayer-reconstruction algorithms. That's right: I have actual experience in building these algorithms. Haven't you ever noticed all of those edge artifacts in digital camera images - do you think they're simply caused by over-sharpening or something? If you have any experience at duplicating the results of Bayer demosaicing algorithms and have a better explanation for the edge artifacts, I'd love to hear your side of the story. True; but that applies to the digital back as well. I'm sure there are lenses out there capable of pushing MF film to the limit. 30 megapixels is quite a conservative figure for medium format - especially considering the capability of the sensor itself, without the limitations imposed on it by the system. -Ted Johanson
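     As a point of reference for the "fill in the voids from neighbors" description above, here is a minimal bilinear sketch for the red channel of an RGGB mosaic. It is illustrative only (certainly not Dalsa's actual algorithm), and borders are left unfilled for brevity.

        import numpy as np

        # Bilinear fill of the red channel from an RGGB mosaic, where only
        # one site in four (even row, even column) is a real red sample.
        def interpolate_red(mosaic):
            h, w = mosaic.shape
            red = np.zeros((h, w))
            red[0::2, 0::2] = mosaic[0::2, 0::2]   # the real red samples
            # red rows: average left/right samples into the gaps between them
            red[0::2, 1:-1:2] = (red[0::2, 0:-2:2] + red[0::2, 2::2]) / 2
            # remaining rows: average the red rows above and below
            red[1:-1:2, :] = (red[0:-2:2, :] + red[2::2, :]) / 2
            return red

     Red detail finer than the two-sample spacing simply cannot be recovered this way, which is the quarter-resolution effect described above.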