Everything posted by Ted Johanson

  1. Hi Daniel, I have no word on the HDV2 format. However, I must congratulate you on helping to expose one of Sony's many lies! Big time congrats, man!!! I wish more people would stand up for the truth like you did, even when it means going up against the almighty Sony. -Ted Johanson
  2. I found this press release a few days ago... http://www.fujifilm.com/JSP/fuji/epartners...BID=NEWS_841903 It's good to see FujiFilm continuing to give Kodak plenty of competition! It looks like they are replacing the entire lineup with new emulsions of the same speed and color balance. First, there was F-500T, F-400T, F-250T and F-250D. Now we have Eterna 500T, 400T, 250T and 250D. Obviously, this can only mean Eterna 125T and 64D are on their way. -Ted Johanson
  3. Hi, John. It's good to hear from you! Wow...as small as under one micrometer! That's much smaller than I remembered it being. Are there any recent efforts being made to reduce the "clumping" of film grains? I read quite a while ago that Kodak scientists believe film's efficiency can be increased by 2 to 7 stops. If my memory serves me correctly, this was announced after two-electron sensitization was introduced into motion picture stocks. Any updates on the 2 to 7 stop increase? Does anybody have any more info on that "nanotechnology"-powered sensor? Or does anyone agree that it must be a hoax? -Ted Johanson
  4. Jim or John, I recently saw a news headline stating that nanotechnology has been used to create a sensor with an 11 stop increase in sensitivity. This is in spite of the fact that each photodiode on the chip has less than half the area of the photodiodes on conventional chips. My question for either of you is: how can nanotechnology be used to increase sensitivity? I actually believe this all to be some sort of lie. That belief is based on some discrepancies that I have noticed in the news articles. For example, they say nanotechnology is used, but the chips "can be mass-produced using standard CMOS process without additional investment for facilities". Huh? Am I missing something here? Since when can nanotechnology be mass-produced with a "standard CMOS process"? -Ted Johanson
  5. Here is the info on that picture according to the EXIF data:
Lens: EF 17-35mm f/2.8L USM
Focal length: 27mm
EI: ISO 200
Aperture: f/5.6
Exposure time: 1/100 second
Also note that the crop was taken from very near the center of the frame. As for your technical specification comparisons between negative film stocks and the Dalsa sensor, I have some problems to point out. First of all, you say that Kodak claims film has a maximum of 1800 line pairs per millimeter. Don't we all wish that could be possible?!!! They actually say it has 1800 lines per picture height, which equates to 3200 lines per picture width. Now, are you sure the Dalsa white papers are dealing with the same resolution measurements? Are they dealing with line pairs per millimeter, lines per millimeter, lines per picture width, lines per picture height, line pairs per picture width, or line pairs per picture height? Kodak's example was also shot on the old Vision 500T stock, not exactly the ideal stock for testing ultimate resolution. Dalsa admits in their own sensor white paper that "Some film negatives have been tested to exceed 4000 lines of horizontal resolving power". That can be found in the first sentence on page 3. Then they go on to blab their brains out about internegatives, prints, etc. and generational loss, as if an image captured on film MUST remain on film! Let's also not forget that Dalsa's sensor is larger than a Super35 frame, so not all lenses will fill it, and therefore not all of the frame can be used. Has anyone ever seen any proof that the Origin has 12 stops of exposure latitude? -Ted
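P.S. For anyone who wants to untangle those units, here is a rough Python sketch of the conversions. The 16:9 aspect ratio and the frame height are my own assumptions for illustration; they are not quoted from Kodak or Dalsa.

    # Rough resolution unit conversions (illustrative assumptions only).
    FRAME_H_MM = 14.0  # assumed image height on the negative, in mm

    def lines_per_width(lines_per_height, aspect=16 / 9):
        # "Lines per picture height" scales by the aspect ratio.
        return lines_per_height * aspect

    def line_pairs_per_mm(lines_per_height, frame_h_mm=FRAME_H_MM):
        # Two lines make one line pair; divide by the frame height in mm.
        return lines_per_height / 2 / frame_h_mm

    print(lines_per_width(1800))    # -> 3200.0 lines per picture width
    print(line_pairs_per_mm(1800))  # -> ~64 lp/mm, nowhere near 1800 lp/mm

As you can see, 1800 lines per picture height is a perfectly believable figure, while 1800 line pairs per millimeter is fantasy.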
  6. Jason, you say you see no artifacts? Are you looking for edge artifacts on full-sized samples? I don't mean edge artifacts as in color artifacts. I am talking about unusual edges in the red or blue component of the image that are not supposed to be there. Some people would write this off as sharpening artifacts, but that's not the case.
My so-called "crappy" resolution "numbers" were apparently too hard for you to understand. I never said anything about exclusively using "a very dumb 3x3 interpolation". I said that the full process of edge copying is used to hide edge color artifacts and cause an apparent increase in resolution in the red and blue components. I am talking about the best De-Bayering algorithms in the world!
I guess I don't quite understand your point about a "4k luma sensor". Sure, if you wanted a B&W image off the sensor, then you would get true 4k resolution. But the image would have an unusual pattern in it, because the red and blue pixels can have more or less luminosity than the green pixels.
The absolutely indisputable fact of Bayer pattern sensors (when dealing with full color) is that their pixel count is, at best, only 1/2 the advertised count. Sometimes, it can be as bad as 1/4 the pixel count. And at any time, it can vary wildly, within the same image, between 1/2 and 1/4 of the pixel count.
Every single time I bring up this point, someone says "So what about the resolution, it's how it looks in the end that matters." While it is true that the final image's appearance matters most, many people think it looks "OK" simply because they don't see any serious pixelation. It is because of interpolation that pixelation isn't much of a problem. But not even the most advanced interpolation algorithms from 50 years into the future will truly add real-world detail. When speaking of resolution, one must remember that resolution refers to the ability to actually resolve high frequency objects; it does NOT refer to the lack of obvious pixelation!
Here is a crop from an image shot with the glorious, better-than-thou-art Canon EOS 20D digital still camera: This is a good place to find some obvious De-Bayering artifacts. Just one look at this image shows blatantly obvious inaccuracies. If you compare the different components of this image, you'll notice the following...
1. Because the green component has no edges inside of the flowers, the red component has completely soft-looking edges inside of the flowers.
2. Because the green component does have an edge on the outside of the flowers, the red component flowers have a sharp outside edge.
3. The outside edges of the flowers in the red component "mysteriously" have the same luminance as the green component's outside flower edge.
Now if you just stop and think for a while about the problems that this can ultimately cause, you'll realize what a problem it could be. Just because it looks okay, that doesn't mean it truly is okay. If you were to compare the same scene captured as a Bayer sampled image and as a truly sampled image, I don't think you'd accept the obviously flawed Bayer sampled image! Nobody ever said in this thread that Dalsa's camera isn't the best in some way. My whole point is to say that Dalsa is misleading people. -Ted
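P.S. If anyone wants to hunt for these copied edges in their own shots, here is a minimal sketch of the kind of channel comparison I'm describing. It assumes Python with NumPy and Pillow installed, and "flowers.jpg" is just a placeholder file name.

    # Compare edge strength in the red channel where the green channel
    # does and does not have edges. Copied edges show up as red edges
    # that track green's almost perfectly.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("flowers.jpg").convert("RGB"), dtype=float)
    r, g = img[..., 0], img[..., 1]

    def edge_strength(channel):
        # Crude gradient magnitude: horizontal plus vertical differences.
        gx = np.abs(np.diff(channel, axis=1))[:-1, :]
        gy = np.abs(np.diff(channel, axis=0))[:, :-1]
        return gx + gy

    er, eg = edge_strength(r), edge_strength(g)
    mask = eg > eg.mean() + 2 * eg.std()  # where green has strong edges
    print("red edges where green has edges:   ", er[mask].mean())
    print("red edges where green has no edges:", er[~mask].mean())

A large gap between those two numbers on an image like the flower crop is exactly the kind of inconsistency I'm talking about.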
  7. Hi Everybody! Jim, I must confess that I share your apparent frustration with Dalsa. Most notably, I am frustrated by their claims of having created a 4k camera. The fact is that their "breakthrough" 4k camera truly has no more than a MAXIMUM of 4 megapixels of resolving power. This is due to the inherent drawbacks of using a Bayer pattern CFA based sensor. Many people reading this may think they already know about these problems, but they most likely attribute them simply to sharpening, chroma subsampling, or compression. This is very often not the case. If anyone is interested in knowing more about what I am talking about, please read on. Some of the problems induced by Bayer pattern sampling are explained below. Those of you with a passion for using three-chip cameras, read on; you'll get a kick out of this.
To understand this, let's first consider the true meaning of the word "pixel". The word "pixel" comes from the two words "picture element". A pixel is the building "block" of a digital picture. As we all know by now, each pixel is made up of three color components: R, G, and B. The big confusion in megapixels is created when manufacturers call a color-blind photodiode a pixel. A photodiode is NOT an entire pixel; its true effect is that of a single color component of a pixel. A photodiode cannot detect color, only brightness. Therefore, a color filtering dye must be placed over the photodiode so that it will only see ONE of the primary colors. Three of these photodiodes are necessary to produce a full color pixel. So, in reality, the so-called 8 megapixel Dalsa camera is more correctly called an 8 megaphotodiode camera.
Each photodiode is, however, treated as though it were a full color pixel by filling in the two missing color component values. This is done by examining the nearest photodiodes that are of the same color as the missing components. For example, to find the missing red and blue values for a green photodiode, look at the nearest red and blue photodiodes and use their values. This is known as interpolation, and it is basically an attempt to determine the missing values that were most likely to have been in a certain location.
Virtually every single-chip digital camera (including Dalsa's) uses a Bayer pattern CFA. In every Bayer pattern CFA, there are just as many green "pixels" as there are red and blue "pixels" combined! That means that an 8 megapixel camera, for example, truly has only 4 million green "pixels", 2 million red "pixels" and 2 million blue "pixels". Bayer pattern CFAs were designed to contain more green "pixels" to take advantage of the fact that the human eye (and brain) perceives most of an image's detail from the green wavelengths of the spectrum. So, in simple terms, the Bayer pattern CFA provides more green "pixels" on the sensor to give the image its detail, and the red and blue "pixels" are there to provide color to the image. (There's a quick sketch of the photodiode arithmetic below.)
Now, because the green component of an 8 mega"pixel" image was captured with only 4 million photodiodes, and because green provides the detail for the image, virtually all single-chip digital cameras have only half the advertised pixel count; and that is true only under ideal conditions. Now you may be asking, "If the red and blue components only have 1/2 the pixel count of the green component, then why do they appear to be the same resolution as the green component?". The simple answer to this question is "edge detection".
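To put numbers on the mosaic itself, here is a quick Python sketch. The grid dimensions are my own, chosen only to land near 8 million photodiode sites.

    # Count the photodiodes of each color in a standard Bayer tiling.
    import numpy as np

    H, W = 2448, 3264  # assumed sensor grid, ~8 million photodiodes
    pattern = np.empty((H, W), dtype="<U1")
    pattern[0::2, 0::2] = "G"  # Bayer tiling: G R / B G
    pattern[0::2, 1::2] = "R"
    pattern[1::2, 0::2] = "B"
    pattern[1::2, 1::2] = "G"

    total = H * W
    for color in "RGB":
        n = int(np.count_nonzero(pattern == color))
        print(color, n, f"({n / total:.0%})")
    # -> R: 25%, G: 50%, B: 25%. Half the sites are green; red and blue
    #    are each sampled at only a quarter of the advertised "pixels".

Now, back to edge detection.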
Because the green component has the most detail, its edges are carefully copied to the red and blue components. This process not only has the effect of increased resolution; it also helps to hide the artifacts caused by sampling the three color components for a single pixel from differing locations on the xy axis of the sensor.
If you don't believe this to be true, you can prove it to yourself. Simply use your Bayer CFA based digital camera to take a picture of a completely red colored object on a black background. Because the green channel will contain no edge detail, no edges can be copied from it. That means that the red channel must be left untouched, and it therefore shows its original edge detail. Hence, you will notice an increased softness and pixelation. This is NOT due to JPEG subsampling of the red component. This can be proven by looking at the red component in an area where the green component contained an edge: it will have a sharper edge than the area where green had no edge. There are many irreparable artifacts introduced by copying edges from one component to another, but they are too complicated to go into at this time.
This ultimately means that with ANY Bayer CFA based digital camera, if you take a picture of a completely red or blue object on a black background, the actual pixel count will be 1/4 of the advertised pixel count. This also means that if you were to take a picture of a completely gray scene, the green component would be able to provide almost completely accurate edge data to the red and blue components with very few adverse effects, thus providing a nearly completely accurate 1/2 of the advertised pixel count. A gray scene is an example of the most ideal condition for a Bayer CFA. A completely red or blue object on a black background is an example of the worst possible condition. So, when using a Bayer CFA based digital camera, the best possible pixel count that can be expected is 1/2 of the advertised pixel count, and the worst is 1/4.
Now you are probably asking, "So, if an 8 mega'pixel' image really only has a maximum pixel count of 4 megapixels, then why would an 8 megapixel image be needed to maintain ALL of the detail?". This is because the green "pixels" on a Bayer pattern CFA based sensor are basically arranged in a diagonal fashion. Every column and every row contains a green photodiode. This is not the case with red and blue photodiodes. If you remove any column or row to reduce the size of the image, you will be removing green "pixels". Green "pixels" provide the image with its detail, so removing them should be avoided. I hope this can be understood easily enough. It's quite a complicated subject, so I apologize if I haven't made everything clear.
So, Dalsa, I suppose you just forgot to mention anywhere the fact that your camera has unavoidable artifacts and lower resolution than advertised due to the use of a Bayer based CFA? -Ted
Oh, I must also say that I am annoyed by Dalsa's claims of "12 stops of latitude". Has anyone else noticed the many discrepancies on Dalsa's web site? I will cite some examples from here http://www.dalsa.com/dc/origin/dc_sensor.asp and here http://www.dalsa.com/dc/origin/origin.asp
First, they say "This gives us more dynamic range, which means much wider exposure latitude than any other cinematography sensor, CCD or CMOS." Doesn't that sound as if they are implying that their sensor now has a larger dynamic range than negative film stocks?
I mean, after all, isn't film considered a "cinematography sensor"? Later, on the same page, they say "Origin's exposure latitude is comparable to the best film stocks". So now it's only comparable and not better?
Second, they say that their camera "...offers at least 12 stops of linear response...". On another page, they say their sensor offers "...more than 12 stops of exposure latitude...". You'll notice great ignorance at work here. In one place, they use the phrase "linear response", which is basically the same thing as "dynamic range". In another place, they use the phrase "exposure latitude". Now somebody please correct me if my many years of experience have provided me with incorrect knowledge: dynamic range and exposure latitude are two different things! Dynamic range refers to the total number of stops between the brightest white and darkest black a sensor can sense in a single exposure. Exposure latitude refers to the number of stops left over after subtracting the displayed dynamic range from the total dynamic range. A good example of this: if your displayed dynamic range is 12 stops and you have a total dynamic range of 12 stops, then you have no latitude. But if your displayed dynamic range is only 5 stops, then you can move that range around inside of the total dynamic range by a total of 7 stops. -Ted
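P.S. To spell out the difference between the two terms, using my definitions above, in the plainest form I can manage:

    # Latitude is what's left of the total dynamic range after the
    # displayed range is subtracted (using my definitions above).
    def latitude(total_stops, displayed_stops):
        return max(total_stops - displayed_stops, 0)

    print(latitude(12, 12))  # -> 0: no room to move the exposure window
    print(latitude(12, 5))   # -> 7 stops to slide a 5-stop window around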
  8. I just had to REALLY laugh at that!!! In order to get a print that large from a 6 megapixel image, you'd need to print it at 85 DPI; a far cry from photographic quality. And that's not even taking into account the poor sampling accuracy (Bayer pattern and low-pass filter induced) of your precious 10D. I must say I have NEVER heard a claim this PATHETIC from a digital SLR owner! Yes, your image looks "crystal clear" if you are standing, what, 20 feet away or so. You say you save money? Ha! Not with the way people like you think they always have to have the latest and "greatest"! When are you going to upgrade to the 20D anyway? Come on, get with it; the 10D is no longer the best digital SLR in its price class. You've got to hurry up and buy a whole new camera just to upgrade the image quality. Hurry up, you're losing business because you don't have the coolest digital SLR! -Ted Johanson
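P.S. Here is the arithmetic, for anyone who wants to check it. Since the claim I was laughing at didn't specify a print size, I'm assuming roughly 36 x 24 inches; a 6 megapixel frame is about 3072 x 2048 pixels.

    # DPI of a 6 megapixel image spread across an assumed 36x24 inch print.
    px_w, px_h = 3072, 2048
    print_w_in, print_h_in = 36, 24

    dpi = min(px_w / print_w_in, px_h / print_h_in)
    print(round(dpi))  # -> ~85 DPI, versus ~300 DPI for photo quality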
  9. Thanks, tenobell, for setting me straight on that. So my bitrate stands for the Genesis, not the F900. Are you sure the F900 doesn't do 10 bit output though? -Ted Johanson
  10. Yeah, I know, the F900 rocked the industry too. Apparently, you didn't read my first post. The Genesis is incredibly inefficient when it comes to producing a measly 2 megapixel image! It requires 12 million photodiodes on a chip the size of a 35mm MP frame just to produce a 2 megapixel image! Look at the F900: its three 2/3 inch chips are able to produce an image that truly has 2 million pixels of resolution. The Genesis is pathetic for its chip size! Once again, digital cameras are trying to resort to a single-chip design to cut costs. That causes serious setbacks in resolution or contrast range capability.
IMAX film could be considered to be worth up to a 24k scan; 12k is definitely necessary. I've heard of it, and so what? You can make 24" x 50" prints from microfilm. Of course, they won't look perfect at that size, but neither will a 22 "megapixel" Bayer pattern sampled image. What does this camera cost? $15,000? $20,000? A 22 megapixel Bayer pattern sampled image is pretty pathetic for a chip as large as that camera has! The actual resolving power of film is far greater than the resolving power of that camera. Did I mention yet that larger sensors become exponentially more expensive and can have increased noise?
Oh, now it's 24 megapixel?! I thought it was to be 12 megapixel. Anyone else here agree? Here's what I've heard about it... It supposedly uses a special type of memory which is write-once and costs around $9,000 per minute. It supposedly uses a three-chip design, each chip being the size of a 35mm MP frame. The alignment on that must be causing some massive headaches. I also heard that it is currently the size of a small car. If this camera isn't some gigantically stupid rumor started by some digital propagandist such as yourself, it's going to be one hell of an expensive camera!!! -Ted Johanson
  11. Do you have any proof that it isn't?!!! Have you ever bothered to shoot film with real lenses and then scan the negative itself with a dedicated film scanner? Obviously not! Where in the <bleep> do some people get this extremely STUPID idea that negative film is so limited compared to digital cameras? I once heard The History Channel claim that you need about 6 megapixels to match a 135 format negative. What a pathetic misconception!!! Especially considering that a digital camera uses Bayer pattern sampling! If you'd actually bother to learn about the technology that you obviously love so much, you'd know how Bayer pattern sampling works. You'd then realize why a 6 megapixel digital camera doesn't produce an image with true 6 megapixel resolution! Anybody can see the results of it too. Take a look at your precious digital images at 100%. Are you going to try to tell me that looks true and natural?! Even if 135 isn't worth true 40 megapixels, it's certainly worth 40 megapixels by digital camera standards. Would anybody like to see an example? Do you actually dare to claim you know more about "digital capture" than I do?! I use a digital camera EVERY day. I study this stuff in great depth every day. Actually, the sensor plays a significant role in this issue too. Even DSLRs have very significant issues with some lenses. You obviously have no clue as to how important exposure latitude can be. And I suppose you would shoot it with your digital camera instead? -Ted Johanson
  12. Don't forget the resolution loss, the noise that can easily become worse than the grain of film at the same speed, purple fringing, horrible shutter lag, horrible shot-to-shot performance, disgusting power consumption, non-interchangeable lenses, pathetic EVFs, and initial expenses. I believe that just about covers all of the disadvantages that your particular type of digital camera has. If you're really willing to have all those problems in order to have "convenience", that's your choice to make.
John, you do realize that you are being very conservative with that statement, don't you? Especially if you are talking in terms of digital camera "megapixels". Take the highest quality lenses, such as the Canon EF 135mm f/2, focus them perfectly on a high quality 35mm negative, and the negative will easily hold more than 12 megapixels. Scan it with an excellent dedicated film scanner such as the Minolta DiMAGE Scan Elite 5400 II and you'll have a beautiful 40 megapixel image.
Let's do the math... The Sony F900 outputs a 10 bit 1920x1080 image with 2:1 compression, right? That equates to 89 megabytes per second when running at 24 FPS. Super35, 4 perf negative film is worth at least a 4k scan. That's a 4096x3072 image. You'll need at least a 12 bit image to maintain all of the gradients throughout the entire contrast range. The image should also be uncompressed. That equates to at least 1,300 megabytes per second at 24 FPS. Overcrank the camera to 96 FPS and the bandwidth required would be over 5,200 megabytes per second. As long as we're on this subject, let's consider IMAX film. It is worth at least a 12k scan. That would produce an image with dimensions of 12288x10500. Again, you'll need a 12 bit uncompressed image. That equates to a bandwidth requirement of well over 13,000 megabytes per second at 24 FPS. Yep, things are looking good for digital capture. They'll be able to pipe all that data through in the very near future :lol:
So many people seem to have the following misconception of digital camera technology: "Before, I could only buy a two megapixel digital camera. Now I can get 8 megapixel cameras! Wow! How do they do it? It must be new technology." Wow! What genius thought of making the chip larger to add more pixels?! Since when is adding adjacent pixels a new technology? Yes, they have occasionally made small increases in pixel density on the chips. And yes, there's more noise and a tighter contrast range because of it! Some people seem to be under the impression that it is possible to infinitely increase the resolution of electronic sensors. What's the purpose of making a sensor with photodiodes smaller than photons?! -Ted Johanson
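P.S. Here is the bandwidth math in sketch form, so nobody has to take my word for it. I'm assuming uncompressed RGB (three samples per pixel) and decimal megabytes; the exact totals shift a little with those assumptions.

    # Uncompressed RGB bandwidth: width x height x 3 channels x bit depth.
    def mb_per_second(width, height, bits_per_channel, fps, compression=1):
        bits = width * height * 3 * bits_per_channel * fps / compression
        return bits / 8 / 1e6  # bits -> decimal megabytes

    print(mb_per_second(4096, 3072, 12, 24))    # Super35 4k scan: ~1,359 MB/s
    print(mb_per_second(4096, 3072, 12, 96))    # overcranked 96 FPS: ~5,436 MB/s
    print(mb_per_second(12288, 10500, 12, 24))  # IMAX 12k scan: ~13,935 MB/s

If anything, the round numbers I used above were conservative.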
  13. You mean Sony isn't going to try to sell their version of the camera, just as they did with the F900? What a shock! Sony couldn't stick with something?! Oh my, what is this world coming to?! That would be a relief, if it's true. This seems all too often to be the case. You call up a company to ask them some technical questions and they either can't give you an answer or they'll give you an answer that they think sounds good for their company. So, when you call up a company to ask them if the resolution is scalable, don't expect to get a straight answer. -Ted Johanson
  14. Sony and Panavision don't want to tell anyone, because if they did, their camera wouldn't look so technologically marvelous! And, of course, a technologically marvelous camera is good for business. Also, if you told a potential buyer that your camera had a 12 megapixel chip and that it can only output a 2 megapixel image (AND you didn't give them a reason for the difference), what do you think the buyer is most likely going to assume is the reason for the difference? Obviously their first assumption would be "it must be scalable for when recording technology is improved". That would make their camera more attractive to the buyer. The buyer would then think "Wow! It's got this improved contrast range AND it will be able to output 12 megapixel images someday!" Those are pretty good reasons for them to not reveal the specifics of their camera. I'm sure some of you will disagree with me on this, but I don't think Sony is the best business partner for Panavision. We all know how misleading and erratic Sony can be (if anybody needs examples, let me know). Here, I'll give this example right now...oh wait, I can't. :unsure: Okay, does anyone else here have enough memory capacity to remember all of the different media formats Sony has created or helped to create in the last 30 years? :D -Ted Johanson
  15. That must be a limitation of the recorder. If they can make the recorder "faster", then 50 FPS would become available. Perhaps that is one of Panavision's ideas of the camera's "resolution" being scalable. -Ted Johanson
  16. Well, I suppose anyone who is interested in how this camera works (or may work). Everyone seems to want to know. Some people seem to think the extended contrast range is some sort of great technological breakthrough; but if I'm correct, it's nothing new. It could be quite simple. That's my point. On the other hand, this camera does have significant advances in the signal processing area. That is why it can be "over-cranked" to 50 FPS, for example. I'm sure everyone already knows that though. With that said, yes, it is a great tool; certainly better than its predecessors. -Ted Johanson
  17. PS. I already knew about the cooperation between Panavision and Sony. It's just like the cooperation they had when creating the F900, and we all know how misleading they were with that camera! -Ted Johanson
  18. Yes, I know. I believe everyone here would know that if they read my first post in its entirety. Scalable...yes. But scalable in exactly which ways? Did they specifically say the resolution was scalable, or are they just being misleading by not telling people that only things such as bit depth and compression quality are scalable?
I know all about FujiFilm's SuperCCD. It works using a different method than the one I explained. Their photodiodes for capturing highlight details have a small area, while the photodiodes for capturing shadow details have a large area. The physical size of each photodiode has a direct impact on its sensitivity; thus, the actual size of the photodiode is what causes the under/over exposure. I made no mention of different photodiode sizes in my speculation about the Genesis. I believe that it works by controlling the gain of individual lines.
Also, nearly every digital still camera manufacturer uses a Bayer pattern CFA. If my memory serves me correctly, Kodak, Canon, Nikon, Minolta, Sony, and HP all exclusively use Bayer CFAs for their digital still cameras. Why would they use that type of pattern if it wasn't better, especially when the alternative requires an extra photodiode per pixel? Let's assume the resolution truly is scalable. Why did Sony and Panavision choose to use an RGB CFA if it isn't as accurate as a Bayer CFA? Clearly resolution wasn't the biggest concern. If Sony and Panavision were trying to achieve the most accurate sampling possible, the RGB CFA method was NOT the best way to go. Obviously, it was chosen for another reason. It's more accurate than a Bayer CFA would be, but ONLY IF you are using my speculated multi-sampling method. It would also certainly allow them to work around FujiFilm's patent. Go ahead and call Panavision again...ask them specifically if the resolution is meant to be scalable, and make sure you are talking to someone who truly knows this camera. -Ted Johanson
  19. You're talking about a scalable camera; that's something I'm sure everyone would like to have. Nevertheless, I don't see any mention from Panavision about the Genesis being scalable. Has anyone read otherwise? Also, isn't it funny they chose to use an RGB color filter array rather than the very widely accepted Bayer pattern CFA? The RGB color filter array just happens to work better for the method I described. Exactly! There's nothing new about this. Lots of people have been using the same general method for a long time. And let's not forget, I did conclude my first post by saying... In other words, I'm not giving a 100% guarantee that the camera truly does work the way I described. -Ted Johanson
  20. I have no concrete proof that it works this way...but I do have the knowledge obtained from many years of studying CCD and other digital acquisition technology in great detail. Do the simple math yourself. Isn't it a great coincidence that everything just happens to fit perfectly? Why do you think they would build a sensor with 12 million photodiodes on it when they knew the system as a whole would only output 2 million pixels?! Anyone care to attempt to give another explanation for this overkill? Perhaps you, Mr. Parks? -Ted Johanson
  21. Greetings everyone! This is my first post, but I have been reading the forum for around a year now. Anyway, back to the topic... Many people seem to think that the Panavision Genesis has made giant technological strides in solving the problem of limited contrast range for digital acquisition systems. While the results themselves have been improved, the actual sensor technology has not improved (at least nowhere near as much as they'd like you to believe). "Well then, how were the results improved?" To answer that question, let's first consider the specs of the Genesis. The CCD is claimed to be of the RGB CFA type and has 12 million photodiodes (sensors). The output "video" resolution is 2 million pixels (6 million photodiodes used, considering the RGB CFA design). As you've noticed, when using an RGB CFA type sensor, you only need 6 million photodiodes to produce a 2 megapixel color image.
"So, what about the other 6 million photodiodes left on the Genesis' sensor?" Here's where the "magic" comes in. The first 6 million photodiodes are never correctly exposed; they are actually always underexposed to maintain highlight detail. The other 6 million photodiodes are always overexposed to maintain shadow detail. Both exposures are made at exactly the same time. Thus, two separate records of the exact same image are produced: one underexposed, the other overexposed. These two records are then combined digitally, in-camera, to produce a single image. So there's your answer to the "excess resolution" of this camera. All of those photodiodes are actually needed to produce the two-megapixel image...take any of them away, and you WILL degrade the quality of the output "video" in either contrast range or resolution. Try to increase the output "video" resolution, and you will lose contrast range. All in all, it is a very inefficient design, but an effective one.
Now, in reality, it doesn't work quite as simply as I have explained it above. I used phrases such as "first 6 million photodiodes" in the hopes of providing an easier to understand explanation. That phrasing seems to imply that the top half of the sensor underexposes and the bottom half overexposes. Of course it wouldn't work that way. It actually works on a line-based system: the sensor underexposes every first line of photodiodes and overexposes every second line of photodiodes.
For those of you who don't believe that this works, try it for yourself (there's a sketch below, too). You'll need to take two pictures: one underexposed and one overexposed. Bring both images into an image editing program such as Photoshop. Layer one image on top of the other. Then reduce the layer opacity of the top image to 50%. And there you have it, an image with increased contrast range.
No, I didn't hack into PV's computers or anything like that. I just figured this one out all by myself. It's the most reasonable explanation for the resolution differences. I hope you enjoyed reading my post! -Ted Johanson
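P.S. For anyone who would rather see the blend in code than in Photoshop, here is a minimal sketch. It assumes Python with NumPy and Pillow installed; the file names are placeholders for your own test shots.

    # Average an underexposed and an overexposed record of the same
    # scene -- the code equivalent of the 50% layer-opacity trick.
    import numpy as np
    from PIL import Image

    under = np.asarray(Image.open("under.png").convert("RGB"), dtype=np.float32)
    over = np.asarray(Image.open("over.png").convert("RGB"), dtype=np.float32)

    blend = (under + over) / 2  # highlights from one, shadows from the other
    Image.fromarray(blend.clip(0, 255).astype(np.uint8)).save("blend.png")

    # And the photodiode arithmetic that makes everything fit:
    # 12 million sites / 3 per RGB pixel / 2 exposure records
    print(12_000_000 / 3 / 2)  # -> 2,000,000 output pixels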