Stephen Williams — Posted September 29, 2011

Ah. Well, here's the thing. Unlike a 3-chip camera, or one like the Genesis with RGB filter stripes, every pixel you see is 100% synthesized by a complex computer program. There is no single straightforward "algorithm" for doing this: there is a basic de-Bayer routine which even I could write, plus an enormous number of "fiddle-factor/massage" routines that work out the "best guess" processing for as many situations as the programming team can imagine, or get told about. (Much the same applies to MPEG and JPEG codecs.)

While you would like/expect this to be completely scalable, so you could do exactly the same processing regardless of the number of pixels windowed (as you can with JPEG/MPEG), that would enormously complicate the software. Bottom line: I suspect the de-Bayer algorithm has more-or-less "hard-wired" 5K and 4K versions, and they haven't put as much work into the 4K version, because in the real world there is not that much advantage in using it.

This would explain their occasional claims to the effect that "Build XXX" can now extract dramatically improved picture quality even from original RED One footage from years ago. It's simply a case of successive refinement. If they had unlimited programming resources (as was available for international standards like JPEG and MPEG), they probably would have had an infinitely scalable system where you could select just about any number of pixels to suit your speed/storage constraints. Since the software is not open-source, and the Epic is built to a price, I can't see that happening anytime soon.

I always thought 5K had 60% more resolution than 4K; my only surprise was how noticeable the difference was at HD.
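For anyone curious what that "basic de-Bayer routine which even I could write" actually looks like, here's a minimal sketch in Python/NumPy: plain bilinear interpolation over an assumed RGGB mosaic. The function name and layout are mine, not RED's — real converters layer edge-aware and sensor-specific heuristics on top of something like this, and that's where all the "fiddle-factor" work lives.

```python
import numpy as np

def debayer_bilinear(raw):
    """Naive bilinear de-Bayer of a 2D RGGB mosaic (assumed pattern:
        R G
        G B
    tiled across the sensor). Returns an (H, W, 3) RGB image.
    This is the textbook baseline only -- no edge-aware refinement."""
    h, w = raw.shape
    # Boolean masks marking which photosite carries which colour
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R on even rows/cols
    masks[0::2, 1::2, 1] = True   # G on red rows
    masks[1::2, 0::2, 1] = True   # G on blue rows
    masks[1::2, 1::2, 2] = True   # B on odd rows/cols

    def box3(a):
        # Sum of each pixel's 3x3 neighbourhood (zero-padded borders)
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    rgb = np.zeros((h, w, 3))
    for c in range(3):
        chan = np.where(masks[..., c], raw, 0.0)
        weight = masks[..., c].astype(float)
        # Each output pixel = mean of the known samples of that colour
        # within its 3x3 neighbourhood (bilinear interpolation)
        rgb[..., c] = box3(chan) / np.maximum(box3(weight), 1e-9)
    return rgb
```

A sanity check: feed it a flat mosaic (all photosites reading the same value) and every channel should reconstruct that same flat value. It's also easy to see from this why the full pipeline doesn't scale trivially: the interpolation weights and neighbourhood logic are tied to the fixed mosaic geometry, so resampling to a different output size means re-tuning everything downstream.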