Line skipping / Pixel binning


Emanuel A Guedes

Interesting post at dvxuser.com [ LINK ] by Richard, aka richtrav (his dvxuser username):

 

«(...)

 

OK, this is turning out much longer than I expected; hope I don't ramble. Some things I'm making up names for (like a superpixel, which is just a pixel that has a full set of RGB values). When I refer to subsampling, it just means skipping over pixels. Binning I'm taking to mean the process of sampling several pixels of the same color and generating a single value for all of them before that reading gets transferred to the data pipeline.

 

One place to see how chips use subsampling is a paper by Cypress on their full-frame CMOS sensor, at:

 

http://www.cypress.com/?docID=18524

 

Pages 5-8 show the different kinds of subsampling routines available for this chip, and I would think quite a few other chip makers use a roughly similar routine for generating live view/video. The paper is slightly techy, but the pictures are pretty self-explanatory: the sensor pretty much reads out a square of 4 pixels (1R, 1B, 2G), then skips over a predetermined number of rows before sampling another group. This seems like a pretty basic subsampling routine to me, nothing fancy at all. It easily produces a new, smaller-resolution pattern where each new pixel in the layout comes with a full set of R, G and B values (a superpixel), so no demosaicing/interpolation is needed. Note that, with the Cypress chip at least, the subsampling instructions are hardcoded onto the chip (Table 2 shows the codes required to activate a particular subsampling mode). I'm guessing this has probably been one of the most common methods of sampling sensors for video/LV in the past.
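The quad-based scheme described above can be sketched in a few lines of NumPy. This is only an illustration of the general idea, not the Cypress routine itself; the function name and the skip factor are made up for the example, and the two green sites of each quad are simply averaged:

```python
import numpy as np

def subsample_quads(bayer, skip=4):
    """Read one 2x2 RGGB quad, then skip `skip` quads in each
    direction before reading the next.  Each kept quad yields one
    'superpixel' with full R, G, B values, so no demosaicing is
    needed afterwards."""
    h, w = bayer.shape
    step = 2 * skip                          # distance between sampled quads
    rows = np.arange(0, h - 1, step)
    cols = np.arange(0, w - 1, step)
    out = np.empty((len(rows), len(cols), 3), dtype=bayer.dtype)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j, 0] = bayer[r, c]                              # R
            out[i, j, 1] = (bayer[r, c + 1] + bayer[r + 1, c]) // 2  # avg of 2 G
            out[i, j, 2] = bayer[r + 1, c + 1]                      # B
    return out

# e.g. a 24x24 mosaic with skip=4 collapses to a 3x3 superpixel image
frame = np.random.randint(0, 4096, (24, 24))
print(subsample_quads(frame).shape)   # (3, 3, 3)
```

Note how the output resolution falls off quickly with the skip factor, which is the whole point: the chip only has to move a small fraction of the photosite readings through the pipeline per frame.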

 

But that's just one way of sampling a sensor for video. Another method appears to involve scanning just one pixel (instead of a 2x2 group), then skipping to another single pixel farther along. This is apparently what Samsung does; it was deduced by the brilliant Falk Lumo when he was studying the Pentax K-7's video mode. By experimenting he concluded that the Samsung chip was reading every 6th pixel in video mode. A nice write-up of it is here:

 

http://falklumo.blogspot.com/2009/06/k-7-as-movie-camera-part-i-technical.html

 

Since neither Pentax nor Samsung had published their subsampling methods at the time, he had to guess at the pattern being used, and he came up with something like this based on the color shifts he saw in the videos:

 

http://falklumo.smugmug.com/gallery/8729393_oDRh3/1/#577245567_vwSxZ-A-LB

 

Now Falk is very smart: based on the color aliasing he saw, he concluded Pentax was probably not optimizing its sampling pattern, and he actually proposed a subsampling pattern of his own that would make more sense:

 

http://falklumo.smugmug.com/gallery/8729393_oDRh3/1/#577316092_vLWjd-A-LB

 

Patterns like these (especially Falk's proposed improved pattern) would give you a 1536x1024 image that is essentially a checkerboard: one superpixel with a full set of RGB values followed by a blank space, where the missing colors would presumably have to be interpolated from the 4 neighboring superpixels. Samsung has since advertised on their website that their 14 MP chip has a read-1-skip-2 mode and a read-1-skip-4 mode, which would be very close to what Falk hypothesized, and Pentax may well have had a read-1-skip-5 mode custom-made for their chips. Subsampling like this seems especially prone to color aliasing.
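A single-pixel "read 1, skip N" pattern of the kind Falk hypothesized can be sketched the same way. The exact site offsets Pentax/Samsung use are unknown, so the ones below are illustrative guesses: each output sample takes one R, one G and one B photosite from nearby rows/columns of an RGGB mosaic, skipping everything in between:

```python
import numpy as np

def read1_skip(bayer, n=6):
    """Sample every n-th photosite per colour plane of an RGGB mosaic.
    With an even n, each slice always lands on the same colour site."""
    r = bayer[0::n, 0::n]        # red sites:   even row, even col
    g = bayer[0::n, 1::n]        # a green site: even row, odd col
    b = bayer[1::n, 1::n]        # blue sites:  odd row, odd col
    h = min(r.shape[0], g.shape[0], b.shape[0])
    w = min(r.shape[1], g.shape[1], b.shape[1])
    return np.dstack([r[:h, :w], g[:h, :w], b[:h, :w]])

frame = np.random.randint(0, 4096, (4608, 3072))   # ~14 MP, made-up size
print(read1_skip(frame).shape)   # (768, 512, 3)
```

Because the R, G and B values of each output sample come from photosites that are physically a pixel or two apart, fine detail straddling that gap shows up as the color shifts Falk observed, which is why this style of subsampling aliases so badly.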

 

And then there's Canon, the company that has attracted the most attention with regard to its sampling routines. From what I've read on the web, it uses a different implementation than either of the above (this thread is a good place to dive in: http://forums.dpreview.com/forums/read.asp?forum=1000&message=33265391). It's a little complicated, so I'll try a simple explanation followed by a picture (thanks to DSPographer for helping me out with the pixel spacing especially). It appears Canon scans every third line on its sensor when in video mode. On these rows every pixel is read, but 3 pixels of the same color appear to get binned together. So horizontally pixels are binned in trios while vertically they're skipped over (this explains why color moire is worse on the horizontal axis than the vertical). This creates a new, smaller Bayer pattern 1/9 the size of the original sensor, so there are no "superpixels" at all with this method. Here is an illustration of what I think is happening:

 

http://i73.photobucket.com/albums/i202/richtrav/Canonbinandskip.jpg
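The bin-3-horizontally / skip-vertically scheme illustrated above can also be sketched in code. This is my reading of the thread, not anything published by Canon: every 3rd sensor row is read, and on a read row, same-colour neighbours (which sit 2 photosites apart in a Bayer row) are averaged in trios, so each pair of binned values spans 6 columns:

```python
import numpy as np

def canon_style_readout(bayer):
    """Bin 3 same-colour sites horizontally, skip 2 of every 3 rows
    vertically.  Stepping rows by 3 on a period-2 Bayer mosaic makes
    the read rows alternate RG / GB, so the output is itself a valid
    Bayer mosaic at 1/3 scale in each direction (1/9 the pixels) -
    no superpixels, demosaicing is still required."""
    h, w = bayer.shape
    rows = []
    for r in range(0, h, 3):                  # vertical: read 1 line in 3
        line = bayer[r]
        binned = []
        for c in range(0, w - 5, 6):          # two colours per 6 columns
            binned.append(line[[c, c + 2, c + 4]].mean())      # colour A trio
            binned.append(line[[c + 1, c + 3, c + 5]].mean())  # colour B trio
        rows.append(binned)
    return np.array(rows)

frame = np.random.randint(0, 4096, (18, 18)).astype(float)
print(canon_style_readout(frame).shape)   # (6, 6): 1/9 the photosites
```

The asymmetry in the code mirrors the asymmetry in the moire: horizontal binning acts as a crude low-pass filter along the row, while the vertical skip does no filtering at all.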

 

If you do the math you'll see that even the 5DII doesn't have enough pixels to form a 1920-wide pattern, so they're probably upscaling on all their cameras to get to "1080p". I don't know how they implement their 720 mode (maybe every 5th line?), but obviously fewer pixels are getting scanned. Their VGA mode seems to be based on a downscaled version of the 720 mode (except for the special crop mode on the T2i and 60D, which appears to be a full scan of the center portion of the sensor).
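The math above is quick to check, taking the 5D Mark II's published active width of 5616 photosites and assuming the 1/3 horizontal reduction described earlier:

```python
# Binning in trios reduces the row to 1/3 of its photosite count,
# which falls short of the 1920 columns needed for true 1080p.
sensor_width = 5616              # 5D Mark II active width in photosites
video_columns = sensor_width // 3
print(video_columns)             # 1872, i.e. less than 1920
```

So even Canon's highest-resolution sensor of the time would have to upscale its video slightly to reach a 1920-wide frame.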

 

This makes it difficult to compare the video "resolutions" of different cameras, because they're all using different shortcuts to get there. The company that's probably doing it best is Panasonic. Lord only knows how Panasonic is getting its video, but it's a pretty good method: while the black-and-white resolution isn't much different from other cameras (which means still not close to true 1080), the GH1 can pick up fine details and textures missing from Canon and Pentax cameras.

 

Sony seems to be picking up more detail as well with their new sensors. If you download the video here:

you'll see at 1:02 a comparison of the VG10 with the Canon 550. The 550 certainly renders curvy lines and fonts more smoothly, but the VG10 picks up detail completely missed by the Canon. UPDATE: no need to download the whole video; you can see still grabs here for the Canon: http://www.point8cam.com/blog/wp-content/uploads/02.jpg and here for the Sony: http://www.point8cam.com/blog/wp-content/uploads/01.jpg. I don't know how Sony implements its video either; you can get papers for some smaller Sony chips on their website that may offer some clues, like here: http://www.sony.net/Products/SC-HP/cx_news/vol55/pdf/imx032cqr.pdf. I would also guess that Nikon may have relied on Sony to help develop the 1080 mode for their new cameras.

 

Well, that's it; I'm tired - time for bed - hope this didn't bore you too much. I have tried a few very informal experiments with some of these cameras; they're on my photobucket account.

 

Richard»
