
Andrew Ray

Everything posted by Andrew Ray

  1. Film after the DI process looks to me like output from a good-quality 2K camera. I guess some people still listen to vinyl records, some to CDs, and some take 32-bit recorded music, put it through a digital profile of vinyl's sound characteristics, and they have the vinyl sound. I think 4K post-production tools will be able to simulate the color, grain and softness of any combination of film stock and camera hardware, including the specific look and color of the lenses. Andrew
  2. Gentlemen, I am new here and I can't shake the impression of a very similar social phenomenon happening at McGill University (and not only there) back in the PC/mainframe computing era. Half of the university's higher-ranking academia was buying IBM mainframes at $500,000 per box (hard drive, CPU, I/O unit), and the other half was getting excited about the PCs just being introduced. And we all know what happened next. Gentlemen, relax. I can feel the high tension here on this forum; did I miss anything that happened here in the past 12 months?
  3. Someone from the RED team mentioned on the reduser forum that Mysterium is a Bayer CMOS sensor. http://en.wikipedia.org/wiki/Bayer_pattern Unless it is a modified version of it, all is possible. Actually, if we use picture movement just to enhance the resolution of the picture, you don't want to use more than 4 frames in the calculation, I think. Anything more will introduce more blur. I imagine there will be a kind of 4-frame sliding window running throughout the sequence.
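The four-frame window described above can be sketched as plain window mechanics. This is only the windowing step, not the actual multi-frame super-resolution math, and `sliding_windows` is a name I made up for illustration:

```python
from collections import deque

def sliding_windows(frames, size=4):
    """Yield each consecutive window of `size` frames from a sequence."""
    window = deque(maxlen=size)   # oldest frame falls out automatically
    for frame in frames:
        window.append(frame)
        if len(window) == size:
            yield tuple(window)

# Frame indices stand in for real frames here:
print(list(sliding_windows([1, 2, 3, 4, 5, 6])))
# [(1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6)]
```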
  4. Matthew, so let's start the new thread about 'assembly'. If I knew how to spell, I would not have asked the question in the first place.
  5. I have to tell you that you caught me by surprise here. Why do we always have so much trouble with the lenses? :-) Damn glass. Sorry, gla.
  6. Actually David, 1920 x 1080 = 2,073,600. A 12MP Bayer sensor has 3MP of red, 3MP of blue and 6MP of green. Yes, I make the same mistake again and again: I sometimes think that 4K is 4MP. Funny how brains work. 4K is 4096 x 2304 in a 16:9 screen aspect ratio, so 9.4MP. Gavin, temporal processing at 120fps actually will give you a 1:1 deBayer. It is possible to do it in post on your computer if you have RAW, but I have yet to see a hardware DSP optimized for such an algorithm that could be installed in the camera. As Jan said, 2007 is too early for this. At 60fps you may lose sharpness trying it, and at 30fps it will certainly not work.
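The arithmetic in this post is easy to verify; a quick sketch with the numbers taken directly from the post:

```python
# Full HD pixel count
hd_pixels = 1920 * 1080
print(hd_pixels)                  # 2073600, about 2 MP

# A 12 MP Bayer mosaic splits its photosites 1R : 2G : 1B
red, green, blue = 12_000_000 // 4, 12_000_000 // 2, 12_000_000 // 4
print(red, green, blue)           # 3 MP red, 6 MP green, 3 MP blue

# 4K at 16:9: 4096 wide, 4096 * 9 / 16 = 2304 high
four_k_pixels = 4096 * 2304
print(four_k_pixels / 1e6)        # about 9.4 MP
```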
  7. You guys are all wrong. As in the example 'sunglasses': 'this sunglasses', not 'these sunglasses'. 'Take this sunglasses and try it out, please.' 'I love this sunglasses, they are really cool.' We should write 'my lenses', not 'my lens', unless you are assembling the lenses and you want to say to your assistant who just took one lens off the table where you were fixing your lenses: 'give me my lens back, I have to install it in the lenses now'. However, we usually refer to the lenses as a whole assembly, so lenses, not lens. Also 'this lenses', not 'these lenses'. An exception will be when you have many single individual lenses on the table and you ask your assistant: 'Could you gather these lenses and put them in the box there on the shelf so we can assemble it together tomorrow?' Andrew
  8. Jan, I like your metaphor about inverted MPEG. When you look at three-dimensional space projected onto a two-dimensional surface in motion, it is actually reverse compression of the infinite elements of the space. I agree with you 100%: spatial movement in the picture, as opposed to stills, is actually compressed information of the still picture's infinite elements. This is a good simplified way of presenting it. Woowww! I never thought about it this way. Most cameras, still or motion, do much better RAW-to-final-output conversion in the software supplied with the camera than in the camera itself. I think it is because the camera CPU is not as fast as the computer we use for post-processing to begin with, and it requires a DSP that has higher power consumption, thus higher heat dissipation. What I like, though, is that he is using RAW or wavelet compression. http://en.wikipedia.org/wiki/Wavelet_compression The nice feature of this compression is that the image is presented at the full resolution of the sensor as soon as you stop or slow down the pan or movement on the screen. If you get some fast movement in the frame you lose a bit of resolution across the whole screen, but then motion blur doesn't let the human eye recognize many details anyway. As opposed to non-wavelet compressions, which show you blobs of quadrants flying all over the screen: just try pressing the pause button while playing most MPEG videos. With wavelets, once you pan a wide-angle view with tons of detail in it and then slow down a bit or stop, the whole screen becomes crystal clear, and incidentally that is exactly the moment when we want to show the viewer the most detail anyway. We don't show fast action scenes using wide-angle lenses, do we? Having RAW output clocking at 9 Gbit/s makes all this compression talk redundant, but it is good to have the compression for low-budget production or previews anyway.
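As a toy illustration of why wavelet codecs behave this way, here is one level of a 1-D Haar wavelet transform. This is my own minimal sketch, not REDCODE or any shipping codec: smooth regions produce near-zero detail coefficients, which quantize away almost for free, while sharp or fast-changing content costs bits.

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages and details."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details  = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

# A slowly varying signal yields tiny detail coefficients:
avg, det = haar_step([10, 10, 10, 12, 20, 20, 20, 22])
print(avg)   # [10.0, 11.0, 20.0, 21.0]
print(det)   # [0.0, -1.0, 0.0, -1.0]  -- almost nothing left to encode
```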
  9. Hello Jan, yes, getting it out of the sensor and de-Bayering is still more art than science. So I understand Panasonic: they didn't want to show how they are getting more from less. I like the approach of Dalsa and a few other manufacturers. They publish as many specs as they can, and right next to the specs there is a white paper explaining how they got there. Sometimes you do not want to publish anything because you don't want a competitor to figure out how you got there; this I understand. But in the case of Panasonic, come on, pixel shifting is a well-known trick to get a higher final pixel count. I see that RED has a RAW output port, so if someone does not like what he gets with the manufacturer's de-Bayering algorithm or conversion to RGB, he can always take the raw data and use whatever is out there, or will be out there in the future. Knowing Graeme, and the ability of CMOS cells to be directly addressed with lightning speed, he will turn this 12MP of Bayer data into 24MP. http://www.nattress.com/ He is behind REDCODE. As opposed to DSLR still cameras, movie cameras deal with dynamically changing patterns, so there is room for creativity beyond the basic resolution of the sensor: each frame is different, so there is more room to recover light information based on the light changes from frame to frame. Andrew
  10. Hal, and others reading this post, seriously: which should one use for a single tube, 'lenses' or 'lens'?
  11. Jan, any manufacturer that is hiding its specifications is usually hiding something that is not to its business advantage. Judging by the openness of the RED camera creation process, they will not hide specifications that would not harm their business. Some lens manufacturers hide MTF graphs and other technical specifications only when the specs do not look good. If the specs look good, they trumpet them all over the media. Yet it is so easy to test and make MTF charts if need be. The same applies to camera sensors. Remember the Panasonic HVX200 sensor mystery? We rented the camera and it took us 4 hours to figure out that they use .5K sensors. We put it on test charts and we got 700 lines, so pixel shifting was in place. They did more damage than good by hiding it from professionals. It is my opinion, and only mine: I have no respect for manufacturers that refuse to give me specs without a valid business reason; it just takes a few days longer to figure them out anyway. Andrew
  12. 4K is defined as a 4096-horizontal-pixel format containing 4096 unique samples for each color (RGB). From what I read on the reduser websites, the RED sensor has 5.5 x 5.5 micron photosites. So for a frame 24 mm wide on a Bayer sensor (the safe area in the S35mm format) we will have: 24 mm / .0055 mm = 4363 photosites, about 6.5% oversampled. 6.5% oversampling on a Bayer sensor is a bit of a small number for getting 4096 unique readings in all color compositions of the picture. For B&W, yes, but for color it is a bit too tight. However, we don't know if 5.5 micron is the dimension of each photocell or the center-to-center spacing, since photocells have to be separated. Let's wait till Monday for this answer. Andrew Ray
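The photosite arithmetic above, as a quick sketch. The 5.5 µm figure is the one quoted from reduser; whether it is the cell size or the center-to-center pitch is, as the post says, unknown at this point:

```python
sensor_width_mm = 24.0   # S35 safe-area width used in the post
pitch_mm = 0.0055        # 5.5 micron photosites, per the quoted spec

photosites = sensor_width_mm / pitch_mm
print(round(photosites))          # ~4364 photosites across the frame width

oversampling = photosites / 4096 - 1
print(f"{oversampling:.1%}")      # ~6.5% more sites than the 4096 target
```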
  13. Related question: when we talk about the lens, can we use 'lenses'? After all, there are many lenses inside the tube. Somehow I tend to use 'lenses' when referring to the whole assembly. Now, if that is correct, should I use 'this lenses' or 'these lenses' when referring to the one tube? I use 'this lenses' and my word processor always marks it as a grammatical error. In context it makes sense, though. Andrew Ray
  14. Dave, you are right on. It is important to separate 4K screen terminology from 4K sensor terminology. I didn't know that we could make such nice use of the 4:3 frame aspect format; I just didn't think about it. Yes, the D20 is not a 4K capturing or even recording camera, but I just use it as an example that in post even the D20 could be considered a 4K recording camera. I am just a bit sarcastic here, because there are some companies out there marketing this way. If RED has 5K horizontal resolution then hmmmm... I hope that the de-Bayer algorithm does high-end math to get 4096 unique horizontal pixels out of it. There was a scientist, I think his name was Nyquist, who said it is impossible to transmit more data per second than the channel bandwidth allows, and there was even a law of maximum transmission speed named after him. He is dead now, and guess what, we are transmitting 56k over a plain telephone channel that is only about 3 kHz wide. By bandwidth alone we could transmit only a few kbit/s; the catch is that capacity also depends on the signal-to-noise ratio (the Shannon-Hartley theorem). Now we have cable TV channels transmitting the 30 Mbit/s bit rate of an HDTV channel in an old 6 MHz single NTSC channel. So I learned one thing in my life: never say never. I guess if you take 2x2, i.e. 4 adjacent pixels of the Bayer pattern, and you scan the whole sensor left-right and up-down with only a one-pixel offset, you can get 5K of unique measurements per horizontal line in the RED 4K sensor. Then you can repeat the same using 3x2 and 2x3 adjacent pixels and get two more sets of measurements. Put it all into some smart algorithm and maybe you can get 4K out of it. Also, if the camera pans even slightly (one pixel width), then you get a whole set of extra extrapolated pixels. Now take the first frame, then the second frame, compare/extrapolate, separate again into two frames and voila, 60fps full 4K, but I want to see it first. Seeing is believing.
As to the oversampling, you see Dave, for dynamic range it really doesn't matter if you have 8MP or 22MP in the S35 frame. We will be combining these pixels back together to form 4096 x ????, so the light and dynamic range will combine as well. It is like two TV antennas giving you 3dB more signal (one stop in terms of light) than a single one. What does matter is the ratio of photosensitive surface to total sensor surface. http://www.dalsa.com/dc/documents/Image_Se...18-00_03-70.pdf For CCD it is, I think, 90%; for CMOS it is 50%. CMOS, though, can move all that other 50% of circuitry underneath the photosensing layer, a kind of multilayer design; once they do that, even CMOS will have 90% surface coverage and much higher speed. See the above article about CMOS and CCD. Andrew
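Two quick calculations behind the claims in this post, sketched with textbook formulas. The SNR figure for the phone line is purely my illustrative assumption; the point is that Shannon-Hartley capacity grows with SNR, which is why a 3 kHz channel can carry far more than 3 kbit/s, and why combining two signals (two antennas, or two photosites) buys about 3 dB:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

def combining_gain_db(n: int) -> float:
    """SNR gain from combining n equal, independent signals."""
    return 10 * math.log10(n)

# A ~3 kHz telephone channel at an assumed 40 dB SNR:
print(round(shannon_capacity_bps(3000, 40)))  # roughly 40 kbit/s, well past "3K"

# Two antennas, or summing two photosites:
print(round(combining_gain_db(2), 2))         # ~3.01 dB, about one stop
```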
  15. Thanks! to all, I really appreciate your comments. Jan, I will try to be more realistic, though I have trouble with that when I smell big changes in technology. Yes, I am judging the changes in cinema/TV in a bit of an egocentric way. Maybe the changes will not be so fast; certainly HDTV didn't come quickly. I hope, though, that progress is more like 2, 4, 8, 16, 32 rather than 1, 2, 3, 4, 5. David, you touched on an interesting subject. Yes, there is a lot of confusion about 1080p, 2K and 4K. I think it is because it is a new standard and everybody sees it and does it a bit differently. And 'everybody' is just a handful of companies. I just try to think about 4K as a fixed 4096 display elements across the screen. Some are saying that 4K is 4000 pixel elements, and yet some say 3,800 pixels, like the Sony 4K projector. I hope that 4K will be 4096, 2K will be 2048, and 1080p is 1920x1080. Again, we are talking about digital displays, LCD screens, projectors. Now we have different aspect ratios of screens: 4:3, 16:9, 1.85:1, 2.35:1, 2.4:1, etc. http://en.wikipedia.org/wiki/Aspect_ratio_%28image%29 For simplicity we can achieve any of these screen ratios just by putting the proper number of vertical pixels in a frame that has 4096 pixels as a horizontal base. We are talking screen pixels, not camera pixels, in RGB nomenclature, so one pixel has one R, one G and one B sub-pixel. Now when we are talking about the camera sensor, it is a different story altogether. To cover all the aspect ratios we should have a sensor shaped like 4:3, but 4:3 will be out after 2010 by FCC decision, so no need for that one. So let's say we start with a 16:9 ratio. A 'poor man's' sensor will have fewer than 4096 pixels horizontally and whatever we need vertically to meet the ratio of the screen. Then in post we will upres it to 4096 x ???? for the final presentation. This will not meet the film quality requirement, but somehow Star Wars was shown in cinemas based on this formula.
Hey, first experiment, let's be forgiving here. Now, if we want to compete with film, then our 4K sensor had better have more than 4096 RGB pixels horizontally. In the old days we said you need 3 times more digital samples per analog interval, but in the modern days of mathematical conversions, extrapolations and side-effect filtering, we are getting closer to 2 times instead of 3. Now, the RGB pixels in a 3xCCD prism-based system we will count differently than pixels in a single CMOS Bayer-pattern sensor, differently again in 3xCCD systems with pixel shift by 1/2 of the width and height of a pixel, and differently again from two Bayer-pattern sensors shifted by one full pixel right and down. And to confuse it even more, we will count pixels differently on a single-chip Foveon RGB sensor. Dalsa, the ARRI D20, and RED all have Bayer-pattern camera sensors. So to call themselves 4K RECORDING cameras they could have even 2048 horizontal pixels and still RECORD full 4096 pixels; what's the problem with recording in whatever resolution you want after upresing? But to call the camera a full 4K capturing system, IMHO we had better see 8192 horizontal pixels in the Bayer sensor. Some will argue that 6144 will do. Well, we have all these mathematicians working on it, so I will not argue here. Sometimes less means more. Seriously. Karl, I agree with you that 8mm or OMNIMAX can equally impress people with good story/acting material, but if you are in nature/science stories most of the time, you can impress people much more easily by showing tons of detail on the screen, so your home theater screen looks more like a window to the outside world than like a screen, and people go wowww! Though you need wide-angle lenses most of the time for this wowww!! Andrew Ray
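The "fixed 4096 horizontal base" idea in the post above reduces to one division per aspect ratio; a minimal sketch:

```python
# Vertical pixel counts for a fixed 4096-pixel-wide frame,
# one per screen aspect ratio mentioned in the post.
ratios = {"4:3": 4 / 3, "16:9": 16 / 9, "1.85:1": 1.85, "2.35:1": 2.35, "2.40:1": 2.40}

for name, ratio in ratios.items():
    print(f"{name}: 4096 x {round(4096 / ratio)}")
# 4:3 -> 3072, 16:9 -> 2304, 1.85:1 -> 2214, 2.35:1 -> 1743, 2.40:1 -> 1707
```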
  16. Max, by 65 mm I was referring to the Master Prime 65mm focal length ARRI/Zeiss lenses. I think the Master Primes are aspherical, not spherical. See here: http://www.arri.com/prod/cam/master_primes/mp_articles.htm Are these lenses rebuilt? Maybe I expressed myself ambiguously, so let us differentiate between the 4K format and the megapixels needed for the 4K format. The 4K format is derived from the fact that it has a 4K horizontal pixel count. Now, at a 2.35:1 screen size it will need about 12MP on a Bayer sensor given good color and pattern proportions in the content of the picture, and about 20MP with adverse content, like a lack of green colors or fine repeating patterns. So we can have many different quality grades of 4K Bayer sensor, depending on how many MP the 4K Bayer sensor has. Also, the Dalsa 4K camera has only 8MP. I agree with you that 35mm film has more than 4K resolution as a master copy, and only if it is shot on the highest quality film, which is normally not used for most shoots. When it comes to the final copy watched by viewers in the cinema, it doesn't come even close to 4K, since the DI copy process is based on 4K sensors (ARRI) and the process requires the light to pass through lenses 3 times: once when we are capturing the image (filming), a second time when we are scanning, and a third time when we are projecting in the cinema. So out of 3 passes through 4K-quality lenses/copy/display you end up with hardly 2K of full quality. Other, non-DI processes introduce even more loss. Now, in a digital 4K camera/presentation process, the lenses are used only once, during acquisition. From this point on, especially when the material is watched on a non-projecting digital screen, the quality of the material is mathematically lossless (zero degradation). I understand that only a few cinemas have the ability to show full 4K material right now, either analog or digital, but our material is not in the news/broadcast category, so it may have a presentation life of more than 10 years.
We know that full-resolution 4K storage of our material will have much higher value 5 years from now than a 2K one. How much would you pay for 10-year-old, low-res good story/acting films versus high-res, high-quality good story/acting? 10 years from now most American households will have 4K projectors/screens competing with the cinemas, and cinemas will be fully digital. Who would want to watch these soft 2K formats then? It will look old. Andrew Ray
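The "three passes through the lenses" argument in the post above can be made concrete with the usual rule of thumb that a chain's MTF at a given spatial frequency is roughly the product of its stages' MTFs. The 0.7 per-pass figure below is purely my illustrative assumption, not a measured value:

```python
per_pass_mtf = 0.7   # assumed response of one optical pass at fine 4K detail

digital_chain = per_pass_mtf ** 1   # lens touched once: acquisition only
film_di_chain = per_pass_mtf ** 3   # film DI: shoot, scan, project

print(digital_chain)            # 0.7
print(round(film_di_chain, 3))  # 0.343 -- about half the single-pass response
```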
  17. Thank you for taking the time, Karl, Lance, Max, Brian, and explaining it here; I really appreciate it. I hope that we are not hijacking the thread, but it is Dalsa's decision to go with custom-made lenses that triggered this discussion. Max, I didn't know that the ARRI 65mm is a rehoused Zeiss medium format lens! Is the Master Prime 65mm rehoused as well? Karl, last month I read a lot of material along your lines of reasoning. Dalsa has an interesting article there. http://www.dalsa.com/dc/documents/Image_Re...PTE_37_2003.pdf I am still not clear on the combined resolving power of film/sensor and lenses. From their calculations it looks like, as long as the lenses have double the resolving power of the sensor/film, you should get close to 90% of the rated resolution at 50% modulation. Also, I hope that the new Dalsa camera will have more than an 8MP sensor in it. David, did they say anything about the sensor in the new camera during the presentation yesterday? What we did here: we tested lenses at 100% modulation and, once selected, we used the D20 to get some test shots at 2MP (the ARRI D20 has a 6MP Bayer sensor) at 50% and 10% modulation. The test charts are B&W. In color the performance is almost the same, provided that we have green content in the test charts; shooting tests without green brings all the results significantly down. We used these charts for 50% and 10% modulation with different colors. http://www.normankoren.com/Tutorials/Lenst...86p_15g_0is.png Everything looks acceptable for the 2K format. When we extrapolate the tests to 4K, nothing makes sense unless the lenses get a better rating in lp/mm. Anamorphic lenses, in terms of resolving power, give you a bit better performance and extra light in the frame, but it is like squeezing the already squeezed lemon again in desperation. I do not discuss the artistic performance of anamorphic lenses here. Somehow anamorphic lenses are not an elegant way to go, especially from an engineering point of view.
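Combined lens/sensor resolving power is often estimated with a root-sum-of-inverse-squares rule; a sketch under that assumption (the function name is mine) reproduces the "double the resolving power gives close to 90%" observation quoted from the Dalsa article:

```python
import math

def combined_resolution_lpmm(lens_lpmm: float, sensor_lpmm: float) -> float:
    """System resolution estimate: 1/r^2 = 1/r_lens^2 + 1/r_sensor^2."""
    return 1 / math.sqrt(1 / lens_lpmm ** 2 + 1 / sensor_lpmm ** 2)

sensor = 100.0  # lp/mm, i.e. a ~200 photosite/mm 4K sensor

# A lens at double the sensor's rating keeps ~89% of it:
print(round(combined_resolution_lpmm(200.0, sensor), 1))  # 89.4
# A lens merely matching the sensor drops the system to ~71%:
print(round(combined_resolution_lpmm(100.0, sensor), 1))  # 70.7
```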
  18. Thanks! Brian. We tested Zeiss UP and MP and Cooke S4, and anything below 65mm does not have sufficient 200 lp/mm resolution away from the center of the frame. Also, chromatic aberration limits the resolution to less than 100 lp/mm except at the very center of the frame. I was thinking about some lenses that are designed for the 70mm format plus an adapter to PL mount, along those lines.
  19. Gentlemen, as you can see from my post count, I am new here. I do underwater and cave exploring and filming. We would like to move to the 4K format this year, and from our limited experience we see that there are no lenses on the popular market with the resolving power to support a 4K sensor, which has about 200 photosites per mm, the equivalent of 100 lp/mm. The lenses have to have a minimum of double that, so that 90% of the resolving power will be preserved. We, like most people here on this forum, are new to 4K. Can anybody help us with good advice on this subject, please? Andrew
  20. Does this mean that there are no good lenses out there that can work with the full 4K format? Are there any lenses that can resolve 400 lp/mm and could be used with a cine-type camera? Andrew