Everything posted by DJ Joofa

  1. Hi Hal, Thanks for the link. Just browsed over it. Seems interesting.
  2. Though conformance to certain parameters of human vision has been utilized in some cases (e.g., chroma subsampling), strict adherence to the principles of vision has not been the criterion when it interferes with picture quality. For example, the NTSC luma coefficients (0.299, 0.587, 0.114) are almost always used with nonlinear, gamma-encoded signals, even though these coefficients are strictly valid only for linear signals; that is done because it results in better color fidelity near the primaries. The parameters that HDTV committees have considered important are aspect ratio, digital sampling structure, total number of lines, colorimetry, and transfer characteristics, among others. Hardware timings have also been important considerations in HDTV; when these standards were being worked out, care was taken that the harmonics of the clock frequency not interfere with the international aircraft distress frequencies of 121.5 MHz (civil) and 243 MHz (military). HDTV has taken several decades to be formalized properly, and there are many standards and resolutions besides 1920x1080. In fact, even for 1920x1080, many MPEG2 encoders used 1920x1088. Even when the horizontal resolution is 1920, there are standards that don't have 1080 lines vertically; for example, SMPTE 260 specifies 1920x1035 active pixels out of 2200x1125. Human vision is a complicated subject. We don't know much about the cognitive stages of vision; we have more knowledge about the lower-level stages (retinal sampling structures, cones, rods, etc.). Though TV manufacturers and other people in this area are generally aware of vision principles at this lower level, those arguments should not be carried too far, as the higher-level processing stages of vision are still not fully understood. As I said above, full conformance with human vision has traditionally been dropped in areas where doing so resulted in more pleasing pictures.
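As a small illustration of the luma point above, here is a sketch (in Python, with made-up linear RGB values and a simple power-law stand-in for the actual transfer curve) contrasting the strictly correct use of the Rec. 601 coefficients on linear components with the common practice of applying them to gamma-encoded components:

```python
# Sketch: the Rec.601/NTSC coefficients applied to gamma-encoded
# (R', G', B') components -- the common "luma" computation described
# above -- versus the strictly correct use on linear RGB.
# The RGB values and gamma of 2.2 are illustrative assumptions.

def luma(r, g, b):
    """Weighted sum with the Rec.601 coefficients."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gamma_encode(v, gamma=2.2):
    """Simple power-law encoding (a stand-in for a real transfer curve)."""
    return v ** (1.0 / gamma)

# Linear-light RGB of some scene color (normalized to 0..1).
r, g, b = 0.5, 0.25, 0.1

# Strictly correct: weight the linear components, giving true
# (relative) luminance.
luminance = luma(r, g, b)

# Common practice: weight the gamma-encoded components, giving
# "luma" (Y'), which is not luminance.
luma_prime = luma(gamma_encode(r), gamma_encode(g), gamma_encode(b))

# The two orders of operation do not commute: encoding then weighting
# differs from weighting then encoding.
assert abs(luma_prime - gamma_encode(luminance)) > 0.005
```

The mismatch is largest for saturated colors near the primaries, which is where the post notes the nonlinear usage happens to look better.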
  3. I think ffmpeg can't do On2 VP6; at least it did not use to, and I have not looked at its code base for some time. For a GUI-based ffmpeg, please try ffmpegX if you are on a Mac. VLC, which is GUI-based, is partly based on ffmpeg as well, I think.
  4. 700 Kbps should be fine for 640x360. What is the content of your video? I think you might try On2 VP6. Also, please don't encode to SWF, as it will limit you in many ways; keep the video separate as FLV. You might try putting key frames several seconds apart. Flash video's On2 VPxx is generally considered to be a variant of H.263/H.264, so it is not that bad.
  5. Yes I have used Twixtor. But, my experience has only been to learn the tool and play around with it. I found the product okay. But, somebody who does effects on a daily basis can give better advice. My experience on this tool is limited.
  6. Hahaha, that's got to be one of the better comments I have gotten :lol:
  7. The MPEG2 transport scheme can also carry formats other than MPEG2 compression. A transport scheme is different from a compression scheme. Therefore, it is possible that cable/air are using MPEG2 transport but not MPEG2 compression. Just to be technically correct, and it is not hairsplitting, there are few examples of a pure "digital signal", and the type of transmission on cable/air we are talking about is not one of them. The signal is analog in these cases, and what is meant by "digital signal" is that the analog signal is varied (encoded) using digital data, by employing techniques such as quadrature amplitude modulation or phase-shift keying. Therefore, the signal stays analog, and only its interpretation is digital, unlike a broadcast NTSC signal whose interpretation is also analog.
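The "digital data chooses parameters of an analog carrier" idea can be sketched as follows. QPSK (a simple phase-shift-keying scheme) is used for simplicity; real cable/broadcast systems typically use denser constellations such as 64-QAM, and the sample counts here are purely illustrative:

```python
# Sketch of why the transmitted signal stays analog: the digital bits
# only select the phase of a continuous-valued carrier waveform.
import math

def qpsk_symbol(bits):
    """Map a 2-bit pair to one of four carrier phases (Gray-coded)."""
    mapping = {(0, 0): math.pi / 4,
               (0, 1): 3 * math.pi / 4,
               (1, 1): 5 * math.pi / 4,
               (1, 0): 7 * math.pi / 4}
    return mapping[bits]

def modulate(bitstream, samples_per_symbol=8):
    """Produce carrier samples; only the *interpretation* of the
    phase is digital -- the waveform itself is analog-valued."""
    out = []
    for i in range(0, len(bitstream) - 1, 2):
        phase = qpsk_symbol((bitstream[i], bitstream[i + 1]))
        for n in range(samples_per_symbol):
            t = n / samples_per_symbol  # one carrier cycle per symbol
            out.append(math.cos(2 * math.pi * t + phase))
    return out

samples = modulate([0, 0, 1, 1])
# samples are continuous-valued (e.g. cos(pi/4) ~ 0.707), not 0s and 1s
```

A receiver recovers the bits by estimating which of the four phases was sent, which is why noise on the analog waveform can be rejected up to a point.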
  8. Chris, it's about time you acquired some global perspective ;) and I shall provide you with one example: the movie Waterborne (http://www.imdb.com/title/tt0423514/), which was made on a DVX100B (if I remember correctly what the director said in a Q&A session), had a major Bollywood star (Shabana Azmi) in it. Please don't drag this into a Hollywood/Bollywood star rat race over their respective popularity.
  9. Thanks for the explanation and the reference above. I spent a long time playing around with the calculator at the link.
  10. As I said before, a 50 in. (diagonal) HD TV needs to be ~6.5 feet away and not 10 feet away for optimal viewing. 10 feet would be the distance for a 75 in. (diagonal) HD TV. And, 6.5 feet and perhaps even 10 feet are manageable distances in household settings.
  11. I have used products from RevisionFX sparingly, and if I remember correctly I might have tried a demo version of the above product. I don't fully remember my experience with it, though my general impression is that RevisionFX has reasonable products.
  12. A 50-inch HDTV could be fine if the viewing distance is appropriately accommodated (~6.5 feet in the 50 in. case). For HD TV, the optimal viewing distance is about 3 times the picture height; using the standard 10-foot distance from the TV according to some studies, that would mean about a 75-inch (diagonal) HD TV.
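The arithmetic behind the 3-x-picture-height rule can be checked with a short sketch (assuming a 16:9 aspect ratio; the exact multiple people quote varies between sources):

```python
# Worked check of the "optimal distance ~ 3 x picture height" rule
# discussed above, for 16:9 displays (assumed aspect ratio).
import math

def optimal_distance_ft(diagonal_in, aspect=(16, 9), multiple=3.0):
    """Viewing distance in feet for a given diagonal in inches."""
    w, h = aspect
    height_in = diagonal_in * h / math.hypot(w, h)  # picture height
    return multiple * height_in / 12.0              # inches -> feet

# 50-inch set: close to the ~6.5 ft figure quoted above.
d50 = optimal_distance_ft(50)   # ~6.1 ft
# 75-inch set: close to the ~10 ft figure.
d75 = optimal_distance_ft(75)   # ~9.2 ft
```

For 16:9, the picture height is diagonal x 9/sqrt(16^2 + 9^2), i.e. roughly half the diagonal, which is why the rule of thumb lands near these figures.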
  13. Hi Chris, Sorry for being a little late in replying -- I must be too involved in merrymaking. Motion blur is a complicated subject. At the cinematography level it is sufficient to know that a longer exposure time will result in more motion blur. However, at the signal processing level there exist algorithms that try to eliminate/control/reshape motion blur without assuming much a priori knowledge; they work sometimes, but of course, not all of the time.
  14. It is misleading to consider dynamic range and latitude on the same medium if the scene illumination was such that more or less the full range available on the negative has been utilized. The fact remains that print film typically has a smaller range than the one captured on the negative, for various reasons. Hence, to select which portion of the negative's range gets developed into the range of the print film, some of the negative's information is either discarded or crammed into the smaller range of the print film. This gives leverage to the cinematographer, who can under- or overexpose within the larger range of the negative knowing that the full range will not be displayed, so a portion can always be extracted and then shifted accordingly to compensate for under/overexposure. Think of latitude as an output-side parameter (the portion of the negative that goes to print film) and dynamic range as an input-side parameter (the full range as recorded on the negative). Therefore you have the following equation: dynamic range = latitude (the portion that you select as viewable) + extra information that stays on the negative.
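The equation above can be put into a tiny numeric sketch; the stop counts here are illustrative assumptions, not measured film data:

```python
# Numeric sketch of: dynamic range = latitude + information that
# stays on the negative. All values in stops and purely illustrative.

negative_range_stops = 10.0   # dynamic range captured on the negative
print_range_stops = 7.0       # smaller range the print film can show

# Latitude: the window of the negative chosen for the print.
latitude_stops = print_range_stops

# Headroom: the extra information that stays on the negative, which is
# what lets the cinematographer shift the window to fix under/overexposure.
headroom_stops = negative_range_stops - latitude_stops

# The input-side range equals the output-side window plus the headroom.
assert negative_range_stops == latitude_stops + headroom_stops
```

With these assumed numbers, a shot could be pushed up to 3 stops off and a usable 7-stop window could still be extracted from the negative.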
  15. True, and an important factor in the retinex theory of vision. That would also imply that a theater projection will be more forgiving of minor color off-balance than a living room TV display, since most of the light/visual information for color constancy is provided by the light from the film source (discounting the flare reflected from walls), whereas it is difficult to control the extraneous lighting around a TV.
  16. Won't go into motion blur issues in this post. A longer shutter speed (exposure time) is good for noise reduction, with the assumption that proper calibration is done for noise sources such as dark current, which become more problematic at longer exposures if the scene illumination is low. For normally illuminated scenes, longer exposure times work well for noise reduction. Another advantage of longer exposure times is that even if the exposure time is mismatched to the line frequency, the (envelope of the) amplitude of flicker decreases as exposure time increases. Please note that the actual amplitude of flicker fluctuates, but the fluctuation gets lower as exposure time increases, as shown below: In addition to intensity changes between successive frames, flicker also manifests itself as banding within a single frame, which is easier to detect in uniform-intensity areas, as shown below: As mentioned before, the wavy/banding amplitude shown in the image above can be reduced by choosing longer exposure times, and for certain exposure times the amplitude totally goes away (i.e., the signal converges to a constant > 0, so each frame will have a gray cast, but since all frames have the same cast it will be unnoticeable).
  17. I hope that you don't see any flicker at 1/45. In theory, there should be a little, but in practice you might have to try.
  18. 1/100, 2/100=1/50, 3/100 (though this one is close to 1/30 = 33 milliseconds and depends upon implementation, due to many signal dependent issues including blanking).
  19. Yes, for a camera based upon a rolling shutter chip. However, framerate does bound the max. exposure time (shutter speed). For 25fps, max is 1/25 seconds, so exposure times of only 1/100, 2/100=1/50, 3/100, (and perhaps 4/100=1/25 depending upon the implementation) might work.
  20. If the digital camera is a rolling shutter one, then, unlike a film camera, framerate has no effect on flicker if exposure time is matched to line frequency. Okay, framerate does have an effect on the phase of the flicker, however, the amplitude of the flicker is controlled by exposure time, and when amplitude is made to go to zero by selecting appropriate exposure times (n/(2 * line frequency), where n = 1, 2, ...) then the dependency on the framerate is removed.
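The exposure-time formula above can be sketched with a simple integrated-sinusoid model of lamp flicker (a common first-order model; the 50 Hz mains and the specific exposure times are assumptions for illustration):

```python
# Sketch of why exposure times T = n / (2 * line frequency) null flicker:
# lamp brightness fluctuates at twice the mains frequency, and averaging
# that sinusoid over the exposure window gives a sinc-shaped envelope
# that hits zero exactly at those exposure times.
import math

def flicker_envelope(exposure_s, line_freq_hz=50.0):
    """Relative flicker amplitude after exposure averaging
    (1.0 = no averaging at all)."""
    f_flicker = 2.0 * line_freq_hz          # e.g. 100 Hz on 50 Hz mains
    x = math.pi * f_flicker * exposure_s
    return abs(math.sin(x) / x) if x else 1.0

# Nulls at n/100 s on 50 Hz mains: 1/100, 1/50, 3/100, ...
a = flicker_envelope(1 / 100)   # ~0 (null)
b = flicker_envelope(1 / 50)    # ~0 (null)

# Mismatched exposures leave residual flicker, but the envelope
# shrinks as the exposure grows -- 1/45 flickers less than 1/60.
c = flicker_envelope(1 / 60)
d = flicker_envelope(1 / 45)
```

This also matches the earlier remark about 1/45: it is not a null, so in theory a little flicker remains, but the residual is small.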
  21. Hi John, that seems right. I think Red flavor of wavelets is intra-frame only.
  22. I am sorry if you found my post offensive, and I apologize.
  23. Hi Michel, Things are a little different in the temporal domain. Using some nonlinear approaches, the signal can be made a fixed point (or root signal) of the filtering operation, so that mostly noise gets suppressed. You are right that sometimes temporal filtering will smear/blur image information. However, careful adaptive filtering ensures that enough information is still left in an image even after some slight blurring, so you can still recover trajectories, etc.
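A minimal sketch of this kind of nonlinear temporal filtering, using a per-pixel median over three frames (a toy 1-D example, not any particular product's algorithm):

```python
# Sketch of the root-signal idea: a temporal median suppresses
# impulsive noise, while a signal that is constant over time is a
# fixed point of the filter and passes through unchanged.
from statistics import median

def temporal_median(frames):
    """Per-pixel median across a list of equal-length 1-D 'frames'."""
    return [median(pixels) for pixels in zip(*frames)]

# Three frames of a static 1-D scene; frame 1 has an impulsive hit.
f0 = [10, 10, 10, 50, 50]
f1 = [10, 200, 10, 50, 50]   # pixel 1 corrupted by noise
f2 = [10, 10, 10, 50, 50]

clean = temporal_median([f0, f1, f2])
# The impulse is rejected; the underlying signal, being a root signal
# of the median, is returned exactly.
```

A linear temporal average would instead smear the impulse into all output values; the median's ability to reject it outright while leaving root signals untouched is what makes the nonlinear approach attractive.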
  24. Though he has never seen a Red camera before, after seeing some online footage and reviewing technical specifications, DJ Joofa came to the conclusion that, though the Red One camera has some obvious flaws, it appears to be a good camera. (It is a widely held viewpoint that DJ Joofa sincerely believes the "appeal to authority" in the quoted comment above is unwarranted: though Lubezki and Malick are well-established names, their patronage of Super 35 reflects a personal taste. While the viewpoint Lubezki/Malick hold might have resonance with a large segment of the motion picture industry, that does not alter the fact that the Red camera appears to generate acceptable-quality cinematic images.)
  25. Hi David, The myth is that certain incorrect notions about the feasibility of real-time implementations of certain types of processing have taken hold, sometimes because if a typical manufacturer could not accomplish things the best way, it is assumed they can't be done that way. It is true that many operations are more difficult to do in real-time hardware than in software. However, for the two things you mentioned, viz., compression and deBayering: compression is already being done successfully in real time by many cameras, including the Red One. As for deBayering, very good quality hardware-accelerated deBayering can be done in real time.
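As a rough illustration of how simple the core of a deBayering step can be, here is a toy bilinear-interpolation sketch (assuming an RGGB pattern and made-up pixel values; real cameras use far more sophisticated adaptive methods, but even this kernel shows why the operation maps well onto hardware):

```python
# Toy deBayering (demosaicing) by bilinear interpolation on an assumed
# RGGB Bayer mosaic. Border pixels are skipped for brevity; each output
# value needs only a few neighboring reads and adds, which is why the
# operation parallelizes well in hardware.

def green_at(raw, y, x):
    """Estimate green at a non-green (red or blue) site by averaging
    its four green neighbors."""
    return (raw[y - 1][x] + raw[y + 1][x]
            + raw[y][x - 1] + raw[y][x + 1]) / 4.0

# Toy 4x4 RGGB mosaic (R G R G / G B G B / ...), values illustrative.
raw = [
    [100, 80, 100, 80],
    [80, 60, 80, 60],
    [100, 80, 100, 80],
    [80, 60, 80, 60],
]

# Green at the interior red site (2, 2): all four neighbors are green.
g = green_at(raw, 2, 2)   # (80 + 80 + 80 + 80) / 4 = 80.0
```

The same neighbor-averaging pattern, with different offsets, fills in the missing red and blue samples; each output pixel is an independent small computation, which is exactly the shape of workload that real-time hardware acceleration handles well.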