Keith Walters

Premium Member
  • Posts

    2,255
About Keith Walters

  • Birthday 11/22/1952

Profile Information

  • Occupation
    Other
  • Location
    Sydney Australia

  1. Originally, TV studio cameras typically had about an 8-stop (~1:256) dynamic range. 256 centimetres is about 8.5 feet, a little over the height of a typical living room ceiling. On the same scale, 17 stops would equate to a ceiling about 1.3 kilometres high, about eight-tenths of a mile!
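The stop-to-ratio arithmetic in that post is just powers of two. A quick sketch of it in Python (the function name and unit conversions are my own, added for illustration):

```python
# Each photographic stop doubles the light level, so an n-stop
# dynamic range corresponds to a contrast ratio of 1 : 2**n.
def stops_to_ratio(stops):
    return 2 ** stops

ratio_8 = stops_to_ratio(8)    # 256, the ~1:256 range of old studio cameras
ratio_17 = stops_to_ratio(17)  # 131072

# The ceiling analogy: map the ratio directly onto centimetres.
print(ratio_8 / 30.48)    # 256 cm is roughly 8.4 feet
print(ratio_17 / 100000)  # 131072 cm is roughly 1.31 km
```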
  2. I can't believe I'd never heard of Betty Brosner before! https://youtu.be/2HHTmPukoy8 Not sure what that has to do with Arri though....
  3. A long time ago in a galaxy far away, TV production studios used to have special "standard TV" speakers, which were specifically designed to emulate the frequency response of the "average" mono speakers in mass-produced consumer TVs. Editors would routinely check the sound through both the "Mr Average" speakers and what was deemed to be "TV Hi Fi" in order to strike the best balance between the two. I don't know whether something similar was done for movie sound, having "Flea Pit" and Dolby Surround test speakers, but whatever, those days are clearly long gone. I have no trouble at all with the dialogue from TV shows made before about 1990, but after that, it seems that the assumption is that everybody has a dedicated Home Theatre room with full surround and mighty subwoofers. I routinely take insertable ear protectors to cinemas now. In fact, that could be a new consumer product: "HiFi" hearing protectors, that just attenuate the sound without "colouring" it in any way.
  4. This one sounds more plausible @ US$300 for 1-99 pieces https://www.alibaba.com/product-detail/WINAIT-1080-1044p-super-8-roll_60699531087.html No supplier offers them for anything like $48, so it's got to be BS.
  5. https://www.shiees.com/product/transverter/?fbclid=IwAR0eruxbpFAkw4pkrGrllbDPQJu8Nabz5NKKdSx7gWlHyhjWoxRObu8NWwg This only just appeared in my Facebook feed. Has anybody else heard of it? Sounds too good to be true. The video looks suspiciously like CGI. There have been other "too good to be true" products advertised online that turned out to be scams. The best one was a DVD-VHS combo with implausibly good specs.... On closer examination you could see that the image wasn't of a real object.
  6. I don't know how I missed this before! They're up to episode 4 (available on YouTube) and episode 5 is in the pipeline. I found it very entertaining viewing. I always thought Red were full of it with regard to who actually made their sensors and other equipment, and this confirms it. But the big question remains: why?! All the major manufacturers of electronic cameras seem to have no problem revealing who actually makes their sensors, and I could fully believe that an outfit such as Sony makes their own. Meanwhile, Apple still haven't given up on trying to overturn the Red "RAW" patents: https://petapixel.com/2019/08/19/apple-goes-after-red-over-keystone-raw-video-patent/
  7. A focussed image is made up of an infinite number of points of light produced by converging cone-shaped beams. At the plane of maximum focus (the "focal plane") the photons are most tightly converged; either side of that, instead of points of light they produce circles, and the overall result is what we call a "blurred" image. The further you get from the focal plane, the larger the cross-section of the "cones" that is intersected, resulting in larger circles and less focus. If there is no iris fitted, the angle of these "cones" is determined by the size of the front element of the lens. When an object is being imaged, photons from every point of light on its surface strike the entire front surface of the lens and are eventually re-converged into the "cones" striking the imaging surface. Obviously the size of the front element is going to determine the angle of these cones. For example, a lens 2 inches in diameter produces cones with twice the angle of one with a 1-inch diameter, so the out-of-focus "circles" grow in size twice as fast with the 2-inch lens; in other words, it has a much shallower depth of field. If you interpose an iris somewhere between the front element and the focal plane, it has the effect of reducing the effective diameter of the front element, reducing the angle of the cones, and consequently increasing the depth of field. If you reduce the aperture down to the size of a pinhole, the angle of the cones becomes extremely small, the photons are almost parallel, and it doesn't really matter where the image sensor is located: you have "infinite" depth of field. Which is exactly what you get with most small consumer cameras.
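The cone geometry described above reduces to similar triangles: the cone narrows from the aperture diameter down to a point at the focal plane, so the circle cut by a displaced sensor plane scales linearly with both the aperture and the offset. A minimal sketch, with my own function name and illustrative numbers:

```python
def blur_circle_mm(aperture_mm, image_dist_mm, sensor_offset_mm):
    # Similar triangles on the converging cone: a sensor displaced from
    # the point of focus intersects a circle whose diameter grows in
    # proportion to the aperture and to the displacement.
    return aperture_mm * abs(sensor_offset_mm) / image_dist_mm

# Doubling the aperture doubles the blur circle for the same defocus,
# which is the 1-inch vs 2-inch front element comparison in the post:
c_small = blur_circle_mm(25.4, 50.0, 1.0)  # ~1-inch aperture -> ~0.5 mm circle
c_large = blur_circle_mm(50.8, 50.0, 1.0)  # ~2-inch aperture -> ~1.0 mm circle
```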
  8. Cinema lenses generally have larger diameters to minimize the smallest available depth of field, that is, to maximize the de-focussing of parts of the subject not in the plane of optimal focus. You could have two lenses with the same focal length but one with a front element one-quarter the diameter of the other; the smaller one will behave more or less like the larger one stopped down by the iris, that is, with a greater depth of field. Cinematographers mostly use lenses with large front elements simply because there is no other practical way of capturing the shallow depth of field required for conveying the illusion of closeness to the subject. The only other way would be to use a larger film format such as 65mm. That was one reason for the popularity of the Canon 5D: its oversize image sensor. Otherwise it's not a particularly great video camera.
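The quarter-diameter comparison can be put in numbers with the standard thin-lens depth-of-field approximation (this formula is textbook optics, not from the post, and the focal length, f-numbers and circle of confusion below are my own illustrative choices):

```python
def total_dof_mm(focal_mm, f_number, coc_mm, subject_mm):
    # Thin-lens approximation, valid when the subject distance is much
    # larger than the focal length: DOF ~ 2 * N * c * u**2 / f**2
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# 50 mm lens focused at 3 m, 0.03 mm circle of confusion.
# f/2 gives a 25 mm effective aperture; a front element one-quarter
# that diameter (6.25 mm) corresponds to f/8.
dof_wide_open = total_dof_mm(50, 2.0, 0.03, 3000)  # ~432 mm
dof_quarter   = total_dof_mm(50, 8.0, 0.03, 3000)  # ~1728 mm, i.e. 4x deeper
```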
  9. This document gives a lot of information on lens types https://www.jaycar.com.au/medias/sys_master/9144062378014/Primer-Video-Camera-Basics-Jaycar.pdf
  10. Looks like they pulled the plug: https://www.hollywoodreporter.com/news/panavision-sim-saban-capital-acquisition-abandon-merger-plans-1191387 Wonder what happened there.
  11. It was the classic case of the Tail Trying To Wag The Dog™. With virtually every other camera format you'd just shoot and present your files to the production house. You didn't have to waste time trying to establish whether your editor-of-choice (or purveyor of facilities for same) could be arsed installing the Red bloatware on their systems, or updating their existing editing software for the benefit of a clientele who probably couldn't afford them anyway, and/or were potentially more trouble than they were worth. This was particularly so a decade ago, when the necessary download bandwidth and hard drive real estate were considerably more costly than they are now. I suppose the main reason Red started encrypting the R3D files was that FFMPEG conversion spat out what was actually there, rather than letting Red's proprietary "Turd Polishing"™ software get its hands on it :rolleyes:
  12. I seem to remember Ronald Perelman spouting much the same gung-ho financial-speak over 20 years ago. In the end, after spending frightening amounts of his own money clearing absurd levels of bond debt, Panavision was seized from him by the banks and he was left holding an empty bag. "Advancements in technology and the emergence of streaming have fundamentally changed how consumers watch and discover content. This is driving significant growth in the market for production and post-production services. This secular trend creates a tremendous opportunity for Panavision to leverage its leading technology and pursue opportunistic acquisitions to grow in a manner that is agnostic to the content creator and distribution channel." Most of that is quite correct, but I don't think Panavision have any particular expertise in that area; that's the problem.
  13. There are a few experimental prism 3-CMOS 8K cameras kicking around from Hitachi and a couple of others, but details are extremely scarce. It's the same issue: there are plenty of single-chip designs around, but only 3-chip cameras can deliver the short image latency needed for live broadcasts. I really don't think over-the-air broadcasting is ever going to be up to the task though. It's more likely to be delivered by 5G wireless.
  14. Yes, I know all that. Long before there were even silicon sensors, there were camera tubes fitted with colour stripe filters to make "single tube" colour cameras. However, it wasn't until well into the 21st century that portable computing technology got both powerful and cheap enough to handle all the processing required to get your "very decent image". Before that, single-sensor colour cameras were always the "poor relation." The only reason for developing high quality Bayer sensors for "digital cinematography" was simply that it allowed cinematographers to use the same lenses as they had been used to using on 35mm film cameras, since making a 3-chip 35mm-sized prism colour separation system was not practical. I just thought it ironic that after all that "prism-is-dead" waffle about 2K cameras from the desperate-and-clueless wannabe element, Sony have still seen fit to use 3-chip prism optics for their 4K cameras. Maybe, just maybe, Sony know a bit more about it than they do..... "Pretty sure a lot /all ? of sports is broadcast 4K in Japan now.." Pretty sure you're wrong. There has been some 4K, but from what I can glean from Japanese news sources, it hasn't exactly set the world on fire. One major problem is that almost everybody in Japan who wanted an HDTV has now got one, and is seeing little reason to upgrade to a 4K model. Plus a lot of the TVs with 4K panels don't actually have tuners that can receive the 4K broadcasts. But anyway, my question has been answered more or less. 4K OB systems are available, but it would appear that by the time it hits the customer's 4K screen, it's going to be something rather less than that.