
Michael Most

Basic Member
Posts: 765
Everything posted by Michael Most

  1. If the monitor in the color suite was a CRT (if this was done in a major post facility, that's entirely possible) and now he's looking at it on an LCD or plasma, that's all the explanation you need. CRTs tend to hide things like low-level noise and grain. LCDs and plasmas - particularly LCDs - tend to emphasize it. This comes as a shock the first time one encounters it, but it's absolutely the case, which is one reason why a lot of those same facilities are now using plasmas as their color correction monitors (not the only reason, nor the most important, but a reason nonetheless). This would be noticeable on things like fades. Making a tape from 422 vs. 444 is not going to mask or emphasize noise. In fact, the difference between the two is usually undetectable by most human eyes on most monitors. If he thinks it's "better," it could easily be his imagination.
  2. Any guesses as to why so few people (other than you and me, of course) seem to understand that?
  3. Not the only one. The Phantom records uncompressed RAW, and the Silicon Imaging 2K allows uncompressed recording as well. And although it's no longer built or offered, so did the Dalsa Origin. In some ways, the Viper would also qualify, even though it's a 3 sensor, RGB device, in that the signals are not processed in camera in Filmstream mode.
  4. Most shows shoot 32,000 in the first 3 days! The average drama today exposes about 10,000 feet a day for 8 days. Many episodes will hit 100,000 feet or more. That's what happens when you have two cameras rolling on almost every setup, and, shall we say, slightly less experienced directors than we used to have. I do recall the days when we used to shoot for 7 days, expose about 40,000 feet total, and print about 2/3 of that. And that was on 4 perf. And we seemed to get just as much coverage as we often do today. Funny how that works...
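To put rough numbers on those footage figures, here's a quick screen-time sketch. The frames-per-foot constants are the standard 35mm figures (16 frames/ft for 4-perf, 64/3 frames/ft for 3-perf); the footage totals are just the examples from above, not production data.

```python
# Rough screen-time math for 35mm footage at 24 fps.
# 4-perf 35mm: 16 frames per foot; 3-perf: 64/3 frames per foot.
FPS = 24

def screen_minutes(feet: float, frames_per_foot: float, fps: int = FPS) -> float:
    """Convert feet of exposed film to minutes of screen time."""
    return feet * frames_per_foot / fps / 60

# 40,000 ft of 4-perf (the older 7-day episode total mentioned above):
print(round(screen_minutes(40_000, 16), 1))        # ~444.4 minutes exposed
# 80,000 ft of 3-perf (10,000 ft/day over an 8-day schedule):
print(round(screen_minutes(80_000, 64 / 3), 1))    # ~1185.2 minutes exposed
```

Either way, the shooting ratio against a ~42-minute episode is what makes those totals striking.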
  5. That is almost always a problem created by either your local affiliate or, in the case of some cable networks, the network feed. In the case of broadcast networks, there is now only one feed sent, the HD feed. This is the case for all of the US based networks, at least. Any reformatting is done by the local affiliate and has nothing to do with how the show is delivered or mastered. It's not intended for or marketed to lower budget productions, although the rental cost has decreased considerably since its original introduction.
  6. Ahh, but that's because you were with Viacom, one of two early enlightened companies (my own employer Lorimar being the other). And just to show how long both of us have been around, we (at Lorimar) started using 3 perf in 1986 (with Max Headroom, later with virtually all of our shows). I don't think Viacom adopted it until much, much later - in 1987 :rolleyes: For the major studios, as I recall, 3 perf was adopted first for sitcoms around 1989 or 1990 or thereabouts - basically around the same time Panavision came out with 2000 ft. mags for the G2s. Using 3 perf allowed for almost 25 minutes of run time, allowing an audience sitcom to do only one or, at the most, two reloads. Single camera really didn't embrace 3 perf until the late 1990s, when 16x9 came into play, and it became the standard shortly after that when HD finishing came along. Please feel free to correct my timetable if you recall it differently, though.
  7. Virtually all television programs that are shot on 35mm have been shot on 3 perf format for a number of years now. 4 perf hasn't really been used for television series for at least 8 or 9 seasons (at least 15 on sitcoms), certainly since 16x9 HD deliveries became standard. The standard method of getting 4x3 from 16x9 is to do a center crop extraction. This is the method generally used regardless of origination. All aspect ratio changes have been done electronically from the HD master for a number of years, so there is no difference between material originated on HD video and film in that regard. It is rather impractical in today's world to light for 200 ASA across the board on a television series. Possible, but not very practical - which is one of the reasons, quite frankly, why Red hasn't made a lot of inroads in network television. As for dirt, regardless of how well the film is handled, there will always be dirt issues, and they are magnified almost fivefold with 16mm due to the size of the physical frame. Any random dirt is much more noticeable because the same size dirt fleck is much larger in relation to the image size than it is on 35mm. Most 16mm shows (currently, Burn Notice, Monk, One Tree Hill, and Chuck are all shot on S16) will budget 6-8 hours of dirt cleanup per episode. That's a lot of hours and a decent sum of money to spend just to retain film colorimetry.
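The center crop extraction mentioned above is simple arithmetic: keep the full height of the 16x9 raster, trim equal amounts off each side to reach 4:3, then scale the result down for SD delivery. A minimal sketch (the 1920x1080 raster is the standard HD example):

```python
# Center-crop extraction: deriving a 4x3 frame from a 16x9 HD master.
# Full height is retained; the width is narrowed symmetrically.

def center_crop_4x3(width: int, height: int) -> tuple[int, int, int]:
    """Return (crop_width, crop_height, x_offset) for a centered 4:3 crop."""
    crop_w = height * 4 // 3          # width that gives a 4:3 ratio at full height
    x_off = (width - crop_w) // 2     # equal trim from each side
    return crop_w, height, x_off

print(center_crop_4x3(1920, 1080))    # (1440, 1080, 240)
```

The 1440x1080 crop is then scaled down to the SD raster, so it's a reduction, not a blowup.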
  8. Even more interesting to note that the vast, vast majority of those shows are going to Genesis, F35, F23, and some use of 900s (and their successors) and Panasonic. I might be wrong, but I don't know of any network shows other than Southland going to Red. Some cable series have (Leverage and The Cleaner come to mind), but not network programs. That's not to say they won't, just that as of now, they haven't.
  9. It may have similar colorimetry, but it doesn't have the depth of field characteristics, sharpness, or grain characteristics of 35mm. 16mm also creates some problems when you need to do things like visual effects. There are also those who simply dislike the degree of visible grain on most S16 stocks shot under common conditions. With S16, there is also the additional cost - sometimes considerable - of performing dirt cleanup in post, something that is unnecessary with electronic alternatives. So whether S16 is cost competitive with most of the electronic alternatives depends, in part, on whether you look at the big picture. The move to digital formats today is primarily centered around "big chip" devices like Genesis, F35, D21, and Red, which all provide an image that retains a lot of the characteristics of 35mm capture, in particular, rather shallow depth of field. Having been involved more than once in comparative tests, I can say that for many, the look of modern electronic cameras, such as the ones I just mentioned, is a more pleasing, richer, and cleaner look than that achieved by S16. You may not agree, but that is simply the case. Many of those people look at S16 as an attempt to needlessly hang on to film technology. And even though I have a long, long history in film, I have to say that on some level, they have a point.
  10. I really wouldn't consider Twitter a bastion of truth, although in this case, it is true that 24 (which I inadvertently omitted from my film show list) started the season on 35mm and will, for the moment, stay on 35mm. That might or might not continue to the end of the season, however, and Rodney is constantly looking at various electronic alternatives. So far, none has completely met their needs.
  11. Yes. The only one I'm not 100% positive about is the original Las Vegas CSI (I'll try to confirm it). New York and Miami have definitely gone to digital. As John mentioned, the move away from film was severely accelerated by the SAG situation. I would say that it pushed up the timetable by at least a year or two, and as a result has left a lot of post businesses (which are, in general, heavily invested in film transfer equipment) seriously reeling. The move was inevitable, but nobody expected it to happen this quickly.
  12. It would likely have gotten cancelled without cutting their costs, that's true. Going to a digital camera was only one - and a somewhat minor one - of the ways they accomplished that. The same is true of almost every drama on network television. All have been required to cut costs, because in case people haven't noticed, network numbers are nowhere near what they used to be and their financial model is changing. No different than any other industry (or what's left of industry) in this economically disastrous time. As for "the industry going digital," you're going to have to get used to it, certainly for television. This coming season, there will be far fewer film shows than ever. Many established film shows will be switching to digital shooting, including all of the CSIs, Criminal Minds, Numbers, NCIS, and a few others. In addition, every new show will be digitally originated with the possible exception of Eastwick. In fact, besides Lost, Brothers and Sisters (also rumored to be going to digital), Ugly Betty, Grey's Anatomy and its spinoff, Heroes, Chuck, House, and One Tree Hill, I'm hard pressed to think of any other dramas still on film. My guess is that within one or two more seasons, there won't be any. The sea change long predicted has actually occurred. Such is the way of the world.
  13. While that's partially true, there are a lot of advantages to SR tape, even in 2009. Compared to your method, material on SR tape can be played immediately, in real time, on any SR deck (and there are an awful lot of them), in different formats (almost all SR decks have format converter cards). LTO4 may be a fine physical tape data streaming format, however, there is no standard for how files are written to it, and it is nowhere near real time (i.e., 24 fps) when either writing or reading. The format that is used must be available to both the supplier and the user, so whatever backup system is used must be used by both. In many cases, this is not optimal because many of the more efficient backup systems create a local database for file restoration. While it is true that you can use standard TAR files, these are slower to retrieve. Post houses continue to use videotape in part because they're well set up for it. But they also continue to use it because there's nothing in the data world that can match its combination of real time performance, high quality, robust error checking, and common worldwide interchange without conversion. Having a transfer facility deliver both an SR videotape and digitized files (in uncompressed form, ProRes HQ, DNxHD 175x, or whatever other "low loss" format one wants and can use) gives you the best of both worlds.
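The real-time point above is easy to check with back-of-the-envelope arithmetic. LTO-4's published native throughput is roughly 120 MB/s; uncompressed 10-bit RGB 1080p at 24 fps needs noticeably more than that, so even the raw tape transport can't keep up with the picture, before you add any filesystem or restore overhead:

```python
# Data-rate check: uncompressed HD vs. LTO-4's ~120 MB/s native throughput.

def rgb_rate_mb_per_s(width: int, height: int, bits_per_sample: int,
                      samples_per_pixel: int, fps: int) -> float:
    """Uncompressed video data rate in MB/s (1 MB = 1e6 bytes)."""
    bits_per_second = width * height * samples_per_pixel * bits_per_sample * fps
    return bits_per_second / 8 / 1e6

rate = rgb_rate_mb_per_s(1920, 1080, 10, 3, 24)   # 10-bit 4:4:4 RGB, 24 fps
print(round(rate, 1))   # ~186.6 MB/s -- well above LTO-4's native rate
```

Lighter 4:2:2 encodings come closer, but the tape format still has no standard file layout, which is the other half of the interchange problem.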
  14. I disagree. The notion that faster is better is like the notion that fast food is better than a properly designed meal. Watching TV news - or looking at clips on the Internet, for that matter - is a way of force feeding your brain. It gives you no time or opportunity to properly consider and digest what it is you're being told, and it gives you a much less nuanced view of what's going on. You also can't pick and choose what you want to know in more depth (you can on the Internet, but you certainly can't on TV). There is value in the printed word, and whether you think about it or not, there's value in advertising that's local and directed to the local market. You get that in a newspaper and you don't get it through visual media, either broadcast or broadband. Under your scenario, maybe books are outdated as well, since watching a movie is faster and requires no actual imagination or thought. Do you also favor getting rid of books? There are circumstances - such as breaking events, and the Kennedy assassination is a good example - where live coverage is unparalleled in its ability to quickly bring you the story. But a newspaper is not a mechanical device, like all of the other things you mentioned. It is an instrument of information, and it brings you that information in ways other devices cannot. It not only has value, it has great value. That doesn't mean it will last. But it does mean one shouldn't be so quick to smugly dance on its grave, especially if its replacement simply isn't as good. Which, by the way, sounds very much like the current situation with digital and film capture :P
  15. I would suggest that the first important feature in making your movie should be a script, a cast, and at least one person who's done this before. A camera choice is the least of your issues.
  16. Not really. The Silicon Imaging camera, combined with the Cineform Raw compression codec, basically predated Redcode by at least 2 years. And while the data being compressed wasn't 4K, the approach was essentially identical - and the primary reason for the lower resolution was that Silicon Imaging was, by design, building a camera with available technology. Red decided to do their own compression codec for reasons known to them, but the notion of using wavelet compression techniques on the RAW data directly from the sensor, as well as the notion of recording that on commodity storage, was all done by SI and Cineform prior to Red's entry into the market. The D21 - basically an updated version of the D20 that has replaced the D20 in Arri's product line - can output either dual link HD video, or RAW data. The RAW data is 2880x2160 (the sensor is a 1.33:1 aspect ratio, similar to a film frame) and thus, by Red's own nomenclature, qualifies as a "3K" camera. Arri has chosen to output this only in uncompressed form, and it can be recorded by both S.two and Codex recording units. S.two has just released their OB1 recorder, a very small, camera mounted unit that uses removable flash packs. Since these devices - as well as the camera - are intended for delivery to what is primarily a rental based market, the purchase price is not as relevant as it is in the case of Red, which is intended for direct sale to the end users. None of this is intended to demean what Red has accomplished in any way. I'm just offering some background facts to provide context.
  17. You're not pushing in or enlarging anything. 4x3 is only a required deliverable for SD video release. Even if you create it from the HD video version, you're actually reducing the image, not enlarging it, because the HD frame is 6 times larger than the SD frame. It's basically a combination of creative cropping and scaling the image down. As for the HD deliverables, this all depends on the post path for your DI. Depending upon how this is done, you're likely to either be working directly with the 4K R3D files as the source and rendering 2K files for the film recorder, or you'll be working with DPX files in either 2K or 4K form (most likely 2K). In either case, if the film release is anamorphic, you'll be cropping the sides for the 16x9 version (if you're not delivering letterboxed 2.35), which will likely result in a slight blowup for the HD video version. If you're working in a 4K environment - or you're working directly from the R3D files - you're doing an image reduction similar to what I described above for SD video. It all depends on where in the process the final RGB files are rendered.
  18. I would modify that to say you have "effectively similar" range and control. It can't be the same because if you use in camera settings, you're working with the information prior to compression, at least on most cameras (not Red because it doesn't currently allow pre-compression settings by the user). In some cases, you're even working with the analog information prior to digitization, which allows for even more control due to an essentially unlimited number of value levels. If you use a post approach, you're working with the information post compression. How much of a difference that makes, particularly in the case of the current Red design, is questionable. But the fact that there is less information once the material is compressed is not arguable - and this is true even if you assume 12 bits internally and 12 bits externally (i.e., working with an image converted to linear light).
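The "fewer levels" argument above is just powers of two: each bit of depth doubles the number of discrete code values available for an adjustment to land on, which is why working earlier in the chain (or in the analog domain) leaves more room. A trivial illustration:

```python
# Bit depth vs. discrete code values: each additional bit doubles the
# number of levels an image value can take.

def code_values(bits: int) -> int:
    """Number of discrete levels representable at a given bit depth."""
    return 2 ** bits

for bits in (8, 10, 12):
    print(bits, "bits ->", code_values(bits), "levels")
# 8 bits -> 256, 10 bits -> 1024, 12 bits -> 4096
```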
  19. Perhaps in your eyes. I don't know your location, but in Los Angeles, Panavision is most definitely not looked at as a film only rental company. In fact, they are by far the largest supplier of video and digital production equipment, and just about everyone in town knows that.
  20. Just a suggestion: Rather than calling yourself a "Scratch Colorist," I think you'd be better off calling yourself a Colorist (or a Finishing Artist, or whatever you think of yourself as), and then listing operating a Scratch system as one of your skills. Any time I encounter someone who refers to themselves based on a single piece of gear, I think of that person as more technical than artistic, and that's not a good way to be viewed when you're a colorist. BTW, I tend to see those who want to sell themselves as "Final Cut Pro Editors" the same way. The attraction is in the talent and the skill, not in the piece of gear that happens to be in front of you at the moment. And yes, I realize that you're not necessarily looking to leave your current employer. But if you're going to present yourself to the larger community, it should be as an artist, not as a technician limited to one piece of software. Like I said, just a suggestion.
  21. If all networks, cable operators, and the converter implementors would adopt the AFD (Active Format Description, an optional part of the ATSC broadcast standard which defines how an image should be displayed), as NBC already has, this would not be an issue. Perhaps that can still happen before June. Perhaps not.
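For the curious, AFD is just a 4-bit code carried with the video that tells the receiver how the active image sits in the coded frame. Here's an illustrative subset of the codes as defined in SMPTE 2016-1 / ATSC - listed from memory, so check the spec before relying on them:

```python
# Illustrative subset of AFD (Active Format Description) codes.
# Descriptions paraphrased from SMPTE 2016-1 / ATSC from memory;
# verify against the published standard before using in earnest.
AFD_CODES = {
    0b1000: "full frame (active image matches the coded frame)",
    0b1001: "4:3 image, centered in the coded frame",
    0b1010: "16:9 image, centered in the coded frame",
    0b1011: "14:9 image, centered in the coded frame",
}

def describe_afd(code: int) -> str:
    """Return a human-readable description of an AFD code."""
    return AFD_CODES.get(code, "reserved or unlisted code")

print(describe_afd(0b1000))
```

With codes like these, a converter box knows whether to pillarbox, letterbox, or pass the frame through unchanged, instead of guessing.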
  22. 1. While I understand your personal pride in doing this, you should know that you're certainly not the first. Creative Bridge built their Mobile Digital Lab almost 4 years ago, and it was used on numerous productions that utilized varied equipment (Viper, F900, and others). It proved to be too expensive and of limited utility to most productions, and I don't know if it's still being used at all. I would suggest you talk to Brian Gaffney (I believe he's at Technicolor these days) for information on what they learned. 2. Posting Red projects is not rocket science. Those who can't figure it out don't necessarily need to have the entire editorial department brought to the location, they just need a post supervisor who can put together a sensible plan. I understand your very eloquent pitch, but the truth is that as things become more common, they become more accepted and supported. I suspect that's what's happening with Red right now, and those that haven't yet participated in Red projects will be before long. So whatever ignorance there still might be (and I don't think there's much) about ways of handling it is likely to evaporate in a reasonably short amount of time. Besides, most productions leaning towards using Red are doing it primarily for financial reasons, so an elaborate vehicle housing the editorial and file processing departments would stand a fairly good chance of being seen as an extravagant, unnecessary expense given that all that's really required is a Macintosh or two. Just my opinion.
  23. Probably the best low light camera in the world right now is the Sony F23. Superb latitude, and an astonishingly low amount of noise. Certainly not the cheapest, but the best, by a considerable margin.
  24. Neither HDCam nor DVCPro is a "broadcast standard." They are videotape recording formats that have nothing to do with broadcast, other than the fact that some - and I emphasize, some - programs are either recorded or delivered on them. However, in terms of prime time programming, the great majority of prime time programming is delivered on HDCam SR, a completely different format than HDCam and one that does not have any of the limitations you're talking about. Many of these programs originate on either film or HDCam SR tape (anything shot with a Genesis or an F23 is likely in this category). They never incur HDCam compression anywhere in their post chain, nor do they incur it when the delivery masters are created. This is just not the case, as explained above. Sorry to question your assertion, but it's simply incorrect. Compression does not affect the pixel size of the original material, although it does of course eliminate a certain amount of information by definition. If you take a 1920x1080 image into Photoshop, and export it as a JPEG image, it's still 1920x1080. Anything else is determined by specific settings in the encoder. There is certainly loss, and there is certainly what appears as noise added - as well as motion artifacts and various other very nasty things - but the image size is not changed. And, once again, HDCam isn't a part of this equation.
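The dimensions point is easy to demonstrate. The sketch below uses zlib, which is lossless (unlike JPEG), but the principle is identical: compression shrinks the stored byte count, not the pixel grid, and the decoded raster comes back at the original width x height:

```python
import zlib

# Compression changes how many bytes are stored, not the pixel dimensions.
# (zlib is lossless, unlike JPEG, but the dimensional point is the same.)
WIDTH, HEIGHT = 1920, 1080
raw = bytes(WIDTH * HEIGHT)        # one flat 8-bit "frame" of black pixels

packed = zlib.compress(raw)        # far fewer bytes on "tape"
restored = zlib.decompress(packed)

print(len(packed) < len(raw))                  # True: smaller stored size
print(len(restored) == WIDTH * HEIGHT)         # True: same pixel count back
```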
  25. Law and Order is currently shot on the Panavision Genesis.