
Michael Most

Basic Member
  • Posts: 765

Everything posted by Michael Most

  1. The additional caveat is that time code becomes useless. Of course, if you never plan on going back for a retransfer, or doing an online conform, I guess that's not a limitation. There is also the issue of image quality if you happen to be using a Cintel telecine - which you just might be doing if you're trying to transfer to standard def and do it cheap. Flying spot telecines are quieter and have generally better performance at 24fps than they do at 30. This is less of an issue with the newer machines, such as the Ursa Diamond and C-Reality models, and something of a non-issue with CCD telecines like the Spirit and Shadow.
  2. If you have time code coming out of the telecine room, and can insert VITC into the SD video stream, any AJA or Blackmagic card will remove the 3:2 pulldown based on the now-standard convention of "A" frames on time codes ending in 0 and 5. However, the highest quality and simplest way to do this is to run the telecine in 1080/24p HD. Send it to a Mac with a Kona 3 card and let it do the downconversion and write to the disk array using whatever codec the client requests. That eliminates any frame rate changes completely, and gives you some quality advantages by using an HD source. Of course, you haven't said how you plan to control the Mac, and since there's no VTR emulation software available to facilitate that, you might have some issues.
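     A minimal sketch of why that convention makes pulldown removal deterministic, assuming a clean, unbroken 2:3:2:3 cadence locked to non-drop-frame 30fps time code (the function and cadence table are illustrative, not any card vendor's actual API):

        FIELD_PAIRS = ["A/A", "B/B", "B/C", "C/D", "D/D"]  # fields carried by each video frame

        def pulldown_position(tc_frames):
            # With the "A" frame pinned to time codes ending in 0 and 5, a video
            # frame's place in the 5-frame pulldown cycle is its frame count mod 5.
            return FIELD_PAIRS[tc_frames % 5]

        # The frame at 01:00:00:07 sits at 7 % 5 = 2, so it carries one B field and
        # one C field and must be split to recover the original film frames.
        print(pulldown_position(7))   # -> "B/C"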
  3. Nobody - not Bono, nor anyone else - will price it that way if what's being transferred is select shots, which requires cuing and editing each shot individually, with time codes matched to the original DVCam transfer. In fact, I'm not at all sure that Bono will even do a time coded transfer, let alone one with slaved time code. Not to mention that you must include handles in any select transfer, which adds to the footage total considerably. 100 minutes of edited final picture does not mean exactly 3600 feet of transfer (that's the footage total for 16mm, @36 feet per minute). Don't believe what you read on an Internet forum. Call some reputable labs (you're already working with one in Post Works) and get an estimate. And be honest about what you're going to need, what information you're going to supply to them, how you're going to supply it, and what your expectations are, in terms of both quality and turnaround. Think about what your final product is actually going to be and how it's going to get that way - in other words, if your final delivery is going to be in HD, exactly what format will that be? How will you create a delivery element, such as an HDCam or HDCam SR tape? In other words, what is worth doing yourself and what isn't, especially given that involving the company doing your film transfer in other steps of post production will likely get you more favorable pricing (not to mention highly skilled talent in areas like conforming and color correction)? Think like a businessman or a producer rather than an individual.
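     To make the footage arithmetic concrete, a quick back-of-the-envelope calculation (the shot count and handle length are invented for illustration):

        FEET_PER_MINUTE_16MM = 36.0            # 16mm at 24fps

        cut_minutes = 100                      # the edited final picture
        shots = 800                            # hypothetical number of selects
        handle_seconds = 2                     # per side, so 4 extra seconds per shot

        cut_feet = cut_minutes * FEET_PER_MINUTE_16MM
        handle_feet = shots * (2 * handle_seconds / 60.0) * FEET_PER_MINUTE_16MM

        print(cut_feet)                        # 3600.0 - the cut alone
        print(cut_feet + handle_feet)          # 5520.0 - what actually gets transferred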
  4. I think you have some inaccurate ideas about how this works, at least in Los Angeles based production. Food is provided in the form of craft service throughout the day, but the only thing that's really "expected" of craft service is what's normally provided in many industries, and that's coffee, drinks, and snacks. If you're talking about catering, that is something that is normally provided only on location (with a reduced lunch break) or in situations where food is not directly available without leaving the studio, such as in some warehouse stage situations. For shows that live on studio lots, catering is not normally provided except on specific occasions at the discretion of the producer - for instance, a crew screening of a television show episode, or perhaps a late call. The only time mileage is provided in Los Angeles is when the location is outside of the 30 mile studio zone.
  5. For some facilities (not ours, BTW), one major point is that the conform can basically be done with video editing equipment rather than a DI system. This keeps it in a less expensive environment for a longer period. It also allows for simple and reliable storage without needing terabytes worth of drives, and with real time retrieval, unlike data tape formats such as LTO3. Of course, there are ways to do 4:4:4 DI work without incurring any compression of any type, but it requires some thought as to how to do that.
  6. You really should look at Spectsoft RaveHD. It's a very simple matter to change time code on a Quicktime file within Final Cut. If you have a known time code ID frame (a punch is the most obvious), select the file in the Final Cut browser and pull down "Modify->Timecode." Go to the punch frame, tell it the proper time code number and format, and identify it as "this frame." That's it. The file on disk is restamped with the proper time code, and it will now match whatever your Flex or ALE file has. We check any files we record wild and correct accordingly before we send them out. You have to prevent clients from hurting themselves....
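     The arithmetic behind "this frame" restamping is worth seeing once, since it's what any such tool is doing under the hood. A sketch assuming non-drop-frame counting (the punch position and time code are made up):

        def tc_to_frames(tc, fps):
            h, m, s, f = (int(x) for x in tc.split(":"))
            return ((h * 60 + m) * 60 + s) * fps + f

        def frames_to_tc(total, fps):
            f = total % fps; total //= fps
            s = total % 60; total //= 60
            return f"{total // 60:02d}:{total % 60:02d}:{s:02d}:{f:02d}"

        fps = 24
        punch_offset = 143                              # punch is the 144th frame of the file
        punch = tc_to_frames("01:02:03:10", fps)        # what the Flex/ALE says it should be
        print(frames_to_tc(punch - punch_offset, fps))  # -> 01:01:57:11, stamped on frame 0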
  7. Not in the way you probably think. The only way to control Virtual VTR as a recorder is to give it "hard record" commands. It will not synthesize time code during an edit preroll, as a true VTR emulator such as Rave HD, Clipster, or Drastic QuickClip will. You can get full transport control via RS422 for playback, but not for record. In our case, we use a simple MIDI footswitch controller to "kick it" into record. One of the problems with doing VTR emulation on a Mac is its almost total reliance on Quicktime. Most of the PC based devices I mentioned record file sequences, DPX being the most common. They then transcode to movie formats. This makes it possible to do "editing", since you're basically overwriting existing files while leaving the existing frames outside of your edit range intact. Movie formats do not allow editing - the entire file must be overwritten. Nonlinear systems, of course, get around this because you're always doing "virtual" timeline playback. You're not "recording" anything, so you're not trying to change your existing source files. The only hardware I know of for the Mac that even allows sequential file recording is the Kona 3, and even with that card, it's not talked about very much. Unfortunately, what most clients really want is Quicktime files, so the only way to satisfy that is by either "hard recording", or by digitizing from some other medium (tape, or perhaps one of the PC based devices I mentioned). Unless and until someone makes a "VTR emulator" front end for the Kona 3 in DPX sequential mode, that won't change. And even if such software were available, it would still require a transcode to make self-contained Quicktime files.
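     A sketch of the difference, using a hypothetical frame_%07d.dpx naming scheme: an insert edit on a file sequence only rewrites the frames inside the edit range, which is precisely what a movie container cannot do.

        import shutil

        def insert_edit(seq_dir, new_frames, in_point):
            # Overwrite frames [in_point, in_point + len(new_frames)) of an
            # existing DPX sequence; every frame outside the range is untouched.
            for i, src in enumerate(new_frames):
                dst = f"{seq_dir}/frame_{in_point + i:07d}.dpx"
                shutil.copyfile(src, dst)

        # A Quicktime movie offers no per-frame files to swap: the same edit means
        # rewriting the whole container, hence "hard record" or bust.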
  8. The 35mm version of this stock (5229) is used on a number of network dramas. Boston Legal is one I specifically know about, but there are others. The Kodak box is not used in the transfer of any of them, at least to my knowledge.
  9. For the type of material you're indicating you're going to shoot, you should probably be worried about the fact that the only available recording medium for Red at the moment is Compact Flash, and that the largest one you can currently use will only record about 4-5 minutes. You might be better off with another shooting/recording solution that's a bit less restrictive in that regard if you want to keep the camera rolling longer than that. Of course, as with all things Red, any of what I just said could change in the next 10 minutes. Or the next 10 weeks. Or months. I guess that would be another thing you should be aware of....
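     For what it's worth, the rough arithmetic behind that figure (the card size and sustained data rate below are assumptions for illustration, not published specs):

        card_mb = 8 * 1000                    # assumed 8GB Compact Flash card
        data_rate_mb_s = 28.0                 # assumed sustained Redcode Raw rate

        print(card_mb / data_rate_mb_s / 60)  # ~4.8 minutes per card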
  10. It was a lower budget. About $15 million as I recall, but not all of that was production. The VFX budget, for instance, went from about $1 million for Paul's version to over $5 million for Renny's. As much as I personally happen to like and respect Paul Schrader - and I do (he's one of the smartest men I've ever met) - I wouldn't necessarily disagree with you on either count (hope Paul's not reading this.....). Although as I mentioned before, the "lazy" feeling you mention was largely by design, a kind of against-the-grain approach to the horror genre's "normal" conventions. So though it was intentional, whether it works or not is another issue entirely.
  11. John took ill shortly after the production moved to Rome (following the Morocco location work) and did not really finish the picture, although all of the sets were his designs. In a word, yes. In fact, the dailies, done at Technicolor Rome with Vittorio's timer providing some degree of supervision, at least for the PAL version we looked at each night, had considerably more contrast, to my recollection. As did the video tap during shooting.
  12. Well, as David knows, I'm pretty knowledgeable about the Schrader picture (I was the original VFX supervisor), a bit less so about the Harlin version (Ariel Shaw did that one, but we're friends, so I'm pretty familiar with what went on). For the Schrader movie (released as "Dominion," although the original working title was "Exorcist: The Beginning"), the scene pictured was indeed shot on location in Morocco. All of the Harlin picture was shot in Rome, primarily on stages. The framing (intended to be, as David mentioned, 2:1) and timing of "Dominion" were done by Morgan Creek without Vittorio's input (at least to my knowledge, and I would have been told) via a physical assembly of the 3 perf negative and a telecine transfer to 24p HD video. The film release was a tape to film transfer from the video master. Timing differences are, I think, fairly indicative of the value of having the original creative team in place for that step, particularly when dealing with a very stylistic visualist like Vittorio. The Harlin picture was finished via a DI done at Technicolor in Burbank with Vittorio's supervision, and thus the harsher contrast and less reddish tone of the scene is not surprising, at least to me. It is also indicative of the tendency of video colorists to attempt to hold all detail in the image, even at the expense of a flatter image - where most good DI colorists, having worked more directly with cinematographers as well as working in a darker room with a larger screen, tend not to be afraid of contrast and even the possible loss of detail which might not be important to the overall image. As far as shooting style, Paul wanted his movie to have a period picture feel, so there was generally less camera movement and wider lenses. This is particularly noticeable in the location footage, which has a number of "epic" type compositions, and long, flowing crane moves. The color palette was something that was worked up by Paul and Vittorio in preproduction, and was very specific in its progression, especially towards the end of the picture. I don't think the final transfer really plays it quite as intended, though, at least not the copies I've seen.
  13. I'd rather not, but I will say it's an independent production currently shooting in Ohio.
  14. John, you know there's a lot more to it than that. I do have my head around it, we are finishing a feature shooting on Red, we are already doing DI and film out tests (and pretty much have it nailed), and we have a DI theater built around Scratch that feeds from a large SAN, just like they do. But on the front end, you currently have to convert each shot manually (Redcine doesn't exist yet unless you're one of a very chosen few), you have to convert to some other compressor for offline (because you can't play a timeline with the QT wrappers, even though you can play the clips themselves), you have to manually enter the reel number (because it doesn't export properly), and you have to have numerous terabytes of storage. I know at this point, I sure wouldn't even think of doing a television series - or anything with a real turnaround issue - with it. And I wouldn't trust that most experienced editors and assistants would want to go through the extra steps required when compared with the common film workflow of: import ALE, digitize dailies, Done.
  15. All of these devices have their own preferred partner products. If you use Red, you basically are tied into Final Cut because Red is "in bed" with Apple. If you use SI, you are basically tied into Premiere Pro on a PC because that's what Cineform is tightly aligned with. If you use that tool, the workflow is straightforward for a "do-it-yourselfer," but obviously the product you create is limited in terms of format and quality to what you accomplish on a desktop. If you want professional level finishing, you'll have to export to other formats, such as DPX. If you want details, go to the SI website. It's all explained there. Because these manufacturers have their preferred partners, the flexibility that we have always had in the post world is becoming a thing of the past, unless you just want to make things difficult for yourself. You can try to make Red files go into an Avid workflow, but you'll encounter a lot of difficulty and you won't get any real support from Red. Conversely, you can try to cut an SI project on Final Cut, but.... well, you know the drill. Will this change? Who knows? Right now, that's just the way it is.
  16. The "Red post infrastructure" currently does not exist. The Red software is basically in a late alpha/early beta stage. Any projects currently using Red are either being posted by jumping through a number of hoops, converting to some other format, or not being posted at all. The Apple based post flow is highly incomplete, the Redcine software is still not available, and any other post flow (Avid, for instance) does not exist. The one path that does somewhat work is conforming and color correcting using Assimilate Scratch. If you really want to use a Red now - today - 11/13/07 - you're going to find any post production to be rough going. That could change next week. It could also change next month. Or next year. That's the reality of the world Red has created. The images you capture are indeed very, very good, but actually doing something with them at the moment is another matter entirely. Bottom line: If you want to complete a project right now, or just want to start something knowing going in how you're going to finish it, Red is not your best choice. SI and all HD cameras already have working post flows that are supported and will get you a finished product, whatever that product needs to be.
  17. Perhaps due to my status as a lowly IA crewmember I have a rather different view on this, but.... Rant mode on. I've never understood why it should be assumed that once paid for a job, one should continue to be paid forever. The entire concept of residuals is based on two assumptions. First, that the creative contribution of actors, writers, and directors is significantly more extensive than anyone else connected with the production, and second, that because none of these individuals work full time, they are paid in a system that gets them income even when they're not working. My problems with both of these assumptions are many, but they start with one premise: Neither the actors, the writers, nor the directors are putting up any of the money it takes to create the production in the first place. They are, in fact, assuming no financial risk. The studios, on the other hand, are putting up all of the money and taking essentially all of the risk. In our society, those who take the risk are entitled to the reward. Those who don't, aren't. And it really is as simple as that. The actors, writers, and directors somehow garnered enough power years ago to create what amounts to a profit sharing situation that exists in no other industry I know of. And they perpetuate it on the notion that "we're all in this together." Except that we're not. Now, while it's true that residuals are actually paid to the IA locals, these residuals go directly into the health plan. While that's extremely valuable, there are other ways to fund the health plan directly, and the fact is that WGA, SAG, and DGA members get their residuals put into their pockets - and still maintain a health plan. We don't. I really fail to see why companies have any sort of responsibility to share the profits they make with employees who assume no financial risk in the manufacture of the goods the companies make. It's certainly good business to do so, as it encourages employees to put their best effort into making the products better, as it will benefit them directly if it results in more sales. But can anyone here imagine, say, the designer of a particular car model in the auto industry getting paid an additional fee - over and above the salary they've already been paid - for every car that's sold? Or the designers of, say, Windows Vista getting paid additionally for every copy of Vista that's sold? That's madness. Everyone who works for someone else gets paid for the work they do when they do it, and that very much includes actors, writers, and directors. In fact, they get paid quite handsomely for the work they do. I've never understood why they should continue to get paid for it til the end of time, especially when the additional value afforded by, say, a DVD release is done without any additional help from them - short of perhaps a commentary track that they also get paid directly for. The notion that when the company makes a profit, the writers, actors and directors should also make a profit is, at its heart, sheer lunacy. When the writer, actor, or director takes a financial risk, they are entitled to a reward. If one of those individuals makes a specific deal to forego a higher original salary in favor of back end points, that is one of those situations. But in a "normal" working situation, the individuals are paid for the work they do at the time they do it. That should be enough, as it is enough in every other business relationship I know of. Except in the film business. Rant mode off.
  18. Unfortunately, these days there aren't a whole lot of Fuji tech reps. While I agree that this could potentially be a scanning issue, in my experience this usually isn't the case. While there are certainly differences in negative emulsions, those differences are manifested primarily during exposure, i.e., the way they capture light. Once the negative is developed, those characteristics are "baked in," but the actual densities presented to a scanner are relatively the same. It's not like the difference between, say, the "normal" Kodak Vision stocks and their Primetime stock, in which one was "normal" and the other had no anti-halation backing and a completely different density characteristic. So they should check the scanner setup, but my guess is that any custom calibration of the scanner is going to be minor and not related to what's being talked about here. It should also be pointed out that although I don't know what type of scanner was being used, most modern scanners are self-calibrating - that is, they look at Dmin (usually by looking at a frame line), measure it, and calibrate themselves accordingly. In some scanners, the calibration is based on a specific exposure curve supplied by the stock manufacturer or the scanner manufacturer, so in that case, they should be using a Fuji characteristic curve. If they're not, that could potentially be one problem source. This is not usually the case with telecines, which are usually "hand calibrated" by the colorist and thus have far less scientific basis for their setup.
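     A toy sketch of the Dmin idea: sample the frame line (unexposed base), treat its level as the floor, and offset the scan so every stock starts from the same place. Real scanners work in density and fold in a manufacturer curve where one is supplied; this shows only the shape of the operation.

        def subtract_dmin(scan_rows, frame_line_row):
            # The frame line is clear base, so its level is a usable Dmin sample.
            dmin = min(scan_rows[frame_line_row])
            return [[max(v - dmin, 0) for v in row] for row in scan_rows]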
  19. It's highly unlikely that the DI facility is using the "wrong LUT." In almost all cases, the only LUT that's applied in a film originated DI is the print preview LUT, which is applied at the end of the processing chain (or, in many cases, in a separate piece of hardware prior to being sent to the projector - this is usually how Truelight works). Its purpose is to simulate what the log image being presented to it will look like when recorded to a particular intermediate stock and printed to a particular print stock. The source image is not what is being evaluated by the LUT; the LUT is only there to serve as a substitute for the film printing process. Now, to be fair, in some color management systems - specifically Truelight - the characteristics of the camera negative stock are part of the calibration. But it's a very, very minor component - it has to be, because most productions wind up using multiple camera stocks. The more significant components are the projector/screen combination, the film recorder, the intermediate stock (or the negative stock, if you're recording to a camera stock), and perhaps most significantly, the print stock. In Lustre, as in a number of other systems, one normally doesn't do an overall desaturation when trying to create a specific "look." You would usually use either basic 6 vector secondary correction or color curves to do this. Lustre has both of those tools, and should be able to do what you're talking about pretty effectively. If it isn't, it's probably not the equipment that's the problem ;-) One other way to do what you seem to be trying to do is to linearize the scans on input to the Lustre, grade in linear space, then convert back to log on output (and view it through their "standard" print preview LUT). What this will do is present the color corrector with more saturation and thus more control of its manipulation, albeit at the expense of a bit of detail and color control in the lower midrange. But for isolations, it might be a better source for the secondary or curve controls you'd likely use to achieve that. It will also likely add some noise, so if you try this, be aware that there's a downside. Read what I said earlier. This would have to do with accurate print simulation, not color correction. My guess is that a different print stock was being used than the one they were calibrated for. In other words, they were printing on Fuji when the LUT had been derived from Kodak prints.
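     The linearize-grade-relog trick in miniature, using the classic 10-bit Cineon conversion (reference white 685, reference black 95, 0.6 negative gamma). This is the generic published formula, not Lustre's internal math; the steepness of this curve in the shadows is exactly where the added noise mentioned above comes from.

        REF_WHITE, REF_BLACK, NEG_GAMMA = 685, 95, 0.6

        def cineon_to_linear(code):
            # 10-bit log code value -> scene linear, normalized so that
            # reference black maps to 0.0 and reference white to 1.0.
            soft = 10 ** ((REF_BLACK - REF_WHITE) * 0.002 / NEG_GAMMA)
            lin = 10 ** ((code - REF_WHITE) * 0.002 / NEG_GAMMA)
            return (lin - soft) / (1 - soft)

        print(cineon_to_linear(685))   # 1.0
        print(cineon_to_linear(95))    # 0.0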
  20. Well, to some degree you've answered your own questions right there. You're in way over your head here, and you also don't have equipment that is designed for or capable of doing what you want to do. I really don't know why people think they can do anything with anything. Post houses generally don't do any conforming of DPX sequences on Macintoshes because Apple really only supports Quicktime, and no third party (with the notable exception of Iridas) has written software to get around that in a professional manner while maintaining things like time code headers. Essentially all current DI systems are based on either Windows or Linux, and that's done for a number of good reasons. If you're going to work on a Macintosh, you really ought to forget about working with DPXs. Convert everything to Quicktime files and move on from there. The render time is irrelevant, and quite frankly, the price you have to pay for do-it-yourself work, especially on what is essentially low end desktop equipment. Both AJA and Blackmagic make utilities to do this, and Gluetools is another possibility - although you seem to not really have enough knowledge of how DPX sequences are handled or colorspace to use it effectively. What are you planning to do with whatever your final product is here, anyway? If you're planning to record to film, you need far more knowledge of lookup tables than a simple answer to "what Cineon settings do I use for an iMac" - not to mention proper monitoring in order to see any kind of reasonable facsimile of what your output is going to look like. And if you're planning to just make DVDs or something similar, why are you even messing around with DPXs in the first place? High end capabilities don't come for free, regardless of what Internet forums might say.
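     As an aside, "maintaining things like time code headers" is concrete and small: SMPTE 268M puts the time code at byte 1920 of a DPX file, at the start of the television-information header, as packed BCD. A reader sketch with error handling omitted:

        import struct

        def dpx_timecode(path):
            with open(path, "rb") as f:
                magic = f.read(4)
                endian = ">" if magic == b"SDPX" else "<"   # b"XPDS" means little-endian
                f.seek(1920)                                # television-information header
                raw = struct.unpack(endian + "I", f.read(4))[0]
                digits = f"{raw:08x}"                       # packed BCD reads as hhmmssff
                return ":".join(digits[i:i + 2] for i in range(0, 8, 2))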
  21. Very short answer. No. Slightly longer answer: If you're not going to be going back and retransferring film, you don't actually need specific time code on these files. In that case, you could use Final Cut itself, starting your recording with "Capture Now" and ending it at the end of each shot or each lab roll. You could also use Virtual VTR in combination with a MIDI controller and go into a "hard record" mode to capture in a similar manner. You can always assign or change the time code after transferring by identifying a particular frame and modifying the time code to be what you want. Of course, if you're transferring in NTSC (i.e., you have 3:2 pulldown), which you would be if you're in the US or another NTSC country, you wouldn't be able to force your pulldown cadence without an actual edit controller - so you'll likely have to go to tape and then digitize, which is what the vast majority of facilities currently do for Macintosh based files. There are solutions on the PC side of the world - Drastic Technologies products and DVS's Clipster and Pronto lines are Windows based, and Rave HD is Linux based, and all of them can act as VTRs under DaVinci's TLC control.
  22. Having had the pleasure of working with all of the above cameramen (I was on "Ally McBeal" with Billy - still one of my closest friends - for all 5 seasons as Visual Effects Supervisor, and with Brian on both "Civil Wars" and "NYPD" for its first 3 seasons as final colorist), I strongly concur. The toughest part of doing a series, as David has noted, is keeping up your energy for as long as 10 months straight. There are very few features that go that long. Doing a series is like running a marathon, and I consider it rather miraculous that the names you mentioned, along with numerous others, are able to deliver the kind of vision and talent over the course of an entire season that they do. Anyone who's worked on the crew of a series knows just how tough that is - for everyone, but particularly for those with ultimate responsibility. I'm glad you mentioned David Boyd - I did a number of episodes of Without A Trace with David, and instantly developed both respect and admiration for him (not to mention that I just really like the guy!). He's not only talented - that's obvious - but he might be the "happiest" cameraman I've ever met, someone who genuinely loves what he does and brings that love to the set every day - and it's infectious.
  23. An S.two doesn't record data. It records DPX files from a video stream, such as dual link HD-SDI coming out of a Viper. In order to record from a Red, the Red would have to provide a video stream. It doesn't do that, in part because that would require a full quality debayer in real time. Even when Red enables the HD-SDI port, it likely still won't have a full quality real time debayer. If you're going to shoot with a Red, you're committing to recording Bayer data, most likely compressed with Redcode Raw. That's the "normal" working mode of the camera, the same way that dual link HD-SDI recording to an SRW1 is the "normal" working mode of a Genesis. Geez, John. Don't assistant editors know how to sync anymore? To sync dailies today is a no brainer, and it doesn't even require digitizing the sound to do it in most cases. Drag in a bunch of WAV files from the DVD-RAM, match up sticks, and you're done. Popping the tracks takes about 10 minutes. Synching is instantaneous. If you're shooting with Reds, you're going to have to do some kind of file translation for the picture, just to create something to cut with. So you're going through a dailies process, whether a facility does it for you or you do it yourself. I just don't see synching sound as a "killer." In fact, I don't know how recording sound on the camera is going to help you if the files need to be converted anyway.
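     A sketch of why the sound side lines up so quickly when the recorder writes broadcast WAVs: the bext chunk carries TimeReference (samples since midnight), which converts directly to a time code for matching against picture. The chunk walk is simplified, and the 48kHz rate is assumed here rather than read from the fmt chunk:

        import struct

        def bwf_start_tc(path, fps=24):
            with open(path, "rb") as f:
                f.seek(12)                                # skip RIFF/WAVE header
                while True:
                    cid, size = struct.unpack("<4sI", f.read(8))
                    if cid == b"bext":
                        data = f.read(size)
                        time_ref = struct.unpack("<Q", data[338:346])[0]
                        total = time_ref / 48000.0        # assumed sample rate
                        s = int(total)
                        frames = int((total - s) * fps)
                        return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{frames:02d}"
                    f.seek(size + (size & 1), 1)          # chunks are word aligned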
  24. Not true. The ASC has members - a number of them - who have never shot a feature in their career. Many of those members come from television, and a number of others come from commercials, and it is their work in these mediums that was used as part of the criteria for membership.
  25. "Rendering a Digital Cinema compliant 2K" means color correcting in a DCI compliant theater, syncing picture and sound files in "reels," converting to XYZ color space, compressing using JPEG2000 compression, wrapping both picture and sound in compliant MXF wrappers, possibly encrypting the entire package, playing it back in a DCI compliant theater to check for any problems, and possibly creating and supplying keys for the theaters to decrypt and play back the material. It's not simple and it takes properly set up monitoring and environment if it's going to have any relevance to what's going to be eventually seen in a theater. It's a specialty deliverable that is not intended to be done by the do-it-yourselfer. If you want to color correct this piece yourself, and you have a qualified colorist and proper monitoring and environment (and a Panasonic monitor, particularly an LCD, is not "proper monitoring," especially for something that's going to be theatrically projected), then I guess you could. But what you do will have little relevance to DCI compliant digital cinema unless you bring it into a DCI compliant projection environment for final color correction. As far as making the DCP, most facilities that do this (and there aren't very many at the moment) will charge you a price based on the level of complication your elements present - meaning that if the facility does the DI, the DCP will be priced lower than if you bring in the elements. That's because in doing the DI, the piece is already being put into compatible files, and the color correction is being done in a proper venue. All of the elements are basically prepped for DCP conversion. With client supplied elements, there are many issues - color correction is only one. There are numerous others, based on what is supplied. The price will change depending upon how many manipulations need to be done and how long that will take. If you really want to project in a theater, and you really want to do this yourself, you'd be much better off forgetting about digital cinema packages, and instead putting the finished piece on HDCam tape. Then you can find a theater that will project from that tape, possibly renting an inexpensive playback deck like a JH3 to do this.