Everything posted by Tenolian Bell

  1. On the Mac you can choose between Avid and FCP.
  2. OK, I see what you are saying. It appears this control is in the three-way color corrector. Under the color wheels is a section called Limit Effect. Under Limit Effect is a Color Range bar with four handles and two rubber bands. Under that are two separate gray scale bars with handles and rubber bands: one is the saturation control, the other the luminance control. The Color Range control limits or expands the range of color within a given color grade; one set of handles/rubber bands works from green to navy blue, the other from green to violet. The saturation control has four handles and two rubber bands that work with color saturation across the gray scale: one set deals with black to gray, the other with white to gray. The luminance control has two handles and one rubber band that expands or limits contrast across the gray scale from black to white. It sounds as if that fits what you are talking about.
I can see this control being of use within the camera, as it gives you a choice over the limited video dynamic range. But after the signal has gone to tape you have little control over dynamic range and gray scale in post. I've shot the Varicam a few times and have done a tape-to-tape color grade on a da Vinci. There was a little room in the shadows, and pretty much nothing usable in the highlights. I would imagine unprocessed HD such as the Viper being the exception; if I shot that format I doubt I would be finishing my final color correction on a desktop NLE.
Are the monitor and timeline separate elements to be moved independently, or do you have menu options that change the look of the interface? That's right, I forgot about that. I have a friend who used to use Vegas for sound mixing before Sony bought it. So what do you use for compression or encoding and DVD authoring? Yeah, it's great to have choices.
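To make the Limit Effect mechanics concrete, here is a rough Python sketch of how an HSL qualifier of this kind can work (my own toy code with made-up helper names, not FCP's actual implementation): the handles define in/out ranges on hue, saturation, and luminance, and a correction is applied only where a pixel passes all three.

    # Toy HSL qualifier in the spirit of Limit Effect (illustrative only).
    import colorsys

    def qualifier_mask(rgb, hue_range, sat_range, lum_range):
        """1.0 if the pixel falls inside all three ranges, else 0.0."""
        h, l, s = colorsys.rgb_to_hls(*rgb)   # all components in 0..1
        in_hue = hue_range[0] <= h <= hue_range[1]
        in_sat = sat_range[0] <= s <= sat_range[1]
        in_lum = lum_range[0] <= l <= lum_range[1]
        # The "rubber bands" would soften these hard edges with a falloff
        # ramp; a hard threshold keeps the sketch short.
        return 1.0 if (in_hue and in_sat and in_lum) else 0.0

    def limited_correction(rgb, correct, hue_range, sat_range, lum_range):
        """Apply `correct` only where the qualifier mask is on."""
        m = qualifier_mask(rgb, hue_range, sat_range, lum_range)
        return tuple(m * c + (1.0 - m) * o for c, o in zip(correct(rgb), rgb))

    # Example: warm up only midtone, low-to-moderate saturation pixels.
    warmer = lambda rgb: (min(rgb[0] * 1.08, 1.0), rgb[1], rgb[2] * 0.95)
    print(limited_correction((0.55, 0.42, 0.38), warmer,
                             (0.0, 0.12), (0.05, 0.5), (0.2, 0.8)))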
  3. A couple of complaints I've heard about Vegas are its poor media management and its source and preview monitors being combined, as well as its monitor being on the bottom of the screen and the timeline on the top. Last I heard, the only version of HD Vegas can edit is HDV; no DVCPro HD, XDCAM HD, or HDCAM. Also, FCP comes with Soundtrack Pro for editing and mixing audio, DVD Studio Pro for DVD authoring, Motion for graphics, and Compressor for compression. What are the Vegas equivalents?
  4. Final Cut Pro 5 has six sections to its color corrector: a broadcast safe corrector that you use with the waveform monitor and vectorscope; a two-way color corrector that deals with overall color balance and hue, as well as general white, mid, and black levels; a three-way color corrector that controls color and hue between white, mid, and black levels; a Highlight Color Desaturation filter, which has sliders and numerical indicators; a Low Level Color Desaturation filter; and an RGB Balance for control over highlights, mids, and blacks within each color channel.
There is no slider labeled gamma in any of these color correction tools. I'm not totally sure how you are defining gamma; as far as I know it's the overall picture contrast. It offers a great deal of control: you can lift the black level without affecting the midtone or highlight levels, you can affect the color of white, mid, and black separately, and you can affect the white, mid, and black of each separate color.
Are you saying they totally switched from FCP to Vegas? I'm asking because that would require totally switching from the Macintosh platform to Windows. A person would have to give up the investment in their current Macintosh hardware and software to reinvest in all new PC hardware and software. That's a lot to go through to switch to Vegas. Film schools generally spend a lot of time, money, and effort setting up their editing workflow, and what they decide is what they are stuck with; film schools usually don't have the money to make a major switch like that.
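Since "gamma" keeps coming up: in grading it usually means the midtone transfer, applied as a power function so black and white stay anchored while the mids move. A minimal sketch of the common lift/gamma/gain convention (my own illustration, not FCP's documented internals):

    # Lift / gamma / gain on a 0..1 signal (common grading convention).
    def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
        x = x * (1.0 - lift) + lift       # lift raises the blacks
        x = x * gain                      # gain scales the whites
        x = max(x, 0.0) ** (1.0 / gamma)  # gamma bends the midtones
        return min(max(x, 0.0), 1.0)

    # Raising gamma lifts the mids while 0.0 and 1.0 stay put:
    for v in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(v, round(lift_gamma_gain(v, gamma=1.2), 3))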
  5. I had no idea there were such people... nor have I ever heard anyone say this... which would answer many questions.
  6. Please, let's not get this started. Just pick the one you feel most comfortable with and use it.
  7. The colored lights are a style from Alias; JJ Abrams, the creator of Alias, brought that style into MI 3. I would say the story was simple, made more sense, and was easier to follow than the first MI. The action was less intense and less fun than in MI 2. But then John Woo is a master at staging fight scenes.
  8. Yes, I thought about that after I posted. Having seen Sonnenfeld from Company 3 speak about DI, I believed he was calling HD post 1K.
  9. 2K is 2048x1556; there is no way to obtain true 2K resolution from the Viper or F900, since both cameras output a 1920x1080 raster.
  10. Superman has undoubtedly gone through extensive color grading and rotoscoping, the same as Star Wars and Sin City; what we are seeing is not the original camera video untouched. Miami Vice, meanwhile, has not gone through the same process. Michael Mann likely wants it to feel raw.
  11. I just spoke with a photographer friend of mine who gave me a lot of insight into what is going on in the digital photography realm. Now all of this makes a lot more sense to me.
First he told me most RAW files are proprietary. Canon, Nikon, and Sony lock their RAW image files, and software developers have to reverse engineer the proprietary file to be able to process it. That seems a bit crazy and unproductive to me. I asked why the photography industry has gone along with this. He says most people don't really care because they are already financially or emotionally invested in Canon or Nikon. If people have spent thousands of dollars on Canon EOS lenses they will stay with Canon digital cameras to continue using their lenses; the same with Nikon F mount lenses. So it doesn't much matter if the RAW format is proprietary. He tells me Adobe has attempted to develop a common RAW format for everyone to use, but neither Canon, Nikon, nor Sony will adopt it.
Next he tells me that all of those different RGB color space tables have been created for specific reasons. sRGB was created in 1996 by Microsoft and HP as a way of dealing with computer and internet pictures on the cheap, uncalibrated 8 bit computer screens that 90% of the world's PCs use. sRGB has a low sample rate and a smaller color gamut, so that a picture will appear fine on a lowest-common-denominator screen running at 800x600 with 256 colors. sRGB has too small a color gamut to be used effectively for CMYK print; he tells me it should only be used for computer display. He tells me sRGB has become popular for consumer digital cameras because it is easy to process. At most, consumers are generally going to make 4x3 prints from a 3 megapixel JPEG, and at that small a picture accurate color reproduction doesn't matter.
He tells me Adobe RGB has been the standard color space for professional digital CMYK print. Adobe RGB has a much larger color gamut and can accurately reproduce CMYK prints. He tells me all pro photographers making prints from RAW images should at least be using the Adobe RGB color space. Adobe has a newer color space called Adobe Wide Gamut RGB, which is larger than Adobe RGB. He says Wide Gamut RGB is made to work with 16 bit files; anything less could produce severe digital artifacts from lack of information. The downside to Wide Gamut RGB is that as the color space becomes larger the file becomes larger, and the longer it takes to render and process. He says there is another color space called ProPhoto, developed by Kodak, but it's such a large color space that only those shooting extremely high resolution digital need to deal with it.
I asked him about using different color space tables for aesthetic reasons. He tells me that in his opinion color space serves more of a technical function than an artistic one. If you shoot RAW and use sRGB, all you have done is significantly decreased the amount of color you can use. He tells me amateur photographers, or a photographer new to digital who hasn't learned about digital photography, probably would not know these differences and would not know how to use color space correctly. We also had a whole conversation about megapixels and how they are a marketing gimmick and fairly useless for actually determining the quality of a camera.
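To put a number on "significantly decreased the amount of color you can use": converting between RGB spaces is just a matrix transform, and wide-gamut colors land outside the smaller space. A quick sketch using the standard published Adobe RGB (1998) and sRGB matrices (both D65, linear light); a fully saturated Adobe RGB green comes out with negative sRGB components and has to be clipped:

    # Adobe RGB (1998) -> XYZ -> sRGB, standard matrices, linear light.
    ADOBE_TO_XYZ = [
        [0.5767309, 0.1855540, 0.1881852],
        [0.2973769, 0.6273491, 0.0752741],
        [0.0270343, 0.0706872, 0.9911085],
    ]
    XYZ_TO_SRGB = [
        [ 3.2404542, -1.5371385, -0.4985314],
        [-0.9692660,  1.8760108,  0.0415560],
        [ 0.0556434, -0.2040259,  1.0572252],
    ]

    def mat_vec(m, v):
        return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

    adobe_green = [0.0, 1.0, 0.0]   # fully saturated Adobe RGB green
    srgb = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_green))
    print([round(c, 3) for c in srgb])   # ~[-0.398, 1.0, -0.043]
    # Negative components are outside the sRGB gamut; clipping them to 0
    # throws that color information away permanently.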
  12. Negative is limited by its physical properties. All color negative films are made of essentially the same basic chemicals, and they all contain cyan, magenta, and yellow dyes that record basically the same limited spectrum of light. In the digital world, without a working standard, any manufacturer can make a sensor that records any resolution, at any bit rate and sample rate, in any color space, and it will not work with any other digital format.
What you have described doesn't really have anything to do with a standard; it is a workflow. Workflows can differ from project to project depending on the project's needs. The substitute instructor's basic method of preparation is fine under certain circumstances. Without having done tests and actually shot film with the equipment, the DP is trusting that it will provide the look they want: they trust the rental house is giving them color-matched lenses, they trust the film stock will give consistent color and contrast at given exposures, and they trust the lab will print the look they intend, or that the telecine colorist will provide it. In situations such as episodic television, where production and post production have already established a workflow and can maintain a consistent look, minimal preparation is fine. But this way would not work well under every production situation.
The real importance of the lab test is the ability to see an unbiased performance of the equipment. The DP may see that a film stock does not give them the look they want at certain exposures, or that one lens is softer than another, or that the chocolate filter gives a better warming effect than the tobacco filter. A DP shooting a much more complex production, such as a film that travels to several different countries, or has extreme photochemical or digital manipulation, or two to three camera units, needs to test everything: establish negative densities and printer lights with the lab, and establish a final look with the telecine or DI colorist. This workflow is to ensure color remains consistent and the DP's original look is maintained through the production chain.
No, I've never heard of those numbers. At this point digital grading does not have standard numerical values of color such as the printer light system for film. The only real limiting factor is color space; the machine can tell when a color is within or outside of a given video format's color space. A numerical system for digital grading is currently in development and should be in use in the near future.
What makes the film scanning process different from digital photography is that at the end of the process the data will go into a set format with a consistent resolution and color space. 4K is always 4K; HDCAM is always HDCAM.
That's fine, but that leaves no way to take a basic picture and, through unbiased observation, see which system delivers the best performance based on similar variables. Because neutral skin tone, neutral color, and neutral gray scale are pretty universal concepts, it is possible to tell which lab you like better based on which one can deliver them accurately and consistently.
I did not take the football picture; it was on the website conducting the test between the four RAW converters. The entire picture was noisy. It was probably taken at 1000 ISO or higher, but I think that was the point: to see how the RAW converters could deal with noise.
That's what I'm saying, and there is no real way to say which is better.
  13. I actually think we are doing pretty well in respect to digital cinema. SD, HD, 2K, and 4K all have fairly set and agreed-upon resolutions and color spaces. Even if the cameras do not all create their images the same way, shoot the same format, or record to the same medium, the end result is pretty much the same, and moving from one to the other isn't too difficult because you are moving from one set standard to another. Digital post color correction is about the last place that needs some type of standard measurement. The ASC Technology Committee is working with the leading color correction vendors to come up with a consistent numerical system for digital RGB color; that should be in use sometime in the near future.
This is all true. But the test I described was meant to be a definitive judgment on which RAW converter is better and which is worse, and it was conducted with no consistency or image neutrality. A RAW converter has to place the image inside a color space container and give it a limited color gamut. If each RAW converter is using an entirely different color space table, then the comparison is apples and oranges. Three of the RAW converters are applying sharpening and noise reduction, which renders any real objective observation useless. Choice is fine, but choice is based on subjective opinion, for which there is no true measurement of right or wrong.
Before a professional cinematographer goes out on a big shoot he/she will need to know intimately the characteristics of the chosen film stock. This involves shooting tests. For example, if judging between shooting Fuji 8552 250T and Kodak 5217 200T, you shoot a scene with a model, grey chart, and color chart. You are looking to see how both stocks respond to underexposure, proper exposure, and overexposure. You need to know at what exposure and negative density both stocks will record accurate skin tones, neutral colors, and neutral gray. With this method you have a fairly objective view of how the film stock performs and are able to make your choice on whether to use it or not. One stock may tend to record more saturated colors and reddish skin tones at proper exposure; you go into your production knowing this, and you may want this look or you may compensate for it.
A shoot may last days, weeks, or months, and it is the responsibility of the DP to maintain consistency. After you shoot, the film will go to the lab and post production, where the DP no longer has direct control of it. So it's important to know what you want and how the film will respond. The nature of the telecine room makes this even more important, because much of the time the DP will not be there, and if you don't understand how to communicate what you need, you have no idea what you will get back.
If the various RAW converters are using different color space tables and applying sharpening or noise reduction, there is nothing neutral about that at all, as all of the results will be entirely different. Without the ability to run some objective or neutral test there would be no way to usefully measure empirical differences between imaging tools. We would be left with emotional opinion and manufacturers' marketing spin to tell us what performs better than what. Which seems to be the state of digital photography.
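For context on what such a numerical system looks like: the form that emerged from this ASC effort, the ASC CDL, expresses a grade as three numbers per channel (slope, offset, power) applied as out = (in * slope + offset) ** power. A toy sketch of that style of correction, with parameter values made up purely for illustration:

    # Slope/offset/power per channel, ASC-CDL style: out = (in*s + o)**p.
    def cdl_channel(x, slope=1.0, offset=0.0, power=1.0):
        v = x * slope + offset
        return max(v, 0.0) ** power     # clamp before the power term

    pixel  = (0.40, 0.35, 0.30)
    slope  = (1.10, 1.00, 0.95)
    offset = (0.01, 0.00, -0.02)
    power  = (1.00, 1.00, 1.05)
    graded = tuple(cdl_channel(c, s, o, p)
                   for c, s, o, p in zip(pixel, slope, offset, power))
    print(tuple(round(c, 3) for c in graded))   # a slightly warmer pixel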
  14. I've recently learned quite a bit about digital photography workflow, and what I've learned is not very encouraging. A computer-based web site has published a test of RAW image conversion and management software. An earlier test by this same site evaluated Apple Aperture 1.0 and the Adobe Lightroom beta; Aperture 1.0 was found to have a bad RAW image converter. Apple quickly went back and fixed Aperture and recently released the 1.1 update. This latest test covered Aperture 1.0, Aperture 1.1, Capture One Pro, and Adobe Lightroom. There were several test pictures rendered by each converter. As I looked at the same picture rendered by all four RAW converters, I asked myself what common thread between them makes this evaluation valid. There has to be some neutral benchmark they started from.
In one picture of a cat with black fur, the Aperture 1.0 render is low contrast and you can see noise in the dark areas. In the Aperture 1.1, Capture One Pro, and Adobe Lightroom renders there is more contrast and the darker areas have less detail, which hides the noise visible in the Aperture 1.0 render. Another picture is of a high school football game at night, with an extremely underexposed background. The Aperture 1.0 picture has lots of noise; the other three renders have noise, but not nearly as much. It's clear that some type of noise reduction or sharpening was applied to smooth over the noise. Another picture is an extreme close-up of a dog. On the dog's nose, near the middle of its eyes, you can see some type of digital artifacting. The artifacting is very noticeable on the Aperture 1.0 render and less noticeable in the three other renders, but you can see that it is there; some type of algorithm had been applied to smooth it out. The conclusion of the test was that Aperture 1.1 performed much better than Aperture 1.0, and in some pictures Capture One Pro or Adobe Lightroom performed best.
In the open forum discussion of the test, my first question was: what exactly is it evaluating? It's pretty clear that Aperture 1.1, Capture One Pro, and Adobe Lightroom are increasing contrast, sharpening, and smoothing out defects from the RAW Bayer data, while Aperture 1.0 wasn't making any of these corrections, so its image did not look as good. These are interpretations of the Bayer data, not necessarily the true picture. Is the point of RAW conversion to show you what the picture looks like, or to make your picture look better than what the camera captured? The response I got back essentially said that RAW conversion is an interpretation, and too complex a topic to determine the truest rendering.
So I explained that I work in film/video. When we evaluate our imaging equipment we start with a neutral benchmark: generally we want the default image to have accurate skin tone, neutral color, and a neutral gray scale. I thought this would be important in still photography also. Some people responded that accuracy is not very important in still photography, and that a photographer or photo editor with a good eye will create a nice looking picture. Some people said accuracy is important in still photography and is possible, but that capturing as much information as possible matters more.
My next point to them was that to make an objective evaluation of different RAW converters you need to start at a common point of reference. In this test, having three converters apply sharpening and noise reduction skews the results even further. Someone points out that these RAW converters are using three or four different color space gamuts: Adobe RGB, sRGB, ProPhoto RGB, and Lightroom is using Adobe Camera Raw. Then I respond: if all of these converters aren't even using the same color space, how can anyone possibly give a useful evaluation of their performance? With different color spaces, the histograms of all of the pictures will be entirely different. And even if all of the converters were using the same color space, such as Adobe Camera Raw, would the numerical values for luminance and chrominance be the same?
Someone else says that color accuracy isn't as important for photography as it may be for motion picture film, because a photo shoot lasts only a few hours versus a film shoot over weeks or months. I respond that I understand that, but color consistency and accuracy are how we in motion pictures judge all of our equipment. I don't understand how you can make a clear judgment between film stocks, digital sensors, lenses, or color conversion software if you cannot start from a neutral place; too many variables will cloud the results. The whole thing sounds like a mess to me, and I pray that digital cinema does not follow the same path.
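To make "interpretation of the Bayer data" concrete: each photosite on a Bayer sensor records only one of the three channels, and the converter must estimate the other two. Even the crudest approach, bilinear interpolation over an RGGB tile, is already a rendering decision; real converters differ exactly at this step. A toy sketch:

    # Bilinear demosaic of one pixel on an RGGB Bayer mosaic (toy example;
    # real converters use far more elaborate, and different, algorithms).
    def neighbors(raw, y, x, offsets):
        h, w = len(raw), len(raw[0])
        vals = [raw[y + dy][x + dx] for dy, dx in offsets
                if 0 <= y + dy < h and 0 <= x + dx < w]
        return sum(vals) / len(vals)

    def demosaic_pixel(raw, y, x):
        """RGGB tile: even row/even col = R, odd/odd = B, the rest G."""
        cross = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        diag = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
        if y % 2 == 0 and x % 2 == 0:   # red photosite: estimate G and B
            return raw[y][x], neighbors(raw, y, x, cross), neighbors(raw, y, x, diag)
        if y % 2 == 1 and x % 2 == 1:   # blue photosite: estimate R and G
            return neighbors(raw, y, x, diag), neighbors(raw, y, x, cross), raw[y][x]
        # Green photosite: R and B come from opposite axes depending on row.
        rb_h = neighbors(raw, y, x, [(0, -1), (0, 1)])
        rb_v = neighbors(raw, y, x, [(-1, 0), (1, 0)])
        return (rb_h, raw[y][x], rb_v) if y % 2 == 0 else (rb_v, raw[y][x], rb_h)

    raw = [[0.9, 0.4, 0.8, 0.5],
           [0.3, 0.2, 0.4, 0.1],
           [0.7, 0.5, 0.9, 0.4],
           [0.2, 0.3, 0.3, 0.2]]
    print(demosaic_pixel(raw, 1, 1))   # estimated (R, G, B) at a blue photosite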
  15. Somehow I figured that's what you would say. For the past eight years there has been this belief that (insert name of new digital camera) will revolutionize filmmaking. In that time there have been some interesting new developments, but by and large the studio-dominated business model has continued uninterrupted. Many people on the outside looking in erroneously believe film itself is the barrier between them and the industry. While it can be a barrier, it's not really the main one; there are probably tens of thousands of movies shot on film that will never see wide distribution. The movie business is far too complex to be simplified down to what camera format a movie was shot on. In the larger scheme of things that doesn't even matter. In the indie world, the reason a movie shot on film could potentially sell better than one shot on video is that a production shooting film will take more care with its quality: a movie shot on film will more likely have a decent script and hire an experienced crew. Hollywood continues to shoot 35mm partially because of legacy and momentum, but mostly because 35mm film truly offers advantages over other formats. If you are paying your star actor $20 million and have a well paid and experienced production crew, why use an inferior recording format? For the most part, no matter what format a movie was shot with, what makes the largest difference in being accepted for distribution comes down to the people who champion it: a well known actor, a powerful director, producer, or studio head. Those are the true gatekeepers who hold the keys.
  16. Why is RED your salvation? What will it be able to do for you that no actual working camera on the market cannot do now?
  17. The point of the post is that just because RED will possibly deliver uncompressed 1920x1080 HD does not mean that full production in uncompressed 1920x1080 HD will be affordable for the common person. I gave an example of a small company I know of and the components they bought for 1920x1080 post production. Not all the equipment was bought brand new; I believe the HD deck and Sony monitor were bought used. But it's not as if the price was significantly reduced; it was still well beyond what most people can pay. I don't see what is misleading about my post. If anyone were to purchase this equipment brand new, that is what it would cost. I take for granted that most intelligent people know it's possible to buy used equipment, and there is no standard price list for used equipment the way there is for brand new equipment. Of course there are other options besides what I listed, but I don't see how that invalidates what I said. It is possible to rent the most expensive components of the HD system. It is possible to rent the entire system and buy none of it. It is possible to just pay someone else who owns the equipment to do all of the work. All of these options have their advantages and disadvantages; as I said at the top of my original post, it depends on what you need to do. I believe Kim is replying to the fact that you expended more energy insulting my post than you did providing useful information to the discussion. If I don't know what I'm talking about, please do enlighten us.
  18. With any film image one has to keep in mind which camera and lens the image was shot with. If you shoot on a Bolex with lenses made 40 to 50 years ago, of course the image will not be as sharp as an F900 with a decent HD zoom. Fine grain film stock with the newest lenses will have far greater resolving power and a finer, more detailed image. Under those conditions, even with the extreme sharpness of HD, super 16 inherently possesses advantages that can easily outweigh the considerably subsampled and compressed HD formats. Once you get into 4:4:4 10 bit uncompressed HD, then I agree the technical advantages of HD can outweigh most of super 16's advantages, perhaps the last being archivability. Overall, though, at this point those who shoot super 16 do so for the look and advantages super 16 can offer their production, and those who shoot HD do so for the look and advantages HD can offer theirs.
  19. It can depend on circumstances; there are so many variables. If you shoot 7201 (50 ISO) with Master Primes and scan it at 3K, then in the totality of what makes an image you are working with a lot more real information than HDCAM's perceived sharpness. True, if the DP is shooting greenscreen with 500 speed film and crappy lenses, they are certainly doing you no favors. But these problems are circumstantial, not across-the-board characteristics of all super 16 film stocks or all lenses. Fine grain does, not large or coarse grain. You can also shoot super 16 at 30fps to match 1080i temporal resolution. The slower the film stock you shoot, the finer the grain you record and the more detail. There is no one definition of super 16 grain.
  20. In mathematical terms HDCAM's resolution is lower than super 16's and far lower than 35mm's. But in perceived resolution, because HDCAM is grainless, it will appear to match super 16. If you shoot with a slower speed film stock, the differences in grain and MTF diminish. I doubt super 16 would blow HDCAM out of the water in perceived resolution, because HD is grainless. But a super 16 image will contain more luminance and color depth.
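To put rough numbers on "mathematically lower," using HDCAM's commonly published recording specs (1440x1080 luma samples with 3:1:1 chroma subsampling at 8 bits):

    # Back-of-envelope sample counts for HDCAM's recorded raster.
    luma_per_line = 1440            # subsampled from the nominal 1920
    chroma_per_line = 1440 // 3     # 480 Cb and 480 Cr samples per line
    print(luma_per_line, chroma_per_line)   # 1440 480
    # Color detail is sampled at a quarter of the nominal 1920-wide raster,
    # which is one reason a good film scan can carry more color information
    # even when perceived sharpness looks similar.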
  21. Would capturing, storing, editing, and blowing up to a print from 4K data really be much cheaper?
  22. No, it's not a myth; people have shot endless tests, and super 16 is more popular today than it has ever been. Whenever you place a super 16 image next to an HD image you will always see grain in the super 16 image, because the image is made from crystals. But just because you see the grain does not make the image lower resolution. What are you using as an objective test of resolution? Broad statements like this make me wonder whether the people who make them really even shoot film, because grain from 7212 will be nothing like grain from 7279, and variances in exposure will not reproduce the same amount of grain even from the same film stock.
  23. From what I've been told they are actual red, green, and blue LEDs.
  24. The Arriscanner is a scanner, while the Spirit is a telecine. Telecine is a realtime process, while scanning is not. Scanners use higher resolution sensors than telecines, and yes, they also record more luminance information. The Arriscanner uses separate red, green, and blue LEDs to flash the sensor. It takes one RGB picture for the highlights, another RGB picture for the shadows, and combines the two. The good part is you get a high resolution image with most of the detail captured on the negative; the bad part is it's a slow process.
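The two-pass idea is easy to see in miniature. Here is a toy sketch of merging a shadow-biased exposure with a highlight-biased one (purely illustrative, not Arri's actual method): near clipping, the merge blends over to the shorter exposure scaled by its known gain.

    # Merge two exposures of the same frame (toy illustration).
    def merge_exposures(shadow_pass, highlight_pass, highlight_gain, knee=0.8):
        """shadow_pass: longer exposure, clean shadows, clipped highlights.
        highlight_pass: shorter exposure, scaled up by its known gain.
        Blend toward the highlight pass as the shadow pass nears clipping."""
        merged = []
        for s, h in zip(shadow_pass, highlight_pass):
            w = min(max((s - knee) / (1.0 - knee), 0.0), 1.0)
            merged.append((1.0 - w) * s + w * h * highlight_gain)
        return merged

    shadows = [0.02, 0.40, 0.85, 1.00]      # long pass; last value clipped
    highlights = [0.005, 0.10, 0.21, 0.30]  # short pass, 4x less exposure
    print([round(v, 3) for v in merge_exposures(shadows, highlights, 4.0)])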