Everything posted by Peter Moretti

  1. I too thought that Christian Bale and Amy Adams were excellent. I do think the script had some main-character issues. Namely, there wasn't enough Micky Ward in the film; I left it not knowing nearly enough about who he is as a person, and he seemed rather passive as a character. That said, I enjoyed the film, and I appreciate all the work that went into it even more. I had no idea that old Betacams were used for the fight scenes; I just figured it was some post process that created the video effect. Very cool to know.
  2. I don't think he's trying to impress anyone. He explained the test and its potential value. Why you have to impute to him some desire to impress people is beyond me.
  3. I am very sorry to hear of your loss. Memorial Day is a time for remembrance. I will include you and Paul in my prayers. Please take comfort in knowing that Paul was very appreciated here and that we will miss him.
  4. The Alexa is designed to be upgradeable to a large degree by the user, via replacing certain modules. Arri doesn't say anything about the sensor, but that doesn't mean it couldn't be replaced if the camera were sent back to the factory.
  5. Chris, I agree that full frame and beyond are pretty limited in their application, not to mention lens offerings, but we'll see if it finds a good use. Thinking more about what Georg wrote, I have to disagree w/ his point that adding pixels is easy. It IS easy to add more pixels to a cellphone or other still-camera sensor, but it's very difficult to read them fast enough for them to be useful in a motion camera. This is why vDSLRs have lower resolution in video than in stills; they essentially have to ignore some pixels to be able to read the sensor fast enough. And unlike the stills world, where who knows what the delivery resolution will be, we know motion is delivered in essentially SD, HD and 2K. Arri's sensor has enough resolution to go up to 2K, so adding more pixels hits diminishing returns in resolution and could become detrimental in other areas. But if delivery goes above 2K, then Arri's sensor will be the one that comes up short on resolution. (A rough sketch of the readout arithmetic follows the list below.)
  6. Georg, thanks for the explanation; I think you bring up some very good points about cramming pixels into a given sensor size. However, some of Red's offerings will have full-frame (24mm x 36mm) and larger sensors, not S35. In those instances, it seems more pixels could be rather beneficial.
  7. Also, regardless of what the graphs show, the filters do change the coloration of visible light as well; the colors lose some pop. So if the camera can help you know whether there is a problem before you apply the cure, that would be nice. BTW, I wonder whether people are using black lens caps that block both IR and visible light when black-calibrating their Red Ones. If not, it would seem that their black calibration could sometimes be off due to IR contamination.
  8. In the meantime, perhaps use something like a "black balance" card, made of a material that is black in both the visible and IR spectrums. Have someone hold it up against what you believe should be black in the composition and eyeball the difference on the monitor. W/ the newer touch-screen cameras, it would help to be able to isolate a small part of the composition and view its R, G, B values, an RGB histogram, or a vectorscope trace. That would let you evaluate how the camera is seeing the portion of the composition you believe should be black. (A sketch of this region-sampling idea follows the list below.) [I edited this post after thinking about the problem more.]
  9. Mike and Jim, Thanks very much for the Q & A. Jim, this is very informative; thank you. And Mike, thanks for staying on message ;).
  10. Come on Mike, this is a very narrow reading of what he said. It's not that film needs to improve as much as that another technology--which seems to be benefiting from Moore's Law--is chasing film.
  11. Exactly, that's what was so audacious about Moore's Law when it was introduced in 1965 and when Andy Grove claimed it as a guiding principle for Intel. And yet it reliably predicted the rate of increase in semiconductor power until very recently. I don't think Jim is espousing a truly literal application of it to digital cinema. (Moore's Law doesn't even apply to computer systems as a whole. CPUs may have doubled their MHz every 18 months, but whole systems didn't. I can still boot up an old Compaq 486 laptop running Windows 3.1 pretty quickly.) It is used as a colloquialism for the rapid rate of advancement in digital technology.
  12. Mike, do you really not understand what is meant? Digital cinema technology is improving at a faster rate than photochemical film technology is. If you extrapolate their rates of improvement, it seems that digital will continue displacing film. That said, improvement in film is an ongoing process as well, as I'm sure the Kodak scientists who formulated Vision3 would attest.
  13. There may not have been a lot of effects, but the film looked pretty heavily graded to my eyes. Both the overall looks (almost burnt yellow in the beginning, and an abundance of dark green/blue when the Basterds are in the field) and the secondary color correction (popping the blue in eyes) may have helped make the case for a DI.
  14. Jim, while your general observation is spot on, Nikon indeed still makes film cameras. http://www.nikonusa.com/Find-Your-Nikon/Fi...mera/index.page Think of the F6 as a VistaVision-frame-sized DSMC, w/ no "D" and no "M."
  15. I just finished watching it. It is beautiful and moving. Yes, it's slow at times. No, it does not follow a Syd Field or Robert McKee structure. But it is a very worthy film, and it's a great accomplishment that it DOES work w/o following storytelling "rules." As for the technical details, the F900 looked gorgeous: beautiful bokeh and minimal noise, even though many of the shots are in low light and every composition used only available light. I have no idea what was done while filming or in post, but this bore no resemblance to the hard video look of films like "Attack of the Clones." (I realize the camera has been updated since then, so maybe that accounts for it, IDK?)
  16. "Avatar' doesn't even prove that 3-D cinema needs to be pursued. I'm all for space exploration, increasing NASA's funding, etc.. But to think there is something unique to earth that makes man behave badly on it is just folly. The earth is a rather nice planet, from what I can gather. If we have a problem living on it, it is WE who need changing, not our location to some space station or colony.
  17. Man's destiny is inside him. Changing the scenery doesn't change who we are.
  18. I just finished watching "How the West Was Won." The hand-painted titles by Pacific subtly add to the period nature of the piece.
  19. Their pixel counts are very large, and as pixel count increases, the need for an OLPF decreases. (A quick Nyquist sketch follows the list below.)
  20. Hey John, so if I follow you correctly, the poor blue performance of some video cameras is not caused by silicon lacking sensitivity to the blue end of the spectrum. Rather, it's caused by dye filters doing a poor job of passing blue while blocking other wavelengths. This makes sense to me, as I vaguely remember reading that silicon is actually quite sensitive to blue.
  21. Thanks guys, I very much appreciate your help! I believe I understand the function of cones in the eye as they relate to what we call primary colors. What I still don't understand is the wavelength delineations used for R, G and B in prism and mask designs. Most succinctly put: is there a wavelength gap between G and R? Take a Bayer mask, for instance. Is G 495nm to 570nm and R 620nm to 750nm? Or is G, say, 495nm to 600nm and R 600nm to 750nm? In the first case, G and R correspond to what we generally recognize as green and red, but wouldn't there be a region between 570nm and 620nm where the sensor is completely blind? Wouldn't a beam of yellow-only light (e.g. 600nm) be unable to pass through the mask, and hence never reach the sensor? In the second case, G and R would have no wavelength gap between them, but each would extend past what we call green and red and include a portion of the yellow spectrum. (A stylized sketch of the filter overlap follows the list below.) Thanks again!
  22. John, thanks very much. This might seem like a very basic question, but what do the masks or prisms do with the parts of the spectrum that are not red, green or blue? I understand that R, G and B are primary colors and can be combined to make a wide range of other colors called a gamut. But is that really the same as saying that if you take only the R, G and B wavelengths from an incoming beam of visible light and recombine them, the result is exactly the same as the original incoming beam? This doesn't make sense to me, because there are seven named colors in the visible spectrum: violet (380–420nm), indigo (420–450nm), blue (450–495nm), green (495–570nm), yellow (570–590nm), orange (590–620nm) and red (620–750nm). Now, I can understand violet and indigo being grouped with blue, and maybe red and orange being grouped together. But where does yellow go? Is it part of green? Part of red and orange? Is it excluded? I've been reading about color theory, but am still very confused, obviously, LOL!
  23. It seems that prism designs which split light into R, G and B wavelengths should have an advantage over Bayer mask designs: more light and co-sited colors seem hard to beat. However, the need for more sensors, very precise lenses and finely aligned prisms is noted as a set of technical hurdles (especially with large sensors). But is there another drawback to the beam-splitter approach, namely how precisely the beam is split? For example, the three-prism designs I've looked at contain a filter that reflects red and transmits green, but such a dichroic mirror should have a transmission and reflectance overlap around its crossover frequency. I.e., around the border between red and green, some red is transmitted and some green is reflected. Which has me wondering: are masks more precise than prisms and mirrors when it comes to separating colors? For example, if a Bayer mask designates that green ends at 570nm and red begins just above 570nm, can this separation be made so precisely that green transmission is ~100% right up to 570nm and ~0% above it, while red transmission is ~0% at 570nm and ~100% above it? And is such additional precision an advantage of the mask approach? (A sketch of the dichroic crossover follows the list below.) Thanks much!
  24. Fix her?! She sounds like the perfect girl for many guys. :wub:
  25. I watched "Damnation" earlier this week. The opening shot was extraordinary, as were many others. As a film, I felt the sum of the parts was greater than the whole.
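
A rough sketch of the readout arithmetic behind post 5. The sensor dimensions, frame rate and bit depth below are illustrative assumptions (a roughly 21MP stills-style chip versus a 2880x1620 cinema-style window), not manufacturer specifications; the point is only how quickly raw pixel throughput grows at 24 fps.

    # Illustrative pixel-throughput comparison; the sensor sizes are assumed, not official.
    SENSORS = {
        "21MP stills-style sensor": (5616, 3744),   # assumed full-resolution stills readout
        "2K-class cinema sensor":   (2880, 1620),   # assumed 16:9 photosite window
    }
    FPS = 24    # cinema frame rate
    BITS = 12   # assumed bits per photosite sample

    for name, (w, h) in SENSORS.items():
        pixels_per_frame = w * h
        pixels_per_sec = pixels_per_frame * FPS
        gbits_per_sec = pixels_per_sec * BITS / 1e9
        print(f"{name}: {pixels_per_frame / 1e6:.1f} MP/frame, "
              f"{pixels_per_sec / 1e6:.0f} Mpix/s, ~{gbits_per_sec:.1f} Gbit/s raw")

Reading every photosite of the stills-sized sensor at 24 fps needs roughly 4-5x the throughput of the smaller cinema sensor, which is the pressure that pushes vDSLRs to ignore or bin pixels in video mode.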
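
A minimal sketch of the region-sampling idea in post 8, assuming the frame is already available as an (H, W, 3) RGB numpy array; the stand-in frame, the crop coordinates and the "black card" patch location are all hypothetical.

    import numpy as np

    def region_rgb_stats(frame, top, left, height, width):
        # Mean and max R, G, B inside a rectangular patch of an (H, W, 3) RGB frame.
        patch = frame[top:top + height, left:left + width, :].reshape(-1, 3)
        return patch.mean(axis=0), patch.max(axis=0)

    # Hypothetical usage: a random stand-in frame, and a 50x50 patch where the
    # black-balance card sits in the composition.
    frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
    means, maxes = region_rgb_stats(frame, top=500, left=900, height=50, width=50)
    print("mean R,G,B:", means, " max R,G,B:", maxes)
    # On a genuinely black, IR-free patch all three means should sit near the noise
    # floor; a red channel lifted relative to green and blue would hint at IR contamination.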
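
A quick Nyquist sketch for post 19, using assumed figures (a 25mm-wide sensor and a notional 100 lp/mm lens limit) to show why a denser sensor needs less optical low-pass filtering: once the sensor's Nyquist frequency climbs past what the lens can resolve, the lens itself does most of the anti-aliasing.

    # Assumed, illustrative figures: one sensor width, two horizontal photosite counts.
    SENSOR_WIDTH_MM = 25.0
    LENS_LIMIT_LP_MM = 100.0   # assumed practical lens resolution, in line pairs per mm

    for h_pixels in (2880, 6000):
        pitch_mm = SENSOR_WIDTH_MM / h_pixels      # photosite pitch
        nyquist_lp_mm = 1.0 / (2.0 * pitch_mm)     # sensor Nyquist frequency
        side = "above" if nyquist_lp_mm > LENS_LIMIT_LP_MM else "below"
        print(f"{h_pixels} photosites: pitch {pitch_mm * 1000:.1f} um, "
              f"Nyquist {nyquist_lp_mm:.0f} lp/mm ({side} the assumed lens limit)")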
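
A stylized sketch for the questions in posts 21 and 22. The Gaussian curves below are made-up stand-ins for real dye-filter responses, but they illustrate the key point: the B, G and R passbands are broad and overlap, so a 600nm "yellow" beam still registers on both the green and red photosites instead of falling into a gap.

    import math

    def transmission(wavelength_nm, center_nm, width_nm):
        # Stylized filter response (0..1); real dye curves are broader and asymmetric.
        return math.exp(-((wavelength_nm - center_nm) ** 2) / (2 * width_nm ** 2))

    # Assumed, illustrative centers and widths for a Bayer-style mask.
    FILTERS = {"B": (460, 35), "G": (540, 45), "R": (610, 40)}

    for wl in (470, 540, 570, 600, 650):   # 600nm is the "yellow only" case from post 21
        response = {name: transmission(wl, c, w) for name, (c, w) in FILTERS.items()}
        print(f"{wl}nm -> " + ", ".join(f"{k}={v:.2f}" for k, v in response.items()))

At 600nm both G and R respond strongly; the ratio between them is what lets the camera (and the eye, via its own overlapping cone sensitivities) call the light yellow. So yellow is not excluded, it is simply encoded as a mix of the green and red channels.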
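
Finally, a sketch for the dichroic question in post 23, modeling the red-reflecting / green-transmitting coating as a logistic edge; the crossover wavelength and slope values are assumptions for illustration, not measured data. However steep the coating's edge is made, the beam is split roughly 50/50 at the crossover itself, which is the overlap the post describes, and dye masks have the same kind of gradual roll-off rather than a brick-wall cutoff.

    import math

    def green_transmission(wavelength_nm, crossover_nm=575.0, slope_nm=8.0):
        # Stylized dichroic: fraction sent to the green channel (the rest reflects to red).
        return 1.0 / (1.0 + math.exp((wavelength_nm - crossover_nm) / slope_nm))

    for slope in (15.0, 4.0):   # a shallower versus a steeper coating edge
        print(f"edge slope {slope} nm:")
        for wl in (555, 575, 595):
            t = green_transmission(wl, slope_nm=slope)
            print(f"  {wl}nm: {t:.2f} to green, {1 - t:.2f} to red")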