
Alvin Pingol

Everything posted by Alvin Pingol

  1. >>With careful, professional lighting the Z1 HDV 1/3" cam can produce good results in interlaced 1080i mode, but it generally requires more light than the cams mentioned above. Just don't use its Cineframe fake "24p" mode.

     I believe his choices were between a DV5100 and the new HD100, which I feel is a significant step up from the Z1. If your final cut is intended for SD only, I would go for the 5100. It's a much more professional camera with better electronics; the cost of the 5100 body is the same as the HD100 *with* the Fujinon ProHD lens...
  2. Interesting. The interlaced image almost appears sharper than the 24P version, which upon close inspection appears to have slightly stronger sharpening. I know you kept the other settings identical, so this is a little odd. Were your pulldown settings correct for the 24P frame? I'm seeing heavy aliasing on near-horizontal lines, such as on the edges of the wooden table and plate, which is much less visible on the interlaced frame. The low edge enhancement on the film-look frame helps a lot, but I'm not seeing a substantial increase in dynamic range. In fact, it almost seems as if highlights are being sacrificed faster; highlight detail on the label of the Gerolsteiner bottle is harder to see compared to the default-settings frames.
  3. I too prefer the second image to the first, as it looks a lot smoother and less harsh (though it is hard to tell with this subject matter) - less "video". The first image's settings, minus the sharpening, would probably look pretty good too. Running the same test with a MacBeth color checker, plus some high-contrast scenes and landscapes, would be really informative.
  4. >>HDV is not actually putting any more data on the tape than DV, it's simply compressing HD so much that it can fit in the same bitrate as DV already uses

     Less, actually! (19.4 Mbit/sec vs. 25)
  5. A few oddities:

     >>In addition, film color temperature is much lower than video, giving uncorrected film color a much "redder" tint.

     I thought CCDs were calibrated to 3200K. I'm pretty sure no one makes stocks balanced for below 3200K. Oh, and that "redder" film tint is dependent on both film stock and processing, I'd think...

     >>Thirdly, the video color gamut is somewhat greater than that of film. This gives video the capability of reproducing a slightly wider range of colors than film, adding to the color "snap" of HDTV video.

     Har, har. That "snap" is the exact opposite of a wider color gamut, wouldn't you say? Since when does color quantization = a "somewhat greater" color gamut?

     >>Most film cinematographers make full use of the depth-of-field capability; making sure objects in the background are highly defocused

     Depends on the subject matter!

     >>the majority of scenes of both today's theatrical and film-based productions (e.g., most prime time programming) are shot at very low light levels.

     Since when? If anything, light levels for a film-based production would be, in general, greater than those for a video-based production. I'm done. ;)
  6. >>Well we also need to qualify if we're talking SD "pro imaging" or HD "pro imaging"

     AFAIK, Digibeta is all standard-def...
  7. >>>Pro imaging starts at 90Mb/sec with DigiBeta

     >>DVCPRO50, anyone?

     Hah. Perhaps he feels the SDX900 isn't "pro imaging".

     >>So HDV is 19.4 and standard DV is 25? Just clarifying for my own warped little mind...

     Right. HDV compression, however, works in such a way that previous and future frames can incorporate data from each other, making it much more efficient than DVSD, in which each frame is compressed independently.
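[Editor's note: a back-of-envelope sketch of the per-frame budgets behind the numbers in the post above, assuming the nominal 25 and 19.4 Mbit/sec stream rates and a rounded 30 fps. The constants are illustrative, not exact stream accounting.]

```python
# Rough per-frame bit budgets for DV vs HDV (nominal rates).
DV_RATE = 25_000_000    # DV essence, bits/sec -- every frame intraframe
HDV_RATE = 19_400_000   # HDV 1080i MPEG-2 transport, bits/sec -- long-GOP
FPS = 30                # nominal NTSC frame rate (29.97 rounded)

dv_bits_per_frame = DV_RATE / FPS    # each frame coded independently
hdv_bits_per_frame = HDV_RATE / FPS  # an average; the GOP spreads bits
                                     # unevenly across I/P/B frames

print(f"DV:  {dv_bits_per_frame / 1e6:.2f} Mbit/frame")
print(f"HDV: {hdv_bits_per_frame / 1e6:.2f} Mbit/frame")
# HDV averages fewer bits per frame than DV while carrying far more
# pixels -- interframe prediction is what makes that possible.
```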
  8. I wouldn't get laser eye surgery unless my vision was really quite bad (I'm actually up to -4.25 in the left eye, or about 20/450 in Snellen measurement), and I was involved in contact sports. Or swimming. Although contacts were treating me well, they never seemed to produce as sharp an image as did my glasses. Optics are optics; a thin, flexible, stretchy piece of hydrophilic material simply cannot resolve the amount of detail our brain is able to interpret. I got a stronger prescription thinking this was the problem, but nothing really changed. Having said that, I'm back to glasses. I don't mind seeing the world through a lens - that is, of course, how cinematographers work, anyway ;) . And that $60 antireflective coating really does wonders.
  9. I find heavily-gelled (red, blue) backlights tend to work well on subjects in white labcoats, especially if the scene is lit low-key.
  10. Thanks Phil ;-) Mark I'm sorry to hear about your loss!
  11. >>It's noise; yes, horrible, isn't it. It's worse here because I used the Photoshop desaturate command, which seems to just take max(R,G,B) as many computer programs do. Much cleaner to use the HSL filters which take sum(R,G,B)/3 and thereby draw information from all three channels.

      So that's what that does! I've always wondered why it looked a little different. Neat. I believe this may have been fixed in Photoshop 7; the desaturate command and the slider on the HSL prompt produce identical results. Great shots! Though the rimlight on the drummer is a little intense - it would look more appropriate, IMO, placed at an angle behind, pointing down on the drummer... The contrast achieved with the soft sources on the lead singer is perfect - the sources create pleasing shadows without making the subject look flat. I like it.
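[Editor's note: the two desaturation formulas discussed in the post above, as a minimal sketch. The max(R,G,B) behavior is as the quoted poster describes it; actual Photoshop internals may differ.]

```python
def desat_max(r, g, b):
    """Grayscale as max(R, G, B) -- the behavior described in the quote."""
    return max(r, g, b)

def desat_avg(r, g, b):
    """Grayscale as (R + G + B) / 3 -- draws on all three channels,
    so single-channel noise gets averaged down by a factor of three."""
    return (r + g + b) / 3

pixel = (200, 40, 40)  # a saturated red
print(desat_max(*pixel))  # 200 -- one noisy channel dominates
print(desat_avg(*pixel))  # ~93.3 -- darker, noise spread across channels
```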
  12. Don't completely disregard 60i-only cams! I've been sold by area-based deinterlacers. They do their job well.
  13. Beautiful shots! :) Can you describe your lighting setup for these?
  14. Viruses really aren't what people should base their OS choice on; if you have proper antivirus software installed and update it regularly (or set it to auto-update, a common feature in pretty much every antivirus package available [even free ones]), your Windows system will be clean. I haven't gotten a virus in a long, long time. I think Intel vs. AMD is a much more reasonable argument. ;)
  15. She's not the main actress, but Dakota Fanning in I Am Sam, shot by Elliot Davis.
  16. Halogen worklamps are great solutions for inexpensive, wide-coverage hard light. Best used through some form of diffusion. They really are a headache to control, since you'll most likely be attaching some DIY barndoors/flags to cut spill, and you can't modify beam focus. They are also cumbersome and, along with their heavy duty stands, can be heavy. Fortunately, the long, tubular quartz-halogen replacement bulbs are only a few bucks and are easy to find. My local shop doesn't carry 1k worklamps, which is a shame; the 500w (x2) "T-bar" worklamp setup is great for shining through a diffusion frame. Just watch out when you start accumulating multiple worklamps and stands, as placing them in the largest case you can find for transport purposes becomes a real life game of Tetris. Be sure to get some extension cords and power splitters while you're at it...
  17. Hi Valentina, Keep in mind that the majority of the 'golden/yellow antique look' is a result of art direction, rather than merely the color of light. I would bet that with clever art direction, tungsten left a little warm, and a bit of desaturation, you will get the look you're after.
  18. I know how you feel. As soon as I received my Dell laptop I uninstalled pretty much all of the preloaded software (except for the CD burner app; I've had far too many troubles with burners not agreeing with my hardware!) When Microsoft introduces a feature where a window triggers a spectacular desktop ripple effect, a la Widgets, I'll smile. But hey, transparent windows in Vista might be cool...
  19. You can currently see something similar on television. Sporting events (tennis matches, often) use 2x slow motion playing back at 60i. The fluidity is amazing - it would be awesome to see this on a big screen...
  20. A period film, in anamorphic. That's my dream. :-)
  21. There are many reasons why a DP would choose a scope extraction from an S35 frame over true anamorphic. Cost and availability of equipment/processing can be one factor. The lighting situation is another - opening up on an anamorphic lens introduces significantly more artifacts like flare and distortion, versus a spherical lens. Some DP's may not like the natural distortions created by anamorphic lenses, or may feel they do not fit well with the subject matter being photographed. And if I'm not mistaken, anamorphic lenses are a bit more difficult to work with when it comes to focus and overall operation, but this shouldn't be problematic for DP's and crews familiar with scope.
  22. Hi, I'm hoping Thomas Worth's article may have cleared up some things for you, as it details the same thing I'm describing (60i to 60P to 24P). My process involves taking the 60i footage through a custom script written for AVISynth, a very handy tool that unfortunately scares most people away because it has no GUI. Once you read the script documentation, it becomes quite easy. Anyway, the script takes the 60i footage and applies a process called bob deinterlacing, which (1) splits the 60i footage into 60P, doubling the framerate and halving the vertical size, (2) resizes the 60P back up to full frame [bicubic interpolation], and (3) shifts the even and odd frames down a half pixel and up a half pixel, respectively. This is where "bob" comes in; the frames are bobbing up and down to counteract the natural location of even and odd fields, creating a fairly steady image. (AVISynth homepage) I then render the process in VirtualDub. Hope this helps.
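[Editor's note: a minimal numpy sketch of the three bob steps described in the post above, assuming a grayscale frame and substituting linear interpolation for the bicubic resize. `bob_deinterlace` is an illustrative name, not an actual AVISynth call.]

```python
import numpy as np

def bob_deinterlace(frame):
    """Bob-deinterlace one grayscale 60i frame into two 60p frames.

    (1) Split into even/odd fields (half height, doubled frame rate).
    (2) Resize each field back to full height (linear interpolation
        here, standing in for bicubic).
    (3) Offset the sampling grid half a line per field to compensate
        for where each field's lines actually sit -- the "bob".
    """
    h = frame.shape[0]
    even = frame[0::2].astype(float)  # lines 0, 2, 4, ...
    odd = frame[1::2].astype(float)   # lines 1, 3, 5, ...

    def resize_field(field, origin):
        fh = field.shape[0]
        # Output row y samples the field at position (y - origin) / 2.
        ys = np.clip((np.arange(h) - origin) / 2.0, 0, fh - 1)
        lo = np.floor(ys).astype(int)
        hi = np.minimum(lo + 1, fh - 1)
        t = (ys - lo)[:, None]
        return field[lo] * (1 - t) + field[hi] * t

    # Even-field lines originate at row 0, odd-field lines at row 1.
    return resize_field(even, 0.0), resize_field(odd, 1.0)
```

Each interlaced frame in yields two progressive frames out, both at the original full height.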
  23. >>60i is kind of like 60p except it is lower in resolution. So, converting to 60p isn't really doing anything besides introducing an additional compression stage.

      Not quite. Converting fields to frames (60i --> 60P) is something I do a lot of and have found very useful. Interlaced footage is a pain to work with, since the only proper way of viewing it is at full speed on an interlaced display. Converting fields to frames gives you much more flexibility. Plus, if compression is an issue, you can always do the conversion uncompressed. 60i and 60P are different beasts. When interlaced, two points in time are represented in one frame. This can cause major headaches when dealing with speed conversion, since the majority of software alters speed via framerate adjustment. Because each interlaced frame depicts TWO points in time, most software automatically deinterlaces the footage unless you tell it not to, in which case the interlace scan lines will be left in. This does not look pretty; when viewed on an interlaced set, each "frame" has its own sort of motion to it, an artifact of having those interlace lines still there.

      >>What you'd need to do is capture the 60i, insert in a 24p timeline, then "speed change" it to 40%. This should evenly distribute the 60 fields across 24 frames in the timeline.

      This assumes the software is converting 60i to 60P! You cannot just 'evenly distribute' fields, for one field is only half a frame. This is where fields-to-frames interpolation comes in, after which 60P can then be evenly distributed across 24 frames, at 40% speed like you said.
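[Editor's note: the 40%-speed arithmetic above, sketched as a frame-index mapping. `source_index` is a hypothetical helper, not part of any editing package.]

```python
def source_index(i, src_fps=60, dst_fps=24, speed=0.4):
    """Which source frame lands on output frame i of the timeline.

    step = (src_fps / dst_fps) * speed. At 60 -> 24 fps and 40% speed
    the step is exactly 1.0, so every bob-deinterlaced 60P frame maps
    one-to-one onto a 24P frame -- the "even distribution" above.
    """
    step = (src_fps / dst_fps) * speed
    return int(i * step)

print([source_index(i) for i in range(5)])  # [0, 1, 2, 3, 4]
# At 100% speed the step is 2.5, so frames get dropped unevenly:
print([source_index(i, speed=1.0) for i in range(5)])  # [0, 2, 5, 7, 10]
```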