
Lew Fraga

Basic Member
  1. @Richard - so, would a good email be something in the vicinity of, "My name is ----- ------, and I'd like you to check out my body of work, including..."? Like the OP, I'm in the mid-Atlantic (Richmond - 90 minutes south of you, Trent - hey neighbor!) and working mostly corporate shoots, but trying to get more into commercial entertainment (music videos and features). There just isn't the support network here like in NY or LA, so after many years it's been difficult to move ahead. Great thread, BTW - very constructive.
  2. I think you're right. Canon is taking entirely too long. Not that they should release a half-@$$ed cam just to have something out there, but I'd think the DSLRs would have done MOST of their R&D for this process (oh, it would be SO sweet if they could use the sensor and processors from their 1D Mark IV!!), and their 300 and 305 models would have proven their codec works fine. Panasonic's is $4,700-ish, though. It's serious competition price-wise considering the Sony F3 is $13,000 for the body.
  3. Yeah - I realize it doesn't mean everything. In fact, the Panasonic AF100's 24Mbps bit-rate isn't an across-the-board compression. Its processors evaluate each frame to figure out where to compress (big white wall) and save space, and where not to compress so detail holds (think tousled hair), so I imagine it looks beautiful. I have a JVC HD110, and looking at a still from that you can see the compression - but when it plays, it all blends together and disappears. And shots from that have gone through a lot of After Effects work and haven't fallen apart yet. Some review I just read about the new Canons (300 and 305 - not the DSLRs) talked about the visual difference between their 35 and 50Mbps rates and said they could barely see it at all. It's the techie, nit-picky post-production matter of having a little more color info and detail when it comes to stuff like heavy corrections and grading, greenscreen (VFX in general), so it holds up. Too compressed and it falls apart if handled roughly at all - one of the complaints I keep finding about the Panasonic GH1's video files - you'd better get it the way you want it in-camera, because if you color-correct or grade too much it'll start producing jaggies. But all the above - shooting action, or drama, comedy - no heavy VFX? As long as it's a good script and well put together, no one will be looking for jaggies, moire patterns, or compression artifacts in general. As one guy (whose name I can't remember right now) put it, this is analyzing it on a forensic level that no one but techies will EVER know or care about. (For what those bit-rates mean in storage terms, see the quick arithmetic sketch after this list.)
  4. Last Spring/Summer-ish there were rumors that Canon was working on a successor to the XLH1 with a Super 35mm-sized sensor, an EF lens mount, a base ISO of 800, and a 50Mbps codec, while trying to keep the price at around $8k. Then Panasonic announced their AF100 (24Mbps bit-rate). Then Canon came out with the 300 and 305. Lo and behold, they have a 50Mbps bit-rate. Then Sony announced their F3 (35Mbps bit-rate). I'm gonna go out on a limb and guess Canon is still working on theirs. Working on the rolling-shutter issues, possibly taking notes from the 1D Mark IV and having multiple video processors for unbelievable low-light capabilities (has anyone seen Vincent Laforet's "Nocturne"?!?! - all at freakin' 6,500 ASA!!). If they can do all that AND keep it to around the $8k price, they'll beat the tar out of the Sony with a 50Mbps bit-rate that will stand up even better to compositing and greenscreen. Just a hopeful guess. -Lew
  5. "And likewise for underexposure for bright highlights - there's no underdevelopment for shadow detail - you've got a sea of grain awaiting you." Sorry - I meant "overdevelop" for shadow detail.
  6. Yes, it can help with cinematography - though I have a feeling only REALLY with film, not digital. It is not only about proper exposure. It is about the math and methodology behind choosing how MUCH to over- or underexpose, and/or how MUCH to over- or underdevelop, to achieve a specific, artistically desired end result. Seeing more into the shadows without blowing out the highlights kind of thing - does that make sense? For cinema it REALLY needs to be tested BEFORE you shoot your projects. This isn't an "on the fly" assumption that it will work. Test your methods with a specific lab in order to get repeatable results - each lab will be a little different here and there, so stick with the lab you did your tests with. (The stop arithmetic behind that kind of re-rating is sketched after this list.) Now, I'm fully open to being wrong, so keep that in mind with the next statement. Given that there is no "development" with digital acquisition (you might quote RAW capture, but even THAT has its limits as far as tonal range goes), I can't see how the more in-depth aspects of exposure/development compensation would work for digital. If you've overexposed for your shadow detail, there's no such thing as underdeveloping for the highlights - they're officially blown out. And likewise for underexposure for bright highlights - there's no underdevelopment for shadow detail - you've got a sea of grain awaiting you. For digital, I think it is best to take it as a solid understanding of exposure. If your shadows are going to be too dark, better bounce more light in there, or get another lamp to even out the contrast. Get it? I hope this helps- -Lew
  7. Just watched all of Chapter 1. Chapter 2 doesn't seem to be up yet - I'll just keep checking back. Really enjoyed that! Nicholas Humphries made the comment that you guys get to make up the rules as you go, and he's right! As long as it's consistent, the Steampunk universe you guys have created works! I'm gonna echo Steve McBride - I didn't expect it to look that good! Usually I hear "web series" and come to find out it's a group of people running around with whatever $300 camcorder they could get at Best Buy. I'd definitely like to hear more information - I'm with a group of people trying to make a web series as well, so any information is great. Was the production able to do any storyboarding - whether the whole thing, or just a coupla signature shots you wanted - or did you decide on your blocking mostly when you got on set with the performers? And how closely did you stick to them, or did you find you started departing from them - maybe getting ideas for better staging, or just plain needing to speed things up (on the set of The Shield the cast and crew apparently had a saying for new directors - "Feature Film by day, Documentary by night!")? Were there any particularly difficult lighting or camera sequences, and how did you figure out a workaround? What was your favorite scene/setup, and again, what was involved? -Lew
  8. Tim- I actually came back downstairs because I just realized the opposite side - yeah, my total experiences and thoughts are based on using the lenses FOR their intended format sizes. Because yes, if you take a 50mm lens for 6x7, a 50mm lens for 645, a 50mm lens for 35mm, and a 50mm lens for 16mm, and capture them (if you magically find the adapters) on the 16mm film frame, you will end up with the same image. So yeah, you're right, I was looking at it backwards - trying to figure out how it would apply if an adapter could project the entire lens's image circle down to a smaller size captured by the recording format (in this case a 1/3" sensor). Basically, whether I could get the same framing with lenses designed for larger formats (a Mamiya 50mm for an RZ has an 81-degree angle of view, and a Nikon 50mm has a 46-degree angle of view - but given their target recording formats you will be physically closer or further away to get the same shot - hence the change in DoF). I haven't read your post yet because I'm sure you're right. So, bowing head, saying "thank you," and now I'm gonna go back and read your post. Again, sorry for the headache- -Lew
  9. Okay - first lemme say I get what you're saying. If the 12-120 were zoomed in to 25, it should have pretty much the same characteristics as the 25-250 at 25. I have noticed it to be different - that's what I'm saying, yes: that the bigger-format lens gave me a shallower Depth of Field. Yes, I'm arguing that the image-capturing area (the actual sensor or film behind the gate) itself is not "in itself" what determines the Depth of Field (I know you realize that as well - it's part of my misunderstanding of that statement that I want to understand your point, and why I'm still at it - undoubtedly very annoying, I know, sorry), because in my feeble mind it's merely where the image being projected by the lens "lands," regardless of how large an image circle the lens was designed to project - the above attached picture is what I'm getting at as far as that notion. If you attach a PL-mount adapter to any 16mm camera and mount a lens designed for a 35mm camera, it will project a much larger image circle than needed - resulting in a "cropping" of the final exposed image.
The back-theory behind my continued arguing is about framing/Field of View between different format cameras and their corresponding lenses - in the same tight room, with the same camera placement, you will end up with different focal lengths per format to achieve the same Field of View, and yes, I'm agreeing that the larger the intended format, the shallower your Depth of Field will get for that same Field of View (which is where I think I'm looking from the opposite end from what you're saying and not seeing the common ground). And if proven wrong, I will appropriately bow my head and say "thank you" - just saying that what I've noticed between the cameras was that, when going up to 35, the same framing of a shot gave me a shallower Depth of Field (which to me was a more pleasant image). Maybe this was entirely my perception, but it's what I saw and felt I got. Not mm, but framing. This is also why I feel Tim Dashwood had a noticeably shallow Depth of Field in comparison to using video lenses intended for the 1/3" format.
The main thought/problem/challenge in my responses is the desire to use lenses designed for larger formats/projection circles on a teeny little actual capturing area. Yes, I'm stuck on the lenses - based on them being designed for larger or smaller formats - which is also why I'm arguing about the adapter systems (Letus, Brevis, etc.), though. I'm about to get nit-picky - understand that this is for clarification - it's not the "adapter," but the lens in front, that projects the image onto a ground glass. And yes, I agree the lens in front was designed to project a much larger image circle - larger format. And the lens in front (PL-mount 35mm-based, Nikon, Canon, whatever you use), for the same Field of View as a stock video lens, will give you a shallower Depth of Field. The lens photographing the ground glass is not determining the Depth of Field, and neither is the CCD/CMOS sensor; the lens projecting onto the ground glass is determining that (the lens capturing/photographing/framing the ground glass itself is merely photographing a flat field, yes) - the point of those adapters is to use the FRONT lens's characteristics, which are not easily (if ever) achieved with the video lenses. The final recorded image will be exactly the same whether recorded on 2/3" or 1/3" CCDs - they are merely recording what the lens at the very front is projecting, and you merely frame the ground glass to get the same crop.
And when using the PL-mount adapter (going back to the original post) from JVC, if I were to slap my 25-250 on there, I would never be able to get a wide-angle view. The PL-mount adapter was intended for use with flavors of lenses designed for 16mm - not 35mm (about 4x larger projected area). Yes, it's very possible I'm arguing the same thing but from the opposite end as though it were different - I'm actually trying to figure that out over this thread, because I know it HAS become confusing, to me at least. Yes, I know there is too much misinformation around (especially on the web), which is why everyone here wants to make sure they have it right. I'm obviously a little confused and I want to make sure I have it right - sorry for the headache. -Lew
  10. "...does not alter the depth of field of the camera sensor because there is no rephotographing going on, only demagnification. This is physics." Hang on - first part incorrect (unless that isn't what you literally meant), second part right. The incorrect part (again, unless that isn't what you literally meant): no one is saying it alters the depth of field of the sensor, because depth of field is not dependent on the sensor, or even the sensor size - it is dependent on the physics of the lens being used (hence the second part being "right"). While I don't own my Eclair ACL anymore, I could tell a DEFINITE difference between the DoF characteristics of my Ang 12-120 and my 25-250 on an Arri 35III when the shots were framed the same. THIS is why I would prefer to use my lens designed for 35mm motion picture film rather than a lens designed for 16mm/S16mm motion picture film. The smaller the destination size (image sensor/CCD/film frame - doesn't matter) the lens is designed for, the deeper the DoF it will inherently have. The larger the destination size, the shallower the DoF. Ever used medium format lenses on an IMAX camera - or just on a medium format camera for that matter? It's really shallow and a nightmare to pull focus on the fly. And using a large format camera you have to work VERY hard at getting a head-and-shoulders shot completely in focus - that's a LOT of light to get a deep enough f-stop. It IS physics, but the sensor itself does not in any way define the DoF - the design of the lens being used does. Objects further away from the focal plane in my 12-120 were still fairly focused at T4, while the same shot (framing and distance from camera) with my 25-250 at T4 had those objects definitely more out of focus. BUT, at least using the 12-120 they would go out of focus MORE than with the lens that came with the HD110 - which is where the desire for a film-lens adapter comes in. If this weren't true, there wouldn't be this emerging business of trying to come up with the best one (P+S, RedRock, Letus, Brevis, etc). (There's a depth-of-field arithmetic sketch after this list for anyone who wants to run the comparison with numbers.) -Lew
  11. Ah yes - sorry - I get it now - it really IS that severe, but in a different way of gauging it. Well, looks like I either drop the notion of getting the PL adapter, or add the notion of getting a lens designed for 16mm cameras... Thanks Mitch- -Lew
  12. "I seem to get a lot of motion blur with action footage." I was wondering (quasi-asked above already) if you were shooting at 1/12th (which will be more blurred) or 1/24th? I own an HD110 myself, shoot 1/24th for action, and don't have much of a motion-blurring problem. Video doesn't exactly match film in the way it records motion (yes, stating the obvious - bear with me). Digital motion blur isn't the smooth transition that film gives you - too slow and it becomes almost like two semi-blurred still frames merged (hope that makes sense - it's kinda like a double exposure) - and on the opposite end, if you shoot with the 110 any faster than a 1/48th shutter speed then it really WILL look like a series of still frames. This can be very useful if you're going for a staccato effect on purpose, but if your goal is just to be able to see action clearly then it might be overkill and jarring for the viewer. (The shutter-time arithmetic behind this is sketched after this list.) Hope this helps- -Lew
  13. I fully understand that the optics of each lens used will not change - from focal length to depth of field. The answer I've literally been looking for is very simple, but (and please understand I've had this frustrating conversation many times with people) the answers given to me are just the ones above - versions of "focal length doesn't change" or "lens characteristics don't change." Here is the exact question I'd LOVE to have answered - looking at the above image for illustrative purposes, what might the crop be when using a 35mm motion picture film zoom lens (as opposed to one specifically designed for versions of 16mm - standard or Super16)? The crop - and ONLY the crop - is the question I have about this adapter. (There's a rough crop-factor calculation sketched after this list.) The above image is as literal as I could illustrate it - the size of the projection is for a MUCH larger imaging area - therefore cropped when using a smaller imaging DEVICE - get it? My worry is that after spending $4k just so I can slap my 25-250 on there, the actual image recorded by the camera will be such a teeny fraction of the whole lens's capable projection that it's worthless. A second question - which might clear up what I'm asking, or give a different perspective - is about the adapter itself: is there a relay/resizing/enlarging-or-shrinking lens element which, when using a lens designed for 35mm film, would resize the image to the appropriate imaging size of a 1/3" CCD? -Lew
  14. I'm having this same debate myself - do I go with the lens I already own (Ang 25-250 w/PL mount), or buy/rent one MADE for 16mm? I want to buy the PL-mount adapter, but I'm not sure just how severe a crop it will be. While a 50mm lens (or ANY lens, for that matter) won't "change focal length," there WILL be a crop - THIS is the question Ben wants answered. The shorthand is to say there is a focal magnification - like the 1.5x magnification factor describing 35mm lenses used on APS-C DSLRs - when in fact there's no change in focal length, but a cropping of the image - therefore "equivalent to" a focal-length change/magnification. (see attached image - if this works) The only way I'm going to know for sure how severe the cropping will be is by getting the adapter in my hands and slapping on my lens. I want the shallower DoF from my Ang lens, but I just may have to get a "made for 16mm" lens because the cropping might make mine unusable. -Lew
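
A quick storage sketch for the bit-rates argued over in posts 2-4 above. The only inputs are the 24, 35 and 50Mbps figures from those posts; the per-minute conversion is plain arithmetic, and the camera labels in the loop are just my shorthand for the models named there, not codec specs.

    def gb_per_minute(mbps):
        """Convert a video bit-rate in megabits per second to gigabytes per minute."""
        bits_per_minute = mbps * 1_000_000 * 60
        return bits_per_minute / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

    for label, rate in [("AF100-class 24Mbps", 24),
                        ("F3-class 35Mbps", 35),
                        ("XF300/305-class 50Mbps", 50)]:
        print(f"{label}: ~{gb_per_minute(rate):.2f} GB per minute")

So the jump from 24 to 50Mbps roughly doubles the storage per minute - noticeable, but hardly the deciding factor next to how the codec holds up in grading.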
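
On the exposure/development bookkeeping in post 6: labs and cinematographers usually talk in stops, so here is a minimal sketch of that arithmetic. The function name and the 500-speed example are mine and purely illustrative; the only claim is that each halving of the rated EI equals one extra stop of exposure, with development compensation quoted separately as N+/N- steps.

    import math

    def exposure_shift_stops(box_iso, rated_ei):
        """Stops of over(+) or under(-) exposure implied by re-rating a stock.
        Rating 500-speed film at EI 250 means giving it one extra stop of light."""
        return math.log2(box_iso / rated_ei)

    print(exposure_shift_stops(500, 250))   # +1.0 stop over (often paired with a pull, N-1)
    print(exposure_shift_stops(500, 1000))  # -1.0 stop under (push-processing territory)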
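
For the depth-of-field comparison running through posts 8-10, here is a sketch of the standard thin-lens hyperfocal/near/far formulas, so the "12-120 at 25 vs 25-250 at 50, same framing" argument can be run with numbers. The circle-of-confusion values, focal lengths, T-stop and 3 m focus distance below are illustrative assumptions, not anyone's spec.

    def dof_limits(focal_mm, f_number, focus_m, coc_mm):
        """Near/far limits of acceptable focus, thin-lens approximation."""
        f = focal_mm
        s = focus_m * 1000.0                       # work in millimetres
        H = f * f / (f_number * coc_mm) + f        # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)
        far = s * (H - f) / (H - s) if s < H else float("inf")
        return near / 1000.0, far / 1000.0         # back to metres

    # Same framing from the same spot: Super 16 needs roughly half the focal length
    # of 35mm.  Circles of confusion here are common nominal values, not gospel.
    print(dof_limits(25, 4, 3, 0.015))   # ~25mm on 16mm, T4, focused at 3 m
    print(dof_limits(50, 4, 3, 0.025))   # ~50mm on 35mm, T4, same field of view

With these example numbers the 35mm pairing comes out with a noticeably narrower zone of focus for the same framing and stop, which is the effect being described in those posts.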
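
On the shutter-speed point in post 12: the streak a moving subject leaves in a frame is roughly its speed across the frame multiplied by the time the shutter is open. A minimal sketch; the frame width and subject speed are made-up example values.

    def blur_pixels(speed_px_per_s, shutter_s):
        """Length of the motion-blur streak a moving subject leaves in one frame."""
        return speed_px_per_s * shutter_s

    # Example: subject crossing a 1280-pixel-wide frame in two seconds = 640 px/s.
    # At 24 fps, a 180-degree film shutter corresponds to a 1/48 s exposure.
    speed = 640
    for shutter in (1/12, 1/24, 1/48, 1/96):
        print(f"1/{round(1/shutter)}s shutter -> {blur_pixels(speed, shutter):.0f} px of blur")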
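
For the crop question in posts 13 and 14: the crop factor of a small capture area relative to a larger format is the ratio of their diagonals, and the "equivalent" focal length is the real focal length times that factor. The frame dimensions below are common nominal figures (a 1/3"-type sensor is roughly 4.8 x 3.6 mm), so treat the output as ballpark, not a spec sheet.

    import math

    def crop_factor(large_wh_mm, small_wh_mm):
        """Diagonal ratio of two formats = apparent focal-length multiplier."""
        return math.hypot(*large_wh_mm) / math.hypot(*small_wh_mm)

    SUPER35 = (24.9, 18.7)     # nominal Super 35 frame, mm
    SUPER16 = (12.5, 7.4)      # nominal Super 16 frame, mm
    THIRD_INCH = (4.8, 3.6)    # nominal 1/3"-type sensor, mm

    for name, fmt in [("Super 16", SUPER16), ('1/3" CCD', THIRD_INCH)]:
        cf = crop_factor(SUPER35, fmt)
        print(f'{name}: ~{cf:.1f}x crop vs Super 35 '
              f'(25mm here frames like ~{25 * cf:.0f}mm would on Super 35)')

Which puts the worry in post 13 into numbers: on a 1/3" chip the wide end of a 25-250 stops being anything like wide.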