Ryan Emanuel

Basic Member
  • Content Count

    60
  • Joined

  • Last visited

1 Follower

Profile Information

  • Occupation
    Cinematographer
  • Location
    Los Angeles

Recent Profile Visitors

2615 profile views
  1. It's a long thread so I'll just add my quick two cents. The point of the demo, as far as I'm concerned, is that "the camera is a tool" is not the way to think about digital. It's not a saw or a hammer; digital is a two-dimensional array of over a million measurement devices for light that can capture 68 billion unique values per pixel, 24 times per second. It's not a tool, it's big data, so the issues of color science are really a statistics and data science problem. A lot of people will say film is too complex for digital to replicate, but there are statistical machine learning algorithms for emulating a function that is too complex to write down directly. That's basically what Yedlin is doing. If filmmakers open themselves to programming, digital signal processing, and data science, the only limit to what you can do with digital is your own programming skill. I know DaVinci has been mentioned a couple of times, but the math DaVinci uses makes it almost impossible to affect small color idiosyncrasies; it's better suited to global color grading. Instead of clinging to film, I think filmmakers should ask for better tools for digital, specifically interpolation algorithms, so you can build your own emulations from samples and targets and design a transform matrix iteratively and efficiently, so everyone has their own personal look.
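     To make the samples-and-targets idea concrete, here's a minimal sketch in Python, assuming numpy; the RGB pairs are made up, and it uses a closed-form least-squares solve rather than a full iterative optimizer:

     ```python
     import numpy as np

     # Hypothetical sample/target pairs: `samples` are RGB patches measured
     # from the digital camera, `targets` are the values you want them to
     # become (e.g., matching patches from a film scan). Data is made up.
     samples = np.array([
         [0.20, 0.18, 0.15],
         [0.45, 0.40, 0.32],
         [0.70, 0.65, 0.55],
         [0.10, 0.30, 0.50],
         [0.60, 0.20, 0.25],
     ])
     targets = np.array([
         [0.22, 0.17, 0.13],
         [0.48, 0.39, 0.30],
         [0.72, 0.66, 0.52],
         [0.12, 0.28, 0.47],
         [0.63, 0.19, 0.22],
     ])

     # Solve samples @ M ~ targets for a 3x3 matrix M in the least-squares sense.
     M, _, _, _ = np.linalg.lstsq(samples, targets, rcond=None)

     def apply_look(rgb, M):
         """Apply the fitted 3x3 transform to an Nx3 array of RGB values."""
         return np.asarray(rgb) @ M

     print(apply_look(samples, M))  # should land close to `targets`
     ```

     A single 3x3 matrix can only capture global color shifts; the small idiosyncrasies would need something with more degrees of freedom, like a 3D LUT fit the same way.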
  2. Bleached mus is probably the issue; it murders light, and depending on the source it produces a mixture of hard light and soft light. If you see a hot spot in the muslin, there will be some hard light coming through. Personally I'm a fan of Magic Cloth: you have to try really hard to get a hot spot, and it doesn't completely kill the unit. Mus is kind of like silks, they sound freakin' beautiful ("soften the light with a silk"), but the fact is they are very inefficient diffusers. If the budget has some excess room, book-light through mus with big lights, but if the budget is tight you gotta make each foot-candle count. I'd go double break 250 Litegrid, or bounce off ultrabounce, way before shooting through mus.
  3. If I'm understanding things correctly, depth of field, or more importantly blur circle size, is based on the opening size of the iris, the distance of the subject from the lens, and the distance of the out-of-focus objects from the lens. Longer focal lengths have larger pupil openings for the same f-stop: 100mm at f4 has a 25mm iris opening, 50mm at f4 has a 12.5mm pupil. But if you open the 50mm to f2 it will have a 25mm pupil and produce the same blur circle sizes as the 100mm with focus set at the same distance. So the notion that a bigger sensor means thinner depth of field seems like the wrong emphasis. There are pros and cons for any focal length on any sensor size; it's more important to know how the variables influence one another so you can pick the right pairing for the project. I think it's a great time now, because with the larger sensors on most modern cameras you can crop in and use different sensor sizes for different situations and still retain enough quality. Recently I've been experimenting with shooting Super 35 for wides and mediums with the 25mm and 35mm, then cropping the sensor to Micro 4/3 for close-ups and cutaways and staying on the 35mm instead of switching to the 50mm. Since the 50 will have thinner depth of field at the same f-stop, the close-up might require stopping down to get the subject completely in focus; then the lighting has to change or ND has to be used for the wides, and either way the set needs to be lit for an extra stop. But if I just crop the sensor, the 35mm will have the field of view of a 50mm, and for depth of field, roughly f2.8 on Micro 4/3 will equal f4 on Super 35 at the same exposure. So the deeper depth of field of that sensor-lens pairing saves the gaffer a stop of light for the scene. I'm just trying to show an example where bigger is not necessarily better: smaller sensors can produce the same depth of field with less light, and that might come in handy. Big sensors have thinner depth of field at the same f-stop, and that might come in handy too. Also, when shooting wider lenses on a smaller format, barrel distortion can be corrected, and longer lenses on larger formats will have pincushion distortion, which can be corrected as well.
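     A quick sketch of that arithmetic in Python; the ~1.4x Super 35 to Micro 4/3 crop factor is an assumption based on nominal gate widths:

     ```python
     # Entrance pupil diameter is focal length divided by f-number.
     def entrance_pupil_mm(focal_length_mm, f_stop):
         return focal_length_mm / f_stop

     print(entrance_pupil_mm(100, 4))  # 25.0 mm
     print(entrance_pupil_mm(50, 4))   # 12.5 mm
     print(entrance_pupil_mm(50, 2))   # 25.0 mm -- same pupil as the 100mm at f4

     # Cropping Super 35 to a Micro 4/3-sized window scales the effective
     # field of view and depth of field by the sensor-size ratio (~1.4x here).
     crop = 1.4
     print(35 * crop)    # ~49 -- the 35mm frames like a ~50mm
     print(2.8 * crop)   # ~3.9 -- f2.8 behaves like ~f4 for depth of field
     ```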
  4. I'm really just trying to understand Yedlin's article here: http://yedlin.net/lens_blur.html. So if the exposure on the different sensor sizes is the same, then were the Alexa examples from the article lit to an f4 while the IMAX shots were lit to an f11 to match the depth of field?
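     For what it's worth, that pairing roughly matches the diagonal ratio of the two formats; a back-of-the-envelope check in Python, with the gate dimensions as approximate assumptions:

     ```python
     import math

     # Approximate gate sizes (assumptions): Super 35 ~24.9 x 18.7 mm,
     # IMAX 15/70 ~70.4 x 52.6 mm.
     def diagonal_mm(width, height):
         return math.hypot(width, height)

     crop = diagonal_mm(70.4, 52.6) / diagonal_mm(24.9, 18.7)
     print(crop)       # ~2.8
     print(4 * crop)   # ~11.3 -- f4 on Super 35 lands near f11 on IMAX for matched DOF
     ```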
  5. I'm a lil confused and wanted to ask for some clarification. From my understanding, the f-stop is calculated as the ratio of the focal length divided by the entrance pupil diameter. So a 50mm lens at f2 has an entrance pupil of 25mm, and a 100mm lens at f2 has an entrance pupil of 50mm; entrance pupil size for a given f-stop is relative to the focal length, so an f2 is a different size opening at different focal lengths. The reason a 50mm at f2 has the same exposure as a 100mm at f2 is that the 100mm proportionally projects a larger image: the pupil diameter is twice as big on the 100mm, so the area of the opening is 4 times bigger, letting in 2 extra stops of light at f2, but the projected image of the 100mm is proportionally larger, so the exposure evens out. When comparing large formats with smaller formats, though, the sensor size is not fixed, so that extra light from the 100mm vs the 50mm would actually be captured by the sensor. So let's say you are comparing Micro 4/3 (2x crop) with full frame: to get the same angle of view you need a 50mm to roughly match the 100mm on the full frame camera, but f2 on those two lenses will not pass the same total amount of light. Will the full frame camera be brighter at the same f-stop? Or do you need to adjust the f-stop proportionally to the change in sensor size, so that f2 for the 50mm on Micro 4/3 would be around f4 for the 100mm on full frame to match exposure and depth of field? Thanks for the help.
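     Here's how I understand the variables separating out, sketched in Python; the sensor dimensions are approximate and the functions are just proportionalities, not real photometry:

     ```python
     # f-stop sets illuminance per unit of sensor area, so f2 meters the
     # same on any format; what changes is total light collected and DOF.
     def relative_exposure(f_stop):
         """Image-plane illuminance scales with 1/N^2, independent of format."""
         return 1 / f_stop**2

     def relative_total_light(f_stop, sensor_area_mm2):
         """Total light gathered = illuminance x sensor area."""
         return relative_exposure(f_stop) * sensor_area_mm2

     M43_AREA = 17.3 * 13.0   # Micro 4/3, approx mm^2
     FF_AREA  = 36.0 * 24.0   # full frame, approx mm^2

     # 50mm f2 on Micro 4/3 vs 100mm f2 on full frame:
     print(relative_exposure(2))   # 0.25 on either format -- same meter reading
     # ...but full frame gathers ~4x the total light (with thinner DOF):
     print(relative_total_light(2, FF_AREA) / relative_total_light(2, M43_AREA))  # ~3.8
     # Stopping the 100mm down to f4 matches DOF and total light instead:
     print(relative_total_light(4, FF_AREA) / relative_total_light(2, M43_AREA))  # ~1.0
     ```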
  6. Is there any reason why dimming Quasars on a Leviton D4DMX would be bad for the units? It's just a much cheaper option than the RatPac LED dimmers for DMX. Thanks!
  7. Developing the network is all that matters; if you came out of film school without that, it's gonna be tough. Just focus on building the network from the ground up. Instagram is key nowadays: take high-quality stills, follow directors and production companies, comment on their work, start discussions. It's like dating, start slow, ask to work together after they have gotten to know you. Learning the skills won't really get you hired at the level you're at, but friendships will. Just know that a producer will hire a cinematographer with less impressive work if they are easy to work with; if you can make people laugh, you'll most likely get rehired. At the top end humor might not be more valuable than knowledge, but at the low to middle level it is. People are on set with each other up to 6 days a week, 12-16 hours a day. You can be the most knowledgeable person, but if you can't joke around and make small talk, things will always be tough. You can always learn through doing, so don't spend the free time studying, spend it reaching out to new people. Go to the level right below your level of work to make quicker connections.
  8. What stop do you need from what distance? If the softbox diffusion is heavy enough, there really shouldn't be heavy falloff from the center.
  9. Shadow quality scales with the ratio of source size to distance from the subject: a 4x4 from 5 ft will give the same shadow quality as an 8x8 from 10 ft.
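     That rule of thumb falls out of the angle the source subtends at the subject; a quick check:

     ```python
     import math

     # Apparent softness tracks the angle the source subtends at the subject.
     def subtended_angle_deg(source_size_ft, distance_ft):
         return math.degrees(2 * math.atan(source_size_ft / (2 * distance_ft)))

     print(subtended_angle_deg(4, 5))    # ~43.6 degrees for a 4x4 at 5 ft
     print(subtended_angle_deg(8, 10))   # ~43.6 degrees for an 8x8 at 10 ft -- same
     ```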
  10. When the analog signal gets digitized into linear, I've read that a lot of code values get wasted on the top end. But can someone explain the process of transforming from 12-bit linear to 10-bit log? Is there a function that gets applied to all of the data? Is 10-bit log still 12 bits of data?
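     To make the question concrete, here's a toy log encode in Python; this is emphatically not any real camera's curve, just the general shape of applying one function to every linear value and re-quantizing:

     ```python
     import numpy as np

     # Map 12-bit linear code values (0..4095) to 10-bit log codes (0..1023)
     # with a log2-shaped curve, spending fewer codes on the highlights.
     def linear12_to_log10(x):
         lin = np.asarray(x, dtype=np.float64) / 4095.0   # normalize to 0..1
         log = np.log2(1.0 + 255.0 * lin) / 8.0           # log-shaped 0..1 -> 0..1
         return np.round(log * 1023).astype(int)          # quantize to 10 bits

     codes = linear12_to_log10(np.arange(4096))
     print(codes.min(), codes.max())   # 0 1023
     print(len(np.unique(codes)))      # at most 1024 distinct codes survive
     ```

     In this sketch the result is genuinely 10 bits per sample; the curve just distributes the 1024 codes more evenly across the stops before the extra linear precision is thrown away.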
  11. I felt the same way, but for him to explain in detail you might need to know linear algebra, multivariable calculus, and some Python. Most filmmakers don't, and explaining the how might not be possible without the math. I think that's why he sticks to the bigger concepts and suggests that people research on their own.
  12. You should check out http://www.yedlin.net. There are definitely multiple levels to an image pipeline. You can do it in a very simple boilerplate fashion with good results, or you can do it completely custom. It depends on how much control you want and how deep you want to go, math- and coding-wise, into digital acquisition and display prep.