Ryan Emanuel

Everything posted by Ryan Emanuel

  1. Man, I'm failing at this. Third time's a charm, maybe! https://th.bing.com/th/id/OIP.GQod-WeMpMnechUC0s_DaAHaDH?pid=Api&rs=1
  2. https://shotdeck.com/browse/stills#/s/kiss+kiss+bang+bang hopefully this works
  3. I have a pool party scene for a horror film coming up, and I love this shot from Kiss Kiss Bang Bang. How would you go about the underwater lighting?
  4. It's a long thread, so I'll just add a quick two cents. The point of the demo, as far as I'm concerned, is that "the camera is a tool" is not the way to think about digital. It's not a saw or a hammer; digital is a two-dimensional array of over a million measurement devices for light that can capture 68 billion unique measurements per pixel, 24 times per second. It's not a tool, it's big data, so the issues of color science are really a statistics and data science problem. A lot of people will say film is too complex for digital to replicate, but there are statistical machine learning algorithms for emulating a function that is too complex to model directly. That's basically what Yedlin is doing. If filmmakers open themselves to programming, digital signal processing, and data science, the only limit to what you can do with digital is your own programming skill. I know DaVinci has been mentioned a couple of times, but the math DaVinci uses makes it almost impossible to affect small color idiosyncrasies; it's better suited to global color grading. I think instead of clinging to film, filmmakers should ask for better digital tools, specifically interpolation algorithms, so you can make your own emulations from samples and targets and design a transform matrix iteratively and efficiently, so everyone has their own personal looks.
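A minimal sketch of the "samples and targets" idea in Python with NumPy: photograph a chart digitally (samples), scan the same chart from film (targets), and solve for a color transform by least squares. All the data below is made up, and a real emulation would need a nonlinear model (a 3D LUT, for instance) rather than a single affine matrix:

```python
import numpy as np

# Made-up stand-ins: 24 chart patches as shot digitally (samples) and the
# same patches as rendered by a film scan (targets), both 0-1 RGB floats.
rng = np.random.default_rng(0)
samples = rng.random((24, 3))
targets = np.clip(samples * 0.9 + 0.03, 0.0, 1.0)  # fake "film" response

# Fit an affine transform (3x3 matrix plus offset) by least squares.
A = np.hstack([samples, np.ones((24, 1))])   # append a constant column
M, *_ = np.linalg.lstsq(A, targets, rcond=None)

def apply_look(rgb):
    """Apply the fitted transform to an (N, 3) array of pixels."""
    rgb = np.asarray(rgb, dtype=float)
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M
```

The same fit-from-pairs loop scales up: more patches, a richer basis (polynomial terms, or LUT nodes), and you iterate until the transform hits your targets.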
  5. Bleached mus is probably the issue; it murders light, and depending on the source it produces a mixture of hard and soft light. If you see a hot spot in the muslin, there will be some hard light coming through. Personally I'm a fan of Magic Cloth: you have to try really hard to get a hot spot, and it doesn't completely kill the unit's output. Mus is kind of like silks: they sound beautiful ("soften the light with a silk"), but the fact is they're very inefficient diffusers. If the budget has some excess room, book-light through mus with big lights, but if the budget is tight, you've got to make each foot-candle count. I'd go a double break of 250 and Light Grid, or bounce off Ultrabounce, way before shooting through mus.
  6. If I'm understanding things correctly, depth of field, or more importantly blur circle size, is based on the opening size of the iris, the distance of the subject from the lens, and the distance of the out-of-focus objects from the lens. Longer focal lengths have larger pupil openings at the same f-stop: 100mm at f4 has a 25mm iris opening, while 50mm at f4 has a 12.5mm pupil. But if you open the 50mm to f2, it will have a 25mm pupil and produce the same blur circle sizes as the 100mm with focus set at the same distance. So the notion that a bigger sensor means thinner depth of field seems like the wrong emphasis. There are pros and cons for any focal length on any sensor size; what matters more is knowing how the variables influence one another so you can pick the right pairing for the project. I think it's a great time now, because with the large sensors on most modern cameras you can crop in and use different sensor sizes for different situations and still retain enough quality. Recently I've been experimenting with shooting Super 35 for wides and mediums on the 25mm and 35mm, then cropping the sensor to Micro 4/3 for close-ups and cutaways and staying on the 35mm instead of switching to the 50mm. Since the 50mm has thinner depth of field at the same f-stop, the close-up might require stopping down to get the subject completely in focus; then the lighting has to change, or ND has to be used for the wides, and either way the set needs to be lit for an extra stop. But if I just crop the sensor, the 35mm has the field of view of a 50mm, while the depth of field at, say, f2.8 on Micro 4/3 roughly equals f4 on Super 35, at a stop less light. So the deeper depth of field of that sensor-lens pairing saves the gaffer a stop of light for the scene. I'm just trying to show an example where bigger is not necessarily better: smaller sensors can produce the same depth of field with less light. That might come in handy.
Big sensors have thinner depth of field at the same f-stop; that might come in handy too. Also, when shooting wider lenses on a smaller format, barrel distortion can be corrected, and longer lenses on larger formats will have pincushion distortion, which can be corrected as well.
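The arithmetic behind that crop trick can be sketched in a few lines (the 1.43 crop ratio is an assumption based on typical Super 35 vs Micro 4/3 sensor widths, roughly 24.9mm vs 17.3mm):

```python
def entrance_pupil_mm(focal_mm, f_stop):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_mm / f_stop

def crop_equivalent(focal_mm, f_stop, crop_ratio):
    """Focal length and f-stop that give matched framing and depth of
    field after cropping the sensor by crop_ratio."""
    return focal_mm * crop_ratio, f_stop * crop_ratio
```

Here `entrance_pupil_mm(100, 4)` and `entrance_pupil_mm(50, 2)` both come out to 25mm, and `crop_equivalent(35, 2.8, 1.43)` lands at roughly a 50mm at f4, which is the stop-saving pairing described above.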
  7. I'm really just trying to understand Yedlin's article here: http://yedlin.net/lens_blur.html So if the exposures on the different sensor sizes are the same, were the Alexa examples from the article lit to f4 while the IMAX shots were lit to f11 to match the depth of field?
  8. I'm a little confused and wanted to ask for some clarification. From my understanding, the f-stop is the ratio of the focal length to the entrance pupil diameter. So a 50mm lens at f2 has an entrance pupil of 25mm, and a 100mm lens at f2 has an entrance pupil of 50mm; an f-stop's entrance pupil size is relative to the focal length, and f2 is a different-sized opening at different focal lengths. The reason a 50mm at f2 has the same exposure as a 100mm at f2 is that the 100mm projects a proportionally larger image. The pupil diameter is twice as big on the 100mm, so the area of the opening is four times bigger, letting in two extra stops of light, but the projected image of the 100mm is proportionally larger, so the exposure evens out. When comparing large formats against smaller formats, though, the sensor size is not fixed, so that extra light from the 100mm versus the 50mm would actually be captured by the sensor. So let's say you're comparing Micro 4/3 (2x crop) against full frame: to get the same angle of view, you need a 50mm to roughly match the 100mm on the full-frame camera. But f2 on those two lenses is not the same total amount of light. So will the full-frame camera be brighter at the same f-stop? Or will you need to adjust the f-stop proportionally to match the change in sensor size, so that f2 on the 50mm on Micro 4/3 would be around f4 on the 100mm on full frame to match exposure and depth of field? Thanks for the help.
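One way to see the standard answer to this, sketched in relative units: exposure per unit area of sensor depends only on the f-number, because the focal length cancels out of the pupil-area-over-image-area ratio.

```python
import math

def relative_image_illuminance(focal_mm, f_stop):
    """Light admitted (entrance pupil area) divided by projected image
    area; the image area scales with focal length squared."""
    pupil_area = math.pi * (focal_mm / (2 * f_stop)) ** 2
    image_area_scale = focal_mm ** 2
    return pupil_area / image_area_scale  # simplifies to pi / (4 * f_stop**2)
```

`relative_image_illuminance(50, 2)` equals `relative_image_illuminance(100, 2)`: both lenses put the same light per square millimeter on the sensor at f2, the full-frame sensor just collects that illuminance over four times the area. It's matching depth of field (and total light, via ISO) that pushes the full-frame lens toward f4, not matching exposure.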
  9. Is there any reason why dimming Quasars on a Leviton D4DMX would be bad for the units? It's just a much cheaper option than the RatPac LED dimmers for DMX. Thanks!
  10. Developing the network is all that matters; if you came out of film school without that, it's going to be tough. Just focus on building the network from the ground up. Instagram is key nowadays: take high-quality stills, follow directors and production companies, comment on their work, start discussions. It's like dating: start slow, and ask to work together after they've gotten to know you. Learning the skills won't really get you hired at the level you're at, but friendships will. Just know that a producer will hire a cinematographer with weaker work if they're easy to work with; if you can make people laugh, you'll most likely get rehired. At the top end, humor might not be more valuable than knowledge, but at the low to middle level it is. People are on set with each other up to six days a week, 12-16 hours a day. You can be the most knowledgeable person, but if you can't joke around and make small talk, things will always be tough. You can always learn through doing; don't spend the free time studying, spend it reaching out to new people. Go to the level right below your level of work to make quicker connections.
  11. What stop do you need, from what distance? If the softbox diffusion is heavy enough, there really shouldn't be heavy falloff from the center.
  12. Shadow quality depends on the ratio of the source's size to its distance from the subject. A 4x4 from 5 ft will produce the same shadow quality as an 8x8 from 10 ft.
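That equal-ratio claim is the same as saying the two sources subtend the same angle from the subject, which is quick to check:

```python
import math

def angular_size_deg(source_ft, distance_ft):
    """Angle the source subtends, as seen from the subject."""
    return math.degrees(2 * math.atan(source_ft / (2 * distance_ft)))

# A 4x4 at 5 ft and an 8x8 at 10 ft both subtend about 43.6 degrees,
# so the shadow transitions render the same.
```

Double the size and double the distance, and the ratio inside `atan` is unchanged, so any size/distance pair with the same ratio gives the same shadow quality.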
  13. When the analogue signal gets digitized into linear data, I've read that there are a lot of barely used code values at the top end. Can someone explain the process of transforming from 12-bit linear to 10-bit log? Is there a function that gets applied to all of the data? Does 10-bit log still hold 12 bits' worth of information?
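A toy sketch of the kind of function involved (this curve is invented for illustration; real cameras publish their own transfer functions, such as ARRI Log C or Sony S-Log, with carefully chosen constants):

```python
import math

def linear12_to_log10(code, black=0, white=4095):
    """Toy log encode: squeeze a 12-bit linear code value into 10-bit log.
    A generic curve for illustration only, not any camera's actual one."""
    x = max(code - black, 0) / (white - black)  # normalize to 0-1
    y = math.log2(x * 4095 + 1) / 12            # log2(4096) == 12, so y hits 1.0 at white
    return round(y * 1023)                      # quantize to 10 bits
```

So yes, it's just a function applied to every sample. And no, 1024 log codes can't preserve all 4096 linear levels: the curve spends its codes on the shadows and midtones, where linear encoding is starved and vision is most sensitive, while merging the many nearly indistinguishable linear levels in the highlights.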
  14. I felt the same way, but for him to explain in detail you might need to know linear algebra, multivariable calculus, and some Python. Most filmmakers don't, and explaining the how might not be possible without the math. I think that's why he sticks to the broader concepts and suggests that people research on their own.
  15. You should check out http://www.yedlin.net. There are definitely multiple levels to an image pipeline. You can do it in a very simple boilerplate fashion with good results, or you can do it completely custom. It depends on how much control you want and how deep you want to go, math- and coding-wise, into digital acquisition and display prep.
  16. I was finally able to test this yesterday on set. I had a Leko 15 feet away from a 4' 216 frame, compared it against a 4x4 fabric LED 2 inches from the frame, and the inverse-square incident readings were exactly the same. Bye bye, book light!
  17. I'm not sure it helps; I don't know how they filled the card. Unless you have Magic Cloth, I don't know any diffusion that would fill the card evenly without a double break, unless the source was very far back. Even Full Grid might need a double break with something light to make the point source light the frame evenly. An 8x8 LiteTile would be a cool comparison with a book light, but if done correctly they'll look the same as far as softness is concerned.
  18. So then a book light really has no utility unless you need more output than a LiteMat/LiteTile can provide, but at the same time don't have the distance to go straight through the diff evenly.
  19. Man, I would kill for a free hour in a black box studio. I just can't find anything that verifies that two- and three-dimensional sources work exactly the same way. Every reference I see on the inverse square law talks about point sources with equal spherical radiation in all directions. In my mind, if you have a 4x4 even source and a point in space 2 feet from the center of the diff, the corners of the frame will be more than 2 feet away and will light that point less than the center does. Only when the point moves further back do the distances from each spot on the 4x4 become roughly equal. It can't follow the inverse square up close, right? Also, I get that the diff becomes the new source, but does it really not matter whether the source behind it was 100 feet away versus an LED blanket 1 inch from the diff? If the reading in both cases at 5 ft is f2.8, must the reading at 10 ft be f1.4? And if the diff is a reset on the inverse square, does the distance from bounce sources not matter either, as long as they're evenly illuminated?
  20. I know the inverse square law refers to point source radiation, and you can calculate the dropoff as 1/d^2, but what about fully diffused two-dimensional sources like diff frames? If you have a source that is diffused by a 6x6 of Magic Cloth and the card is filled completely and evenly, should the light emitted from the diffusion fall off slower, faster, or at the same rate as the inverse square? How much does the distance between the unit and the diffusion matter for falloff if the card is evenly lit? Would it matter if it was a larger unit further back or several units up close? I have a white apartment, so it's tough to do an objective test.
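One way to sanity-check this without a black box studio is numerically: treat the frame as a grid of tiny patches, each individually obeying the inverse square law (plus the two Lambertian cosine factors for tilt), and sum their contributions at the meter. A sketch, in arbitrary units:

```python
def panel_illuminance(side_ft, dist_ft, n=200):
    """On-axis illuminance from an evenly lit, Lambertian square panel,
    approximated by summing n*n small patches."""
    step = side_ft / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * step - side_ft / 2
            y = (j + 0.5) * step - side_ft / 2
            r2 = x * x + y * y + dist_ft * dist_ft
            # cos(theta) at both the patch and the meter is dist/sqrt(r2),
            # so each patch contributes dist^2 / r^4 times its area.
            total += (dist_ft * dist_ft) / (r2 * r2) * step * step
    return total
```

For a 4x4 frame, doubling the distance from 2 ft to 4 ft loses noticeably less than the 4x the inverse square predicts, while from 20 ft to 40 ft the ratio comes out at essentially 4x. So a big frame falls off slower than a point source up close and behaves like one once you're several frame-widths away. And in this model, what's behind the diff only matters insofar as it changes how evenly the frame is lit.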
  21. It's just a lot of money. The two Jokers together will be about $20,000. I'm not familiar with the Chicago market, but you're going to want to see how many days you can bring them on as a kit fee, or rent them out, to get your money back. The two lights rent for about $375 together in LA; that's around 53 rental days, which is a sizable number of working days depending on how often you work. Add insurance, maintenance, and repairs, and it's probably more like 60-65 days. If you have the days, it's worth it. ROI aside, the units are really good; I would get the lensless reflectors for both. The Joker 1600 is actually brighter than an M18, if you have one of those. I would skip the Aputures if you're going to invest in the Jokers; you might as well get battery-powered Quasar units with Honeycrates instead. More versatile. It's all up to your particular style, but I would drop the second Joker and get a LiteMat 4 with a Snapgrid; they're probably the most useful smaller lights out there, and I don't like going on jobs without one. But if you like a harder front/fill light with the dramatic sports-style hard backlight, it might not be useful. If you'd rather have the hard edge and a softer wraparound fill from the edge side, the egg-crated LiteMat comes in pretty handy.
  22. I think the more accurate millennial answer would be, "Why aren't you using RGB LEDs with wireless control from your phone, painting via the monitor on the day?" Also, on the 10-gel question: color reproduction is kind of dependent on the LUT. Are you doing an approximate Rec.709 transform, or are you adding contrast and saturation in a wide-gamut space? Most of those gels will look like different colors depending on your post-processing of the log. Storaro Green might start looking like cyan depending on the image pipeline.