Ryan Emanuel

  1. What stop do you need from what distance? If the softbox diffusion is heavy enough, there really shouldn't be heavy falloff from the center.
  2. Shadow quality depends on the ratio of source size to distance from the subject: a 4x4 at 5 ft will give the same shadow quality as an 8x8 at 10 ft.
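The ratio claim in that post can be sanity-checked numerically: shadow softness tracks the angle the source subtends at the subject, and doubling both size and distance leaves that angle unchanged. A minimal sketch (the function name and footage values are illustrative, not from the thread):

```python
import math

def subtended_angle_deg(source_size_ft, distance_ft):
    """Full angle a source of the given width subtends at the subject."""
    return math.degrees(2 * math.atan((source_size_ft / 2) / distance_ft))

a = subtended_angle_deg(4, 5)   # 4x4 frame at 5 ft
b = subtended_angle_deg(8, 10)  # 8x8 frame at 10 ft
print(f"{a:.1f} deg vs {b:.1f} deg")  # both ~43.6 deg, so equally soft shadows
```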
  3. When the analogue data becomes digitized into linear, I've read that there are a lot of unusable bits at the top end, but can someone explain the process of transforming from 12-bit linear to 10-bit log? Is there a function that gets applied to all of the data? Is 10-bit log still 12 bits of data?
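To the question above: yes, a transfer function (the camera maker's published log curve) is applied to every linear code value, and no, the result is not still 12 bits of data; it is a lossy requantization that spends the 10-bit codes roughly evenly per stop instead of wasting them on the brightest stop. A toy curve, not any real camera's spec (the black level and 12-stop range here are assumptions), might look like:

```python
import math

def linear12_to_log10bit(x, black=16, stops_of_range=12):
    """Map a 12-bit linear code (0..4095) to a 10-bit log code (0..1023).

    Toy curve: equal code-value spacing per stop above an assumed black
    level. Real camera curves add a linear toe near black; this just clips.
    """
    x = max(x, black + 1)                            # avoid log of <= 0
    stops = math.log2((x - black) / (4095 - black))  # 0 at clip, negative below
    code = (stops / stops_of_range + 1) * 1023
    return int(round(min(max(code, 0), 1023)))

print(linear12_to_log10bit(4095))  # 1023: sensor clip maps to the top code
print(linear12_to_log10bit(2064))  # one stop down uses ~85 fewer codes
```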
  4. I felt the same way, but for him to explain in detail you might need to know linear algebra, multivariable calculus, and some Python. Most filmmakers don't, and explaining the how might not be possible without the math. I think that's why he sticks to the bigger concepts and suggests that people research on their own.
  5. You should check out http://www.yedlin.net. There are definitely multiple levels to the image pipeline. You can do it in a very simple boilerplate fashion with good results, or you can do it completely custom. It depends on how much control you want and how deep you want to go, math- and coding-wise, into digital acquisition and display prep.
  6. Was finally able to test this yesterday on set. I had a Leko 15 feet away from a 4' frame of 216, compared against a 4x4 fabric LED 2 inches from the frame, and the inverse-square incident readings were exactly the same. Bye bye, book light!
  7. I'm not sure it helps; I don't know how they filled the card. Unless you have Magic Cloth, I don't know any diffusion that would fill the card evenly without a double break, unless the source was very far back. Full Grid still might need a double break with something light to make the point source light the frame evenly. An 8x8 LiteTile would be a cool comparison with a booklight, but if done correctly they'll look the same as far as softness is concerned.
  8. So then a booklight really has no utility, unless you need more output than a LiteMat/LiteTile can provide but don't have the distance to go straight through the diff evenly.
  9. Man, I would kill for a free hour in a black box studio. I just can't find anything that verifies that two- and three-dimensional sources work the exact same way. Every reference I see on the inverse square law talks about point sources with equal spherical radiation in all directions. In my mind, if you have a 4x4 even source and a point in space 2 feet from the center of the diff, the corners of the frame will be more than 2 feet away and will light that point less than the center does. Only when the point in space gets farther back will the distances from each point on the 4x4 be roughly equal. It can't follow the inverse square up close, right? Also, I get that the diff is the new source, but does it really not matter whether the source behind it was 100 feet away versus an LED blanket 1 inch from the diff? If the reading in both cases at 5 ft is a 2.8, must the reading at 10 ft be a 1.4? If the diff is a reset on the inverse square, does the distance from bounce sources not matter either, as long as the bounce is evenly illuminated?
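The thought experiment above can be checked without a black box studio by treating the 4x4 as a grid of small patches, each obeying the inverse square from its own distance. This is my sketch, not from the thread; the Lambertian emission assumption and patch resolution are mine:

```python
import numpy as np

def panel_illuminance(d, size=4.0, n=200):
    """Illuminance on-axis at distance d (ft) from a uniform size x size panel.

    Each patch contributes cos^2(theta)/r^2: one cosine for the patch's
    Lambertian emission, one for incidence on a flat meter facing the panel.
    """
    xs = np.linspace(-size / 2, size / 2, n)
    x, y = np.meshgrid(xs, xs)
    r2 = x**2 + y**2 + d**2
    return np.sum(d**2 / r2**2) * (size / n) ** 2

# Close in, falloff is much slower than inverse square; far out it converges.
for d in (1.0, 2.0, 8.0, 16.0):
    ratio = panel_illuminance(d) / panel_illuminance(2 * d)
    print(f"{d:>4} ft -> {2 * d} ft: drops {ratio:.2f}x (point source: 4.00x)")
```

Running it, the 1 ft to 2 ft ratio comes out well under 4x, matching the intuition that the corners' extra distance flattens near-field falloff, while by several panel-widths away the panel behaves like a point source.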
  10. I know the inverse square law is in reference to point-source radiation and you can calculate the dropoff by 1/d², but what about fully diffused two-dimensional sources like diff frames? If you have a source that is diffused by a 6x6 of Magic Cloth and the cloth is filled completely and evenly, should the light emitted from the diffusion fall off slower, faster, or at the same rate as the inverse square? How much does the distance between the unit and the diffusion matter for falloff if the cloth is evenly lit? Would it matter if it were a larger unit farther back or several units up close? I have a white apartment, so it's tough to do an objective test.
  11. It's just a lot of money. The two Jokers together will be about $20,000. I'm not familiar with the Chicago market, but you're going to want to see how many days you can bring them on as a kit fee, or rent them out, to get your money back. The two lights rent for about $375 together in LA; that's around 53 days, which is a sizable amount of working days depending on how often you work. Factor in insurance, maintenance, and repairs, and it's probably more like 60-65. If you have the days, it's worth it. ROI aside, the units are really good. I would get the lensless reflectors for both units. The Joker 1600 is actually brighter than an M18, if you have that. I would bypass the Aputures if you're going to invest in the Jokers; might as well get Quasar battery-powered units with honeycrates, which are more versatile. It's all up to your particular style, but I would drop the second Joker and get a LiteMat 4 with Snapgrids; they are probably the most useful smaller light out there. I don't like going on jobs without one. But if you like a harder front/fill light with the dramatic hard sports backlight, then it might not be useful. If you'd rather have the hard edge and a softer wraparound fill from the edge side, the eggcrated LiteMat comes in pretty handy.
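The break-even figures in that post work out roughly like this (the dollar amounts are the post's estimates; the 15% overhead factor is my guess at what it implies):

```python
purchase = 20_000          # two Jokers, per the post
day_rate = 375             # combined LA rental rate, per the post
breakeven = purchase / day_rate
print(round(breakeven))    # 53 rental days before any overhead

overhead = 1.15            # assumed 15% for insurance, maintenance, repairs
print(round(purchase * overhead / day_rate))  # 61 days, near the post's 60-65
```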
  12. I think the more accurate millennial answer would be, "Why aren't you using RGB LEDs with wireless control from your phone and painting via monitor on the day?" Also, for the ten-gel question, color reproduction is kind of dependent on LUTs. Are you doing an approximate Rec. 709 transform, or are you adding contrast and saturation in a wide-gamut space? Most of those gels will look like different colors depending on your post processing of the log. Storaro Green might start looking like cyan depending on the image pipeline.
  13. FOV is the term that confuses people, because others would argue that the focal length and the FOV do not change, only the crop factor. It's semantics, but that's definitely where people get confused. Separate question: I own a set of Veydra MFT primes, and the imaging circles supposedly cover s35mm. People use these lenses with the FS7, but when I compare frames between the Veydras on my Blackmagic Pocket 4K and my director's viewfinder set to s35mm, the frames line up between the viewfinder and the P4K. That leads me to believe the Veydra imaging circle is designed for MFT; the 25mm Veydra has no crop on the P4K. But then what is going on with these lenses on super 35mm? Can you really have an imaging circle where the FOV of a 25mm fits on an MFT sensor, but there's still enough extra room to cover s35mm? The reverse crop would be something like 0.70x, which seems like a lot of room before vignetting. Or is the lens company lying about the mm, and the true FOV is more like an 18mm, but they marked the lens 25mm?
  14. Still confusing, mostly because people use the term "field of view" differently. I'd like to try to explain my understanding without that term; please let me know where I'm wrong. I understand that the depth of field and the spatial rendering of a lens are fixed: regardless of the sensor size, those characteristics are part of the optics, not the camera. A 25mm will have the width, height, and spatial-distortion relationships between distances of a 25mm within its imaging circle. When the sensor is smaller than the imaging circle, there is a crop, a smaller cookie cut out of the 25mm imaging circle. It is not the same as shooting on a tighter mm: the distortion and the depth of field are those of the 25mm, just cropped in. The imaging circle the lens was designed for only affects the potential cropping; the full spatial characteristics of depth, width, and height of a 25mm can be projected onto different-sized imaging circles. You can have a 25mm with an imaging circle that fits a small sensor that sees the exact same frame as a 25mm designed for a big sensor. If you take a 25mm with an imaging circle designed for LF and put it on an s16 sensor, you are still looking through a 25mm, but with a crop-in equal to the ratio of the sensors' linear dimensions (widths or diagonals, not surface areas; the area ratio is the square of the crop factor). Now, saying a 25mm is a 25mm is a 25mm is right, but it's kind of a riddle that sidesteps the original question. You won't use a 25mm with an LF imaging circle on s16 the way you would on LF; the experience of shooting on the 25mm is different on the different sensors, because the crop is going to affect how far subjects are from the camera.
While you might have liked subjects 6 feet from the camera for mediums on a 25mm covering LF, once you use the s16 sensor with the LF lens, the crop will force you to move your subject back to, let's say, 12 feet for the medium. You will get the depth-of-field and distortion characteristics of a 25mm at 12 ft, but your frame will be a medium. The frame will definitely look different even though it's the same lens. The entire experience of shooting will be different, BUT it is not the same as a 50mm with focus set to 12 ft in terms of depth of field and distortion. It is important to know what imaging circle size the lens was designed for because of the potential cropping.
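The crop arithmetic in this explanation can be made concrete. A sketch computing the horizontal angle of view of a 25mm on several formats (sensor widths in mm are typical approximations I've assumed; exact values vary by camera):

```python
import math

SENSOR_WIDTH_MM = {"s16": 12.5, "MFT": 17.3, "s35": 24.9, "LF": 36.7}

def hfov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view for a rectilinear lens of the given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for name, width in SENSOR_WIDTH_MM.items():
    crop_vs_s35 = SENSOR_WIDTH_MM["s35"] / width  # linear ratio, not area
    print(f"25mm on {name}: {hfov_deg(25, width):5.1f} deg, "
          f"crop vs s35: {crop_vs_s35:.2f}x")
```

Note that the crop factor is the ratio of linear sensor dimensions; the surface-area ratio would be its square.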