
Gabriel Devereux

Basic Member
  • Posts

    106
  • Joined

  • Last visited

Posts posted by Gabriel Devereux

  1. How meaningful is a native ISO... or how every camera has one. 
    Theoretically, isn't ISO just a standardised speed rating for a camera? The idea of one setting being 'native' and giving peak performance is, as David says, perceptual.
    I remember it being explained to me that 'native ISO' means the image/signal has only undergone analogue amplification. That is of course wrong, as almost all useful images that have been demosaiced have had some digital gain introduced (an example being 1.64x on the red channel at 5600K for Alexa (ALEV) sensors).
    As for the idea that the 'native' setting is the one where the applied gain is purely analogue: even non-DGA Sony sensors, from my understanding, typically employ a comparator at the end of each column (typically where substantial analogue amplification takes place prior to the ADC). If a signal falls under a certain value, say a hypothetical 4 where the ADC or line saturates at a value of 10, it will apply an extra 6dB or so of gain (2x) to mitigate quantisation noise.
    With the amount of image processing in today's cameras, I think the idea of a 'native' ISO is nice, but it's not really a meaningful specification, outside of identifying the one high and one low analogue gain setting in a dual-'native' design where the signal is untouched and reconstructed at that level. The only aid I see is knowing that at a given setting you have a relatively even distribution of signal above and below middle grey. Even then, as said above, the noise floor of any imaging sensor is measurable but different from design to design, and even from location to location on the sensor, with thermal noise, dark current noise, reset noise and so on eating away at the bottom of your signal, so you may not want to distribute it evenly. ISO is similar: a perceived standard in terms of SNR that is really the outcome of numerous variables.
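    To make the quantisation point above concrete, here is a toy Python sketch (all numbers hypothetical, no real sensor implied): a weak signal is digitised by a 12-bit ADC once as-is and once after an extra 6dB (2x) of analogue gain that is divided back out afterwards. The extra gain roughly halves the quantisation error; read noise and the lost headroom are ignored.

      import numpy as np

      rng = np.random.default_rng(0)

      full_scale = 4096                        # hypothetical 12-bit ADC
      signal = rng.uniform(2, 6, 100_000)      # weak analogue signal, far below full scale

      def quantise(x):
          # ideal ADC: round to the nearest code, clip at the rails
          return np.clip(np.round(x), 0, full_scale - 1)

      direct  = quantise(signal)               # digitise the weak signal directly
      boosted = quantise(signal * 2.0) / 2.0   # +6dB analogue gain, undone digitally

      print("RMS error, no extra gain :", np.sqrt(np.mean((direct  - signal) ** 2)))
      print("RMS error, +6dB analogue :", np.sqrt(np.mean((boosted - signal) ** 2)))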
    Sorry for the ramble.

  2. 1 hour ago, aapo lettinen said:

    We had two ursa mini g2 's on a show last Spring and one of them had to be immediately sent back because it was broken right out of the box (the one I talked about) . The other camera worked with minor faults like some kind of loose connection in the battery plate etc. but it worked otherwise pretty OK for all the shooting days except it switched of unexpectedly many times and had to be rebooted. The one we got as a replacement for the faulty-right-out-of-the-box camera was a bit unresponsive at times but after there was a theatre blood incident where the camera operator and the G2 got heavy splashback from a blood effect and were all dripping wet from the theatre blood, the G2 mystically improved and started to perform very reliably after the blood was wiped off! 

    So if wanting to have a very reliable Ursa Mini G2 one seemingly needs to send the first one back and get a semi faulty replacement, then splash it with fake blood couple of times to repair the bad manufacturing quality and then it is a keeper! Never laughed as hard in my life after finding out that the camera actually got better when it got some blood all over it ? 

    Likewise re G2 and G1

    I once went through three UMPs in three months. Sadly, in search of a 'good deal' two years ago, all were out of warranty (by about a week).

    The third one, after a $3000 AUD PCB replacement, still works today. However, I baby it, and on bigger shoots I rent a more reliable camera.

  3. 1 hour ago, Tyler Purcell said:

    The 12k is an amazing design, you can use the full imager to capture at any frame size. So even though it does have a color pattern like a Bayer pattern, it does not however force cropping. So you can shoot at 8k for instance, using the full imager and get a full RGB image, rather than half res on the R and B channels like a bayer pattern imager. In fact, the imager at 8k looks much better than it does at 12k. Being able to get a 12 bit full 444 image from the camera without sacrifice is huge. From my knowledge, nearly all other cameras at full resolution are using a Bayer pattern which uses half resolution on the R and B channels. 

    Correct me if I'm wrong -
    The Bayer CFA has twice as many green photosites because our photopic vision is most sensitive in the green (around 550nm) frequency band.


    With this, let's look at sensitivity and data collection from a Bayer CFA in an incredibly simplistic sense, and treat colour science and colour rendition in a similarly simplistic fashion.
    My eye perceives incoming light with a certain value in each RGB frequency band, an amount of energy/photons in each. This theoretical ray of light has a value of 20 in the red channel, 20 in the blue channel and 20 in the green channel (emphasising that this is an extreme hypothetical).
    For this example, attributes such as the density and efficiency of each filter in the CFA pattern will be ignored: each filter is even across its respective frequency band and as efficient as possible (the same goes for the QE of the photosites below).
    So with this, the data collected from the imager should be R20, G20, B20, G20, and after demosaicing (I'm tempted to attempt a calculation using a relatively basic bicubic interpolation algorithm, but I think it'd be unnecessary with such simplistic numbers) the general gist is that the two green samples make the green channel more 'sensitive', giving a higher value than the others; the hypothetical pixel created would be around R20 G40 B20. The hope is that my eye, my photopic response, would achieve something similar with its increased sensitivity in the same frequency band: if I took the same ray of light, the same amount of photons, and somehow got an RGB readout from my eye like a camera, the values would be similar.
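    Just to make the sampling side of that hypothetical concrete, here is a tiny Python sketch (RGGB layout assumed, demosaicing left aside): it mosaics the flat R20/G20/B20 scene and counts how many photosites land on each channel, showing the 2:1 green weighting of the Bayer CFA.

      import numpy as np

      scene = np.full((4, 4, 3), 20.0)        # flat hypothetical scene: R=G=B=20 everywhere

      bayer = np.array([[0, 1],               # R G   (RGGB tile; 0=R, 1=G, 2=B)
                        [1, 2]])              # G B
      pattern = np.tile(bayer, (2, 2))        # repeat the tile across the 4x4 patch

      # each photosite records only the channel its filter passes
      mosaic = np.take_along_axis(scene, pattern[..., None], axis=2)[..., 0]
      print("raw mosaic values:\n", mosaic)

      for ch, name in enumerate("RGB"):
          print(f"{name}: {np.sum(pattern == ch)} of {pattern.size} photosites, value {scene[0, 0, ch]:.0f}")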


    Cameras are weird. The goal, from my understanding, is not to record a 'true' RGB value from incoming light, but to match or come close to what our photopic vision would make of the same information. With this, I do wonder what kind of nifty amplification has been applied to the green channel on the UMP 12K or other RGBW-CFA cameras in an attempt to match this. I do believe that if they captured all RGB values at 'full resolution', sampling all three at the same sensitivity, the image produced would look a little weird.

  4. 1 hour ago, Phil Rhodes said:

    I think the idea is that it has such overwhelming resolution that the sharpness compromises of a mosaic sensor are overcome and even 8K material is so oversampled that it approaches the Nyquist limit of the frame dimensions in terms of sharpness and detail. In many circumstances, the factor limiting resolution is not the sensor nor the quality of the lens, but the diffraction limit of the lens, which is geometrically determined and therefore exactly the same for a low-cost stills zoom as it is for a Leitz Cine Prime. The point isn't really 12K. The point is it has much more than you need for any reasonably foreseeable circumstance.

    Yes, smaller photosites are noisier, which compromises dynamic range, but that's sometimes a rather misunderstood metric. If you take an HD area from the 12K sensor, it will be physically smaller than the HD area of a 4K sensor, and it will be noisier. But if you take the whole 12K sensor and scale it to HD, you gain back much of the loss. Not all of it, of course, because more pixels have more gap around them, so your fill factor becomes lower and you're using less sensitive area overall, but this isn't quite as simple as is often assumed.

    The big problem with the 12K is that using it to its full potential locks you into a Blackmagic Raw workflow. This, again, is not quite as simple a problem as we might think; generating 12K ProRes would be an impractically-huge processing load even if anything supported it, and scaling that vast sensor down to saner resolutions is hard work too. I do think they could have made slightly more effort in this regard, though; I think this is what puts most people off. Blackmagic Raw or bust, really.

    To expand and question, now that we've had time to somewhat dissect the image and its pros and cons:
    I find the idea (potentially the wrong term) of a sensor out-resolving a lens flawed. As you say above, the photosite pitch, with that total pixel count crammed onto the sensor, exceeds the resolving power of nearly all, if not all, optical systems, especially in realistic situations, and even with a hypothetical perfect lens.
    The main issue argued with the above is low-capacity photosites: easily saturated, reduced total camera latitude, etc. What you say about noise makes logical sense, and I can understand, to a certain extent, the single white/clear photosite in each 'matrix' acting as a higher-gain channel to interpolate shadow information and improve dynamic range. But the compromises are endless (yes, all imaging and optical systems are a series of compromises). With the rather sudden and rigid highlight roll-off that is especially apparent with RGB clipping, the lack of workflow infrastructure around the full resolution, and the fact that downsampling is a must to even compete on image fidelity, they haven't really made a 12K camera; the selling point of the camera seems redundant.
    To return to my initial statement about a sensor out-resolving a lens: maybe I'm being a fool, but from my rather limited understanding of imaging sensor engineering, the ARRI, with its 8.25µm (diagonal) photosite size/pixel pitch, is in near-perfect harmony with the majority of its optical systems, a perfect example being the Master Prime. I believe that at MTF(74) at 70lp/mm it resolves an Airy disk of around the same size as the photosite. It's relatively balanced, so why compromise further on photosite size for pixel count? It almost seems as if they're attempting to reproduce the qualities of multiple channels (with different gains) from each photosite, but spatially, which compromises, as you say above, the total area to sample from. That realistically results in a lower-fidelity image (in terms of SNR and other aberrations), even downsampled to 8K, at a 1:1 pixel ratio with its competitors.
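    As a back-of-the-envelope check on the Airy disk point, here is a short Python sketch using the standard first-minimum diameter of 2.44 x wavelength x f-number at 550nm (the f-stops are arbitrary examples, and the pitches are the 8.25µm and 2.2µm figures discussed above, not manufacturer data):

      # Airy disk diameter (first minimum) vs the two pixel pitches discussed above
      wavelength_um = 0.55                          # green light, 550 nm
      pitches_um = {"ALEV (Alexa)": 8.25, "UMP 12K": 2.2}

      for n in (1.4, 2.8, 5.6, 11):
          airy_um = 2.44 * wavelength_um * n        # diffraction-limited spot size
          ratios = ", ".join(f"{name}: {airy_um / pitch:.1f}x pitch"
                             for name, pitch in pitches_um.items())
          print(f"f/{n}: Airy disk ~ {airy_um:.1f} um ({ratios})")

    From around f/2.8 upwards the diffraction spot is already larger than the 2.2µm pitch, which is the 'out-resolving the lens' point above.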

     

  5. Just got off a shoot with 13 or so FX6s, an FX9 and a few A7s. I needed all cameras to output a logarithmic signal for me to monitor, while the rest of video village needed to see a regular Rec709 signal (about 5 or so monitors, 2 on switchers).
    I personally went with the switchers going through inline LUT boxes (Teradek COLORs), while the rest of the monitors were looped through BM 12G HDMI-to-SDI converters, a few through a Teranex and occasionally an UltraStudio rack.
    In your scenario you can load custom LUTs onto the 12G converters, and I believe the 6G and possibly the 4G converters as well (I'm unsure). They have a small form factor and are relatively cheap.
    Teradek COLORs are good when partnered with a live-grading solution, but in your case I don't really think that's necessary.
    I did end up using an older BM 3G I/O box; it was able to take multiple channels and worked with Pomfort, so in my situation it was perfect. But yeah, for something small the newer 6G-12G converters are worth looking into.

    Apologies for the terrible wording; written on a phone.

  6. As long as you're willing to pack accordingly.

    A few times around Australia I flew with a shot bag for my monitor stand. A friend of mine would travel with one as well while we waited for either freight or for the truck to arrive with the rest of our gear.

    I always kept it in one of the Pelicans, though. I'm not sure travelling with them loose would be the greatest idea; it'd be an interesting check-in experience.

  7. Hi,

     

    I'm currently in need of an IR-converted mirrorless camera (preferably Sony) to rent this coming weekend in Australia. I've seen them all around the US, some even CFA-stripped (which would be preferable).

    Preferably it would be sensitive anywhere from 700/750nm up to 810-850nm for deep IR recording.

    The goal is to film 'invisible light' using an IR-emitting source that cameras are far more sensitive to than the eye.

     

    Thanks

    Gabriel

  8. I'm currently working on a moving production while needing to work with a colourist for another. 

    Due to our constant travelling it is impossible for me to have adequate equipment for proper remote grading. My solution is to view graded material through an Atomos Shinobi monitor (calibrated daily with a probe) from my MacBook Pro 2018, purely for checking values, not judging the overall grade.

    My goal is to take an output from DaVinci through an UltraStudio Mini 3G monitor I/O into the SDI input of the monitor.
    Will this work? Am I overlooking something? As said earlier, my goal is just to view values in a somewhat accurate manner. The deliverables I'm getting are in Rec709, Gamma 2.4.

    Thanks
    Gabriel
     
     

     

  9. As said above, everything LED is missing spectral content. That's OK as long as you can control which camera systems are used under those limited frequency bands of light. Now, I'm going to assume (purely because doing otherwise would be almost absurd) that you can't tell the studio to shoot on only one particular camera system. With this, unless you go down the route of, say, Kino Freestyle panels (which attempt to match the spectral response of numerous camera systems), you may want to give up the versatility of RGB-based LEDs (RGBWW, RGBW, etc.) for plain white LEDs (either CW or WW) for the sake of somewhat better colour accuracy.

    As said above, Aputure and Nanlite are good when looking at their simple single-white-LED fixtures (WW in this instance, if you're going to use large tungsten fixtures for sunlight), so they may be your best option. White-light LEDs, while lacking in versatility, have the most uniform spectral power distribution, emitting a single wider frequency band that will harmonise with more camera systems and allow a larger gamut of colours to be resolved, meaning fewer inorganic skin tones and serious colour shifts.

    At the end of the day you may want to look again at tungsten space lights. Permanently installing LED fixtures may be a little unwise considering the extreme amount of misinformation and the technology's infancy. I remember listening to an interview with John Higgins 'Biggles' on his work on 1917, where he talked about purchasing LED fixtures and how doing so at the moment is unwise as a long-term investment.

    • Thanks 1
  10. 4 hours ago, Tyler Purcell said:

    Well yea, the image is "created" in post. With film, the image is created on the set. 

    Both shows are overly graded, I've seen the BTS, looks nothing like the final show. Kinda sad. 

    To jump in, and disagree.

    On film, the image is not 'created' on set any more than it is digitally. I've yet to see chemists on set altering the chemical composition of the emulsion to boost the magenta layer while you're trying to get a shot. If anything, the look of film is largely predetermined by the stock you choose, disregarding the post process the emulsion and prints will go through, which is similar to a digital pipeline, only arguably with less control.

    I don't see why you consider a film being graded in a certain way sad. Is choosing a certain stock for a certain look sad? Is it because the filmmaker has more latitude and can make more refined decisions? Because it means they can potentially fix an error? The latter two, I believe, are the main reasons the DI gets knocked, but personally I don't mind it.

    The number of times people throw around the line 'the DP's job is to *insert remark here*' I find interesting; yet isn't our primary job, above all else, to deliver an image? The images being delivered nowadays I would partly consider more exhilarating and exciting than in previous decades, disregarding the actual creation of the image, which as said above is becoming easier, with more latitude and more tools. With that, the bar of quality is getting higher while the required skill is getting lower. It's almost as if cinematography is becoming easy to learn, hard to master. Which is different, but I wouldn't knock it.

    • Like 2
  11. 7 minutes ago, Adarsh said:

    I want to replicate for outdoor scene at 12 noon.

    As said above, your options are endless. There is no singular way to replicate sunlight. 

    To break it down typically you'd have a singular powerful harsh source (acting as the sun), and then a surrounding softer source acting as the sky (at least that's typically how I'd look at it).

    So, with your goal being an outdoor scene at noon (keeping in mind I don't know whether this is an exterior set, an interior set, or actually outdoors... if so, you may just want to shoot at noon), I'd imagine you'd want one harsh, powerful overhead source, as hard and as high as possible with a wide beam spread, maybe a 20K skypan (keeping in mind that rigging and operating such a fixture requires a skilled crew), or something of that nature. Then a surrounding softer source, a hypothetical array of space lights, potentially with very mild diffusion below. This, like everything said above, is up to personal taste.

     

     

    • Thanks 1
  12. On 5/2/2021 at 12:25 PM, Christopher Santucci said:

    Contrast and fall off are the same thing to my mind. Key someone standing 10 feet in front of another person with a fixture that's 3 feet away. Compare that to the fixture at 20 feet away. What happens is greater fall off at 3 feet away, than 20 feet away. The unlit person further back will be darker with the fixture at 3 feet away from the subject (more contrast - more fall off), and both will be more evenly lit with the fixture 20 feet away (less contrast - less fall off).

    You're talking about using modifiers and different setups. I'm just talking about the nature of light on an elemental level. It's really just inverse square law at play.

    Contrast is a difference in illuminance between two points. To take it outside of light and/or radiology for a moment: the contrast between 'terrible' and 'wonderful' - there is no path between those two words, they simply sit at opposite ends of meaning, like light and dark. By definition, contrast is the state of being strikingly different from something else in juxtaposition or close association.

    Fall-off, in theory, comes from looking at plane-wave propagation in free space. The 'modifiers' and different set-ups that David is using as examples do not alter the fact that light is an electromagnetic wave; you cannot modify light into something else. If you diffuse light (scatter it), refract it, etc., the source you are most likely using for your example, say a Blonde or a Tweenie, is already doing those 'modifications' on a much smaller scale, potentially minute to the eye.

    There are a dozen great practical examples in this thread. However, light at an 'elemental' level (which it isn't) is just electromagnetic radiation that, to put it crudely, dilutes over distance; that has no real relationship to a relatively artistic term.
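    Just to put numbers on the quoted 3ft/20ft example (assuming a bare point source and nothing else in play), a quick inverse-square check in Python:

      from math import log2

      def stops_between(subject_d, background_d):
          # illuminance difference, in stops, between two distances from a point source
          return log2((background_d / subject_d) ** 2)

      for key_distance in (3.0, 20.0):
          diff = stops_between(key_distance, key_distance + 10.0)
          print(f"Key at {key_distance:.0f} ft: person 10 ft behind is {diff:.1f} stops darker")

    With the key at 3ft the person behind falls roughly 4 stops down; at 20ft it's just over 1 stop, which is the contrast difference being described.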

  13. On 4/12/2021 at 5:02 PM, Karim D. Ghantous said:

    That is fundamentally a legitimate feature of high resolution sensors. Several productions shoot their cameras full frame and compose for a window. You just can't go crazy with it, that's all.

    Back when the Leica M9 was relatively new, I recall that one photographer was surprised at how much he could crop in. He basically said that he preferred the M9 with a short telephoto lens than a contemporary DSLRs with a long telephoto. Of course, a razor sharp lens is essential.

    Yes and no, 

    If you want to obtain a 4K image by shooting at 8K (in S35, so with photosites smaller than 3-4µm) and hypothetically cropping in to deliver, as said above, a 4K image: while your total pixel count may be 3840x2160, disregarding the camera it won't have the same fidelity on an optical level, and it most likely never will.

    With resolution and cameras it's always hard to say '8K will never have the fidelity of 4K', as camera technology (especially digital) is constantly advancing. Of course a higher-resolution camera will most likely always suffer from low-capacity photosites (if kept to S35), but technological advances over the years may eventually negate or ease the issue.

    However, one technology that isn't in its infancy, and whose properties we can calculate from a hypothetical 'perfect lens', is optics. Such a lens would have attributes like elements as transparent as air (somewhat unrealistic) and would be limited only by diffraction. Even so, it has limits to its resolving power that modern cameras (such as the 2.2µm photosites of the UMP 12K) are already exceeding. Below is a bit taken from another post on the forum (Joshua Cadmium). This is calculated from a hypothetical perfect lens at MTF(50); it is important to note that it's only at MTF(74) that an Airy disk is similar in size to the photosite of the camera.

    "For the Blackmagic 12K sensor at green 550nm, that puts MTF(50) at f3.2 and for the Alexa sensors that puts it at f12.1.
    At red 700nm light, the Blackmagic 12K would hit MTF(50) at f2.5 Alexa would at f9.5
    At violet 400nm light, the Blackmagic 12K would hit MTF(50) at f4.4 and the Alexa would at f16.7"
     
    Just to quickly dive into MTF: to put it crudely, you're looking at the way light is channelled through an optical system, specifically how it renders an Airy disk (the point of light the camera measures, formed by light passing through the diaphragm/iris of the optical system).
    You calculate it in line pairs per mm relative to what the lens can resolve. Here is a quick insert I wrote in another post: to show the point I'm making, we need to calculate the line pairs per millimetre for a sensor. That is done by taking 1mm (1000µm) / pixel pitch (2.2µm) / 2 (per line pair). The UMP 12K would be 227.27 lp/mm, and just for comparison the Alexa is 60.61 lp/mm.
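    For anyone who wants to play with the arithmetic, here is a small Python sketch that reproduces the figures above: Nyquist lp/mm from pixel pitch, and the f-stop at which a diffraction-limited 'perfect' lens falls to MTF(50) at that frequency. It uses the standard approximation that MTF(50) sits at about 0.404 of the cutoff frequency 1/(wavelength x f-number); these are not Blackmagic or ARRI numbers.

      def nyquist_lp_per_mm(pixel_pitch_um):
          # 1 mm (1000 um) / pixel pitch / 2 (a line pair needs two pixels)
          return 1000.0 / pixel_pitch_um / 2.0

      def mtf50_f_stop(lp_per_mm, wavelength_nm=550):
          wavelength_mm = wavelength_nm * 1e-6
          return 0.404 / (wavelength_mm * lp_per_mm)    # diffraction-limited MTF(50)

      for name, pitch_um in (("UMP 12K", 2.2), ("Alexa", 8.25)):
          lp = nyquist_lp_per_mm(pitch_um)
          print(f"{name}: {lp:.2f} lp/mm, perfect lens hits MTF(50) there at ~f/{mtf50_f_stop(lp):.1f}")

    That gives 227.27 lp/mm and roughly f/3.2 for the 12K, and 60.61 lp/mm and roughly f/12.1 for the Alexa, matching the quoted figures above for green light.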
     
    Now, to briefly look at a modern lens that we all know and love, the Master Prime, one of the 'sharpest' lenses one can use on an S35 system: "If I remember correctly a Master Prime just about gets MTF(74) at 70lp/mm". Sadly I don't remember exactly which focal length that was, but the point stands. Increasing pixel count on an S35 system in order to 'crop in' is flawed; one will most likely never achieve the same fidelity unless you use a larger-format camera (increasing total pixel count without compromising photosite size).
     
  14. 3 hours ago, Raymond Zananiri said:

    Isn't it the correct assumption though that there are no new sensors that are more light sensitive than 800 iso? (or am I wrong?). 

    In such a case it's all noise reduction in the camera. The question is: is the quality of the NR high enough to make it comparable to no NR?

    Yes, and no... mostly no. 

    Here is a crude (and somewhat inaccurate) example: https://na.panasonic.com/us/audio-video-solutions/broadcast-cinema-pro-video/dual-native-iso-camera-technology-cinematic-low-light-video-production

    Different sensors have different analogue outputs, different 'native' analogue amplifications: some 200, some 400, some 800 and so on, and in this case, for extreme circumstances, the ability to switch to a higher analogue output.

    It's similar to ARRI's DGA (dual gain architecture), where each photosite has two analogue 'native' outputs, one high and one low. In this instance, however, rather than combining the two, they are used as two separate outputs, hence the name 'Dual Native ISO': each photosite has a dual 'native' analogue output.

    You are right, there is most likely analogue noise mitigation (I believe there is in most contemporary Sony cameras, and most cameras nowadays), and I will add that Panasonic's graph, by implying that noise only starts at digital amplification, is wrong. However, it does theoretically allow capturing a certain amount of latitude above and below middle grey (similar to shooting at a regular camera's native, most commonly 800 ISO) at a much higher sensitivity, without any digital amplification.

    I think. 
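    To make the analogue-versus-digital gain point concrete, here is a toy Python sketch with entirely made-up numbers (it ignores quantisation, saturation and any in-camera NR): downstream read/ADC noise is added after the analogue gain stage, so a higher 'native' analogue gain shrinks it relative to the signal, whereas a digital push scales the signal and that noise equally.

      import numpy as np

      rng = np.random.default_rng(1)

      photons = 20.0                # very dim exposure: mean photons per photosite
      downstream_noise_e = 5.0      # read/ADC noise added after the analogue stage
      n = 200_000

      signal = rng.poisson(photons, n).astype(float)     # photon shot noise included

      def capture(analogue_gain, digital_gain):
          raw = signal * analogue_gain + rng.normal(0, downstream_noise_e, n)
          return raw * digital_gain                      # digital push happens last

      low_native  = capture(analogue_gain=1.0, digital_gain=4.0)   # base gain + digital push
      high_native = capture(analogue_gain=4.0, digital_gain=1.0)   # second 'native' gain path

      for name, img in (("low gain + digital push", low_native),
                        ("high analogue gain     ", high_native)):
          print(f"{name}: SNR ~ {np.mean(img) / np.std(img):.2f}")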

    • Upvote 1
  15. The image fidelity is quite poor. This does bring up an interesting question about shooting wide to crop in. I remember watching older RED advertisements: 'Shoot in a wide! Crop in, multiple shots in one!', blah, blah. However, looking at RED's image fidelity, and also the resolving power of contemporary optics, or even perfect optics, I doubt we'd ever be able to seriously crop into an image without an impact on image fidelity, unless we all started to shoot on incredibly large formats... oh wait...

    As it is, with the UMP 12K the 2.2 micrometre photosites are far too small for any contemporary lens, or even an 'impossible' perfect lens, to resolve well (MTF(70+)), at least at the f-stops likely required when shooting for multiple shots.

    I do wonder if to achieve the possibility of using one camera to shoot multiple shots, broadcast companies will start moving towards larger formats as well.

     

     

  16. On 3/24/2021 at 2:21 AM, Jonathan O'Neill said:

    With a Falconeyes F7 Fold I want to get the equivalent of a Steel Green on a tungsten light , with camera at 3200k .

    HSL mode you can get the hue in a ball park, then if you desaturate the LED it will ultimately go to about 5600k at 0% saturation....

    RGB would just be ages of flicking through RGB combinations and thats if you have no plan to have a brightness option...

    CCT mode is a separate set of LED emitters , with no option for + or – green.

    Can someone recommend a different brand of RGB lighting , where you can set your base kelvin, e.g desaturate to 3200k? And accurate LEE filter options? and a CCT mode with + or - green? cheers

    I've had relative luck and have faith in the Kino Freestyle line. However, you're asking a lot from any fixture. 

    Most, if not all, LEDs can't reproduce every LEE/Rosco/whatever filter colour for every camera. Even if you input the values accurately, because of the discontinuous spectrum of all RGB LED fixtures, the emitted frequency bands may be perceived as accurate by our photopic vision, or by one camera system, yet not by another (due to different camera spectral response curves).

    Even if you measured with a colorimeter or similar to match the two, you're matching them both to one constant, when in reality the 'detector', to put it crudely, is also a variable.

     

    • Like 1
  17. 8 hours ago, Stephen Sanchez said:

    Correct.

    For the soft source measurement, there's math involved that somebody will eventually nail down, whoever has the time and ability, in a simple enough form for others to use.

    But even with hard light fixtures, we don't run a calculation involving the reflector, bulb type, and lens, to determine output. We look up the photometrics for that fixture or refer to our experience. A soft source is no different. Make your own measurement cheat book if it works for you.

    When first starting out, I had set a 4x4 bounce card five feet from a Joker 400 and 800, and metered five feet from the illuminated card. That helped me for a while until I grew used to the intensities by feel. A friend told me another way was to "find your f11." Knowing where f11 is with any light gimmick, you will know that half that distance is f22 and double that distance is f5.6.

    I'm not sure a true formula or calculation will ever arise that is user-friendly and simple.

    I keep getting stumped (admittedly I'm no mathematician or physicist, in fact far from it). Even once you understand how to calculate light fall-off from a large source, assuming the diffuser or bounce has uniform intensity across its face (which it often doesn't, given the nature of our lighting instruments), all that gives you is the fall-off of light from that source (bounce or diffuser).

    Attempting to calculate the fall-off of the bounce when the only luminance information you have is the fixture lighting it adds variables upon variables, and sadly not ones you can easily dismiss for an approximation (of which there are many as well).

  18. On 3/1/2021 at 7:19 AM, Stuart Brereton said:

    I just wonder how useful it is to post something that unknown other people told you, saying that it is possible to "somewhat guesstimate" a bounce level without specifying how far away from the bounce your subject is. Other people who are looking for information might read this and not knowing any better, spread this misinformation even further.

    Misinformation is a bit harsh. Disregarding minor discrepancies, of which there are many, the following is a tested approximation.

    I found that with a 6x6' frame of Ultrabounce 6ft away from the subject (distance equal to source size), and the frame evenly lit by a fixture (again another variable, though I found it introduced only minor discrepancies) placed 3-4' in front of the frame (between the subject and the frame), the light loss is around 2 1/3 to 2 2/3 stops relative to the fixture at the combined distance from fixture to frame plus frame to subject.

    Now, as you can see, even here the light loss isn't exactly the 'guesstimated' 2.5 stops above. However, I find it a good guesstimate.

    I will openly acknowledge that there are a sheer number of variables; I said as much in my original post above, and I also suggested a course of action that could give a more accurate answer. Having tried going down that path myself, though, it's a long one. The above helps me as a guesstimate: an estimate based on a mixture of guesswork and calculation.
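    For what it's worth, converting a measured loss into stops is just a base-2 log of the illuminance ratio. The readings below are made-up illustrations (not my actual meter values), chosen to show that 2 1/3 to 2 2/3 stops corresponds to losing a factor of roughly 5x to 6.3x of the light.

      from math import log2

      def stops_lost(direct_reading, bounced_reading):
          # stops = log2(direct / bounced); units cancel, so lux or footcandles both work
          return log2(direct_reading / bounced_reading)

      direct = 1000.0                      # hypothetical direct reading at the combined distance
      for bounced in (200.0, 160.0):       # 5x and 6.25x less light
          print(f"{direct:.0f} -> {bounced:.0f}: {stops_lost(direct, bounced):.2f} stops lost")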

     

  19. As said above, there are too many variables to make an accurate calculation. However, I did once go down the path of trying to find a way to calculate light fall-off from a soft source. I asked several cinematographers and none knew an exact answer (some recommended, with Ultrabounce, just cutting 2 1/2 stops from your original output; this isn't accurate when you're measuring close to the bounce, but at a distance it is a reasonable guesstimate).

    I went ahead and asked on a physics forum and got the answer below, which was about shooting light through diffusion.

    "If your diffuser is good enough that the light from any portion of it close to uniform from any direction, and the light from all portions is equal in intensity, then the total illumination on your point is proportional to the solid angle from the point that intercepts the light. At large distances (small angles) this will become equal to inverse square.

    At shorter distances, the falloff is slower. Very close to the source, the intensity becomes nearly uniform with distance.

    The actual solid angle formula is not one that I tend to use much, but there are some formulas and references on Example Formula's"

    Sadly I got this response at a busy time (the original question related to an upcoming job), so I never truly followed it up, but I plan to. I do think a somewhat accurate formula for calculating light fall-off from a large source would be valuable, especially considering that "the inverse square rule is often still a useful approximation; when the size of the light source is less than one-fifth of the distance to the subject, the calculation error is less than 1%".
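    For anyone curious, the quoted behaviour is easy to check numerically. The Python sketch below (all values relative, radiance set to 1) sums small patches of a uniform, Lambertian 6x6' bounce to get the on-axis illuminance, as a brute-force stand-in for the solid-angle formula, then compares it with a pure inverse-square extrapolation from far away. Up close the real fall-off is much slower than inverse square; once the distance is several times the source size the two agree to within a few percent, in line with the one-fifth rule quoted above.

      import numpy as np

      side_ft = 6.0
      n = 400                                        # patches per side of the bounce
      xs = (np.arange(n) + 0.5) / n * side_ft - side_ft / 2
      x, y = np.meshgrid(xs, xs)
      patch_area = (side_ft / n) ** 2

      def illuminance(z_ft):
          # dE = L * cos(theta_src) * cos(theta_rcv) * dA / r^2, both cosines = z/r on axis
          r2 = x**2 + y**2 + z_ft**2
          return np.sum(z_ft**2 / r2**2) * patch_area

      far = 60.0                                     # far enough to act like a point source
      e_far = illuminance(far)

      for z in (1.0, 3.0, 6.0, 12.0, 30.0):
          prediction = e_far * (far / z) ** 2        # pure inverse-square extrapolation
          print(f"{z:>4.0f} ft: actual / inverse-square = {illuminance(z) / prediction:.2f}")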

     

     

     

     

     

  20. 31 minutes ago, Karim D. Ghantous said:

    Interesting comment. I mean, CCDs can give nicer images out-of-camera, but all you need is a LUT, and your CMOS image now looks pretty much the same. But, that's not a technology thing, it's an aesthetic thing. I am also led to believe that some CCDs have global shutters, although I'm not sure.

    I don't know if you ever followed the Leica M9 vs M 240 debate? It's true that the M9 had nicer colour OOC, not to mention a more natural sharpness, but you could wrangle the colour from the M 240 to look pretty much the same. The M 240 also had more DR, which may have contributed to the 'thinner' OOC look. But on a tech level, the CCD vs CMOS debate is very interesting.

     

    The camera used here was probably something like a GH5. But they all behave like this - so far. You'd think someone would have solved this problem by now. I guess I'll have to come up with my own high level solution, and if I do before someone else does, I'll post it on this site.

    Out of curiosity regarding the RGB clipping: how would one fix it? From my understanding, the white clipping stems from all RGBG photosites being fully saturated (full well reached, unable to record more incoming light) and therefore being interpreted as white.

    Surely the only way to fix it would be to increase the physical size of said photo site (something camera manufacturers are doing the opposite of). 
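    On the mechanism rather than the fix, here is a toy Python illustration (channel ratios entirely made up): a warm highlight with fixed RGB ratios is pushed up in exposure, and each channel clips at a full-well value of 1.0. Once one channel clips, the recorded ratios (the hue) drift; once all three clip, the camera can only call it white, which is the hard roll-off being described.

      import numpy as np

      warm_ratio = np.array([1.0, 0.8, 0.5])      # hypothetical R, G, B ratios of a warm light

      for exposure in (0.5, 1.0, 1.5, 2.5):
          r, g, b = np.clip(warm_ratio * exposure, 0.0, 1.0)
          print(f"exposure x{exposure:3.1f}: recorded RGB = {r:.2f}, {g:.2f}, {b:.2f}")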

  21. (As a younger aspiring/studying cinematographer.) On commercial and narrative work I always use my meter and spot meter, analogue and/or digital.

    This is for several reasons. During a pre-light/rig, the production I'm working on often can't afford the camera package for that day. Even if they could, it would be a waste of money, and why would I need it if I can judge exposure through other means? It's the same with scouting: if you need to know the level of light in a dark room, taking a camera package along to judge exposure is, to me, nonsensical.

    Not only that, but often the monitors I use to judge the image are uncalibrated and not harmonised with the camera (when a production is willing to hire someone younger, they very rarely have the budget for a knowledgeable DIT).

    I do think the lack of understanding of the properties of light and exposure among some (potentially due to digital technology) is an issue with contemporary film education.

     

     

  22. 6 hours ago, Stuart Brereton said:

    He frequently uses multiple bounces to wrap the light, rather than using fill, but I've never seen him use curved pipe or heard him refer to it as a "cove". It seems to be more of a description that other people use when referring to some of his setups. Cove just means a concave shape. It's not some magical lighting technique.

    I remember that when Deakins talked about Revolutionary Road he mentioned the curved pipe, something along the lines of it giving a beautiful gradient.

    Exactly as said above, it's a concave shape: a cove, a third of a semicircle, etc. That's about the only constant.
