Arun Kumar Pandey

Basic Member
  • Posts

    12
  • Joined

  • Last visited

Profile Information

  • Occupation
    Student
  • Location
    FTII, Pune
  • My Gear
    SRII, IIC, 535, Alexa
  • Specialties
    Cinematography

Contact Methods

  • Website URL
    http://arunonnet2003.wordpress.com/
  1. Hi there, after a few days of R&D I have figured out a path to the desired functionality. 1. Shoot stills in RAW with your cameras of choice (Nikon .nef, Canon .cr2). 2. Use Capture One to convert these RAWs to CinemaDNG; the Adobe DNG Converter does not take care of the header that Resolve reads. 3. With the RAWs converted to CinemaDNG, the full dynamic range of the RAW opens up along with all the clip controls, including white balance, highlight retention and color space selection. From what I have read on DxOMark and other imaging-rating sites, stills RAWs have good dynamic range, broadly similar to motion picture camera RAW, and so may be helpful in creating an approximate look. Still, I wish Resolve could debayer these RAWs natively and that ACES included IDTs for these cameras too. At peace for now; I will do some more tests and post again as I gain confidence in the process. (A quick metadata check for the converted DNGs is sketched after this list of posts.)
  2. Thanks, David and Robert, for chipping in. It has been three or four months of getting into Resolve, LUTs, calibration and DCP color space for me. The application mentioned is clearly built for a timelapse pipeline. From what I have gathered, sRGB shares its primaries with Rec. 709, and both cover a smaller gamut than DCI-P3 (a small comparison is sketched after this list of posts); so if I take stills in RAW and bring them into Resolve, there are many variables, including color space and dynamic range, that need to be taken care of for this exercise to be fruitful. It might help in assuming a look and later deriving that look from the motion picture RAWs. I searched the net to figure out the similarities and differences between stills RAW and motion picture RAW but could not find anything. For now, doing it as a test is what would clear up the doubts about whether this process can help in setting the look in a motion picture production pipeline.
  3. I had stopped myself from greeting people on birthdays and posting on Facebook, but I think this can be an exception. Thank you very much for all the birthday wishes. God bless all; stay healthy and happy.

  4. Ideally, one could take an Alexa or an Epic on location days before the shoot and take stills with it via the SD card, then use those files in Resolve Lite (or a similar non-professional DI environment) to create various looks and pitch them to the director and art director to finalize. Carrying a DSLR with RAW capability that comes close to the same function would be lighter gear to travel with a core crew. I hope that, in time, something is developed for the functionality needed. Googled this: http://www.nullmedium.de/dev/cine/ ; it seems a step forward in the same direction.
  5. Hello there, I was just curious whether anybody has tried shooting location-scouting references in Canon RAW (.cr2) or Nikon RAW (.nef) and later bringing them into Resolve to grade and create a look, primarily as a close reference for shooting on ARRIRAW or RED RAW later. I am aware that things like dynamic range and color space come into play, but has anybody tried it? (A minimal decode sketch is included after this list of posts.)
  6. Thanks. Nice to have you guys share this info; it gives me a good start for my next step.
  7. Thanks, Jake and Mathew, for the response. I have already gone through Ansel Adams' "The Negative" and understand a fair bit about reflected and incident readings. My primary concern is the aesthetics of controlling practicals and curtained windows, which conflict in terms of what the eye sees versus what the meter says. Looking through the viewfinder at the set shooting aperture is another consideration. Is there a guideline as to how much control, in terms of stops, is needed? And second: are there better ways to judge lighting by eye or through the viewfinder?
  8. After having shot on various film speeds, I still get confused about when to trust my eyes and when to trust the meters. Say at f/2.8 the practicals and windows look good to the eye through the viewfinder but give a reflected reading of f/16 or f/22; that is where the confusion starts. Should I bring them down or ND them, and to what extent? (The stop arithmetic for readings like these is sketched after this list of posts.) Also, for a night scene at f/2.8 on Kodak 5219 where the exterior light is moonlight, by how many stops should I control it to get the exact feeling of soothing moonlight on the curtains? And if a character is sitting in the moonlight, how do I judge by eye whether the amount of light is right? Should I also aim for the reflected reading off the curtains on the windows to be a stop (I am not sure how much) brighter than off the face? I am fairly confused about when to use an incident meter, a reflected meter or my eyes, given the film speed and the various lighting-ratio situations. Sorry if I have posted too many doubts at once, but that is how these things came to mind while lighting.
  9. Thanks, Dom. I was at the camera service department and came to know that the 235, 435, 535, 416, Studio and SR3 have the same eyecups and share the same modular design. I was particularly interested in the eyecup and got to know that it has good space for prescription glasses to be sandwiched between the viewfinder glass and the eyecup's twin-blade iris.
  10. I don't know if this would help, but the third episode of the Zacuto Shootout 2011 sheds light on various shutter-related artifacts across cameras, including the RED, Alexa, F35, the Canon variants and the Weisscam. Have a look; it is pretty interesting.
  11. I was curious to understand whether ARRI cameras differ in their eyecups and viewfinder diopters across the 235, 435 and 535. I have worked on the ARRI 16SR, 535B and IIC and found variations among them. Is there any reference material on the net from ARRI with information about this? Having astigmatism, I wanted to know whether the eyecups are consistent enough that a common articulated prescription-glass insert would fit most of the 35mm and 16mm variants.
  12. Thanks, David and Brian, for the quick response. It helps me move on to further reading rather than being stuck and wandering.
  13. This has to do with the dye layers. In negative stock the layers run Y-M-C to record B-G-R (the reason, as I understand it, is simply that red has a longer wavelength and blue a shorter one for penetrating deep into the layers; is there any other reason?). In positive stock it is M-Y-C to record G-B-R (and my doubt is: is human sensitivity to green, for registering sharpness and detail, the only reason for this layering, or is there another reason too?). Does it help with color balance, managing good skin tones (Kodak) or reproducing good greens (Fuji) by any chance? And where can I find a good reference on the net? I hope I have managed to put the question across well.
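
A note on post 1 above: below is a minimal sketch of how one might spot-check the converted DNGs before loading them into Resolve. It assumes the exiftool command-line tool is installed, and the tags inspected (DNGVersion, UniqueCameraModel, ColorMatrix1, WhiteLevel, BlackLevel) are just an assumed set of fields worth eyeballing; which headers Resolve actually relies on is not documented in the thread, so treat this as a rough sanity check rather than a definitive test.

```python
#!/usr/bin/env python3
"""Spot-check DNG metadata on files converted from stills RAW (e.g. via Capture One).

Assumes the exiftool command-line tool is installed and on PATH. The tag
list below is an assumed set of fields worth inspecting before bringing
the files into Resolve; it is not a definitive list of what Resolve reads.
"""
import subprocess
import sys
from pathlib import Path

# DNG tags to report for each converted file (assumed set).
TAGS = ["DNGVersion", "UniqueCameraModel", "ColorMatrix1", "WhiteLevel", "BlackLevel"]


def inspect_dng(path: Path) -> str:
    """Run exiftool on one DNG and return its report of the selected tags."""
    args = ["exiftool"] + [f"-{tag}" for tag in TAGS] + [str(path)]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout


if __name__ == "__main__":
    folder = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for dng in sorted(folder.glob("*.dng")):
        print(f"== {dng.name} ==")
        print(inspect_dng(dng))
```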
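
On the gamut comparison in post 2: here is a small, self-contained sketch that compares the Rec. 709/sRGB and DCI-P3 primaries on the CIE xy chromaticity diagram, using the published primary coordinates and the shoelace formula. Triangle area on the xy plane is only a crude proxy for gamut size, so the numbers are illustrative rather than a colorimetric measurement.

```python
"""Compare the Rec.709/sRGB and DCI-P3 gamut triangles on the CIE xy diagram.

The primary chromaticities are the published values; triangle area on the
xy plane is used only as a rough, illustrative measure of gamut size.
"""

# (x, y) chromaticities of the R, G and B primaries.
REC709_SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]


def triangle_area(primaries):
    """Shoelace formula for the triangle spanned by three (x, y) primaries."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0


a709 = triangle_area(REC709_SRGB)
ap3 = triangle_area(DCI_P3)
print(f"Rec.709 / sRGB triangle area: {a709:.4f}")
print(f"DCI-P3 triangle area:         {ap3:.4f}")
print(f"DCI-P3 covers about {ap3 / a709:.2f}x the 709/sRGB area on the xy diagram")
```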
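
For the question in post 5 about using .cr2/.nef files as grading references: a minimal sketch, assuming the rawpy and imageio Python packages are available, of decoding a stills RAW to a 16-bit linear TIFF so the debayered data can be taken into Resolve without a display gamma baked in. The postprocess settings shown are one reasonable choice for this purpose, not a recommendation from the thread.

```python
"""Decode a stills RAW (.cr2 / .nef) to a 16-bit linear TIFF for use as a
grading reference.

Assumes the rawpy and imageio packages are installed; the postprocess
settings below are one reasonable choice, not the only one.
"""
import sys

import imageio.v3 as iio
import rawpy


def raw_to_linear_tiff(raw_path: str, out_path: str) -> None:
    """Demosaic a stills RAW and write it out as 16-bit linear RGB."""
    with rawpy.imread(raw_path) as raw:
        rgb = raw.postprocess(
            gamma=(1, 1),         # keep the data linear (no display gamma)
            no_auto_bright=True,  # do not let the decoder re-expose the image
            output_bps=16,        # 16 bits per channel to preserve dynamic range
            use_camera_wb=True,   # start from the camera's white balance
        )
    iio.imwrite(out_path, rgb)


if __name__ == "__main__":
    raw_to_linear_tiff(sys.argv[1], sys.argv[2])
```

Run it, for example, as `python decode_raw.py DSC_0001.nef DSC_0001_linear.tif` (the script and file names here are only placeholders).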
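
On the metering confusion in post 8: the difference in stops between two f-numbers is 2 * log2(N2 / N1), so a reflected reading of f/16 against a working stop of f/2.8 is roughly 5 stops over, and f/22 roughly 6. A tiny sketch of that arithmetic, as a quick reference when deciding how far to dim or ND a practical:

```python
"""Stop difference between two f-numbers: stops = 2 * log2(N2 / N1)."""
from math import log2


def stop_difference(working_stop: float, reading: float) -> float:
    """How many stops a reading sits above (positive) or below (negative)
    the working stop."""
    return 2 * log2(reading / working_stop)


for reading in (16.0, 22.0):
    print(f"f/{reading:g} against f/2.8 is about "
          f"{stop_difference(2.8, reading):.1f} stops over")
```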