Everything posted by Rodrigo Silvestri

  1. I liked your approach, Russell. This is closer to where I want to get. I didn't talk about light intensities; I use an intensity on my display that is comfortable to my eye in a lit room, around the levels used in post-production houses for color correction.
About wavelengths and what I've studied about them: I understand (mainly by testing, on film and digital) that a low-CRI fluorescent tube (even if it gives white light) only affects the intensity of the colors (it'd be like talking about the tube's "color reproduction gamut"). High-CRI tubes have more phosphors, which reproduce more wavelengths, and that's why they get closer to an incandescent source in color reproduction. Some people make the big mistake of thinking that a 100% CRI light (incandescent) gives pure white light, while actually a 3200 K tungsten lamp with a Quarter Plus Green also gives 100% CRI but not white light. Even corrected tubes say "color corrected" when they are actually just "high CRI", and can sometimes be greenish or magenta. But white (not colors) does look white with any of those corrected sources, and medium grey looks medium grey; they are not affected by the CRI. So I think I shouldn't care much about the backlight source. The LCD's light panel is, as in most consumer LCDs, a CCFL; I don't know its CRI, but I know it should cover around 70% of the NTSC color gamut.
I'd use this graphic instead of the one you posted to talk about where my white point is. Is there an exact white point for the eye? I'm talking about reality, not about standards. Would an "eye signal measurer" (imagining it exists) be able to differentiate between an eye receiving pure D65 light and one receiving slightly green light, reporting "it is not receiving pure D65 light" based on the "quality of the green signal"? Then isn't staring at the screen for long periods much worse than adjusting your eye to a little green?
I am talking about an environment with a "green" near x=0.30, y=0.35, or maybe a bit more (looking at the CIE diagram and guessing how far this supposed green might be from D65). (Also, tracing on the CIE diagram a circle of all the similar hue shifts from D65, the difference could be around 6000 K if we were talking about a blue shift instead of green.) To establish the green level, I opened a grey card in Color, divided it in two, put a black bar in the middle, and graded one half to look as green as I'm describing. The HSL readout showed a saturation of 0.03, while pure green is 1.00, and the saturation needed to shift a scene from 3200 K to 5500 K might be around 0.30.
I had already read about the Purkinje effect, but that is far too extreme; I am not using the moon to light the wall behind my monitor :P . The Kruithof curve is also somewhat beyond the extremes I'm talking about. Anyway, I am looking for an answer of that type, but about the small color shift from D65 to a very nearby point. Is it just about "SMPTE and standards" that everyone gets so worked up? And why do they get so upset when I ask them not to think about SMPTE, when I say I won't be following any SMPTE standard and just want to stay inside my eye's limits? (I think the answer is: because they've never taken a step outside of SMPTE.) I wouldn't consider a situation like that, because I know software calibration reduces the color bit depth, which can produce banding. Also, calibrating to 2000 K might make deep blues look flat (I'd be going beyond the monitor's reproduction range, I suppose). My actual setup is so near D65 that it doesn't produce banding. (Actually, this supposed greenish setup at x=0.30, y=0.35 isn't my current one; I took it as an example from my previous setup, in which I was also able to calibrate grey cards to neutral grey, without banding.) I don't agree.
When talking about "identifying" I would think of a brain or psychological process. When cones "get tired" (I understand this is a chemical process) they capture less of that color, and the brain doesn't notice the difference. A green-sensing cone gets too much green and tires, so it sends less green and the brain receives white. But, when getting used to the slight green of my supposed situation, does it send less information? Does it modify the eye's response curve in a noticeable way? Anything else bad? Or is it the same as with the levels we call white? That is only in extreme situations; I understand what you're talking about, but I'm talking about things near D65. Bingo! This is where I was going. What is the acceptable limit of color shift before we lose information (without even getting near the limit of the eye's color space)? I suppose not every perfectly balanced color correction room has the EXACT same RGB levels. "SMPTE may have its limits, but what is the real, acceptable limit of the eye?" And this might be the answer: no, I am not trying to change the world. I would love to have lots of money, or to be a sought-after colorist and work in perfectly controlled environments. But, again, I am just learning. Thanks, thanks, thanks. I hope I haven't offended you with the things I think. Rodrigo Silvestri
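To put a rough number on the kind of shift discussed above, here is a minimal sketch using the hypothetical x=0.30, y=0.35 "greenish" point from this thread against the D65 white point. Note that CIE xy space is not perceptually uniform, so this Euclidean distance is only a magnitude estimate, not a perceptual difference (u'v' or CIEDE2000 would be needed for that):

```python
import math

# CIE 1931 xy chromaticity coordinates.
# The "greenish" point is the hypothetical example from the post above.
D65 = (0.3127, 0.3290)
greenish = (0.30, 0.35)

def xy_distance(a, b):
    """Euclidean distance between two chromaticity points in CIE xy.
    A rough size-of-shift number only; xy is not perceptually uniform."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(round(xy_distance(D65, greenish), 4))  # ~0.0245
```

For comparison, adaptation-related color-difference work usually reports thresholds in u'v' or CIELAB units, so a figure like this would need conversion before matching it against published limits.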
  2. Thanks for your answers, David and Stuart. I already know about color calibration devices such as Datacolor's Spyder3 or X-Rite's EyeOne. I (now) also know about the best ways of calibrating an LCD using software and the eye, and I understand there are standards. As I already said, my question is not about things written in standards or books; I'm trying to give it another perspective. I feel my questions go a bit deeper into the matter. It's more related to this question: Or to this one: Thanks again, Rodrigo
  3. Hi. I am starting this topic because I thought this might be a better forum to ask about these things. I already started a topic about this on CreativeCow, but it led to a disaster, maybe because the people who answered were very professional colorists, or people whose experience made them sensitive to newbie questions and environments. They did answer some of my questions, but there is still one thing I would like to know.
(Some introduction:) I don't know exactly what I am. Student? Newbie? Director of photography? Electrician? Future colorist? Second camera assistant? I was DP on many short films (both video and 16mm film). Three of my film works were scanned on an Arriscan to 2K DPX, and I grade them at home. Recently I did the "remaster" of a short film (I don't know what you'd call it: we rescanned and re-edited an already finished short to do the grading again, so now there's a new version with a better image). Most of it was on black-and-white film, with only some scenes in color for keying, and everything is still in post.
First, I want to suggest a simple thing: I will establish an environment and then ask questions about how things would work in that situation. Please don't answer rudely if you don't intend to help with my question or don't agree with my established environment. Maybe this example will clarify things: "If it rains, should I use an umbrella or a raincoat?" "I have a car." "OK, but in case it rains, which is better?" "No rain please; you should use a car or stay at home. Lightning can strike you." (I think that's more than clear.)
I am about to grade a friend's short, shot on 35mm color film, 2K scan. My computer is a "Hackintosh": Q6600, 7300GT 256MB, 4GB RAM, 2TB of hard drives, a Samsung 2032NW TN-panel LCD, a Wacom Bamboo Fun, and one of these LEDs as a backlight. I use Final Cut Studio 2 (Color for grading). I also have access to an Assimilate Scratch 3.5, but I feel better using Color for now; I think Scratch is only better if you have a control surface and several displays. I can't currently afford better equipment; my next investment will be an S-IPS panel monitor.
I balance my display using Apple's integrated (software) display calibrator. I also tried SuperCal but found Apple's better. Then I balance the LCD's RGB controls to match the background, then run the software calibration again. My LCD is not producing banding, and a greyscale looks like one consistent shade of grey. I don't want to do magic. I know I am grading to finish on DVD or YouTube, Vimeo, etc., and that every viewer's TV or display will not meet the SMPTE standards. But I still want to learn. I have studied color and took a course on color theory, so I understand some basics.
So, here is what I'm saying. First, comparing two situations: (a) slightly green backlight, slightly green display, both matching (so slight that the eye adapts in 15 seconds); (b) daylight backlight, daylight display, both matching. I'd say that since every human eye balances colors, both situations should be the same, or almost (nothing significantly different). I say this because my last backlight was greenish; I corrected the LCD's levels to match it, then tried correcting a grey card using blacks and whites, and got good results when checking with the vectorscope. Can someone contribute something about this? Pros? Cons? (Apart from "filter the light and correct the screen" or "it's not in the standards, so you should never grade there".) I think whoever can answer this must be someone with experience, not in reading SMPTE publications, but in the real thing. i.e.:
- Does "vision quality" drop when the eye is not exactly at D65? Why?
- A green alien shown on this greenish LCD with the greenish backlight will look greener than normal. But as the eye gets used to the environment, the alien will look the same level of green as it would in a perfect SMPTE environment. Wouldn't it?
Shouldn't we grade thinking about what the viewer will actually see? If their eye will get used to our slightly blue scene, then why should we use the same grading for the whole scene? I know, I know, "the client will not trust the colorist". But again, I am talking about my situation, or a student's situation. I think of ideas like these:
- Always putting a white reference in the shot, like a white window or something.
- Putting a whiter shot in the middle of a run of same-colored scenes so the eye "refreshes".
- Increasing the color as time goes by.
- Putting a medium grey frame around the image and a grey slug every 30 seconds :P
If mom's TV has a reddish image, her eye will get used to it, she'll be "in the standard", and her brain will not keep saying "how red!". I am obviously talking about the normal color shifts we see on consumer TVs, not about TVs with technical problems. I really hope to find someone who can answer this with good manners; I don't want to argue. Thanks :) Rodrigo
  4. In this album I have pics of Fuji Eterna 400T in daylight (no filters), overexposed 2/3 of a stop (they're slightly corrected in post). The first pic is 400T under tungsten light, pushed 1 stop. All of this was shot on an SLR camera and processed C-41, so it's not exactly the same as 16mm with ECN-2. The last 2 pics are other things. Consider Fujifilm; it looks great to me. I love Eterna 400T for exteriors with no correction. Rodrigo.
  5. Forgot to mention that here in Argentina Fuji is much cheaper than Kodak, and if it's for something that will end up on video, it looks good. They also have special discounts for students; maybe you should ask there. One thing I learnt from Adam Frisch is that it's good to always overexpose 16mm one stop to get rid of the grain (in case you don't need grain). It just makes the image look less grainy; it doesn't change the colors or levels (if your grey card is 1 stop over, the transfer will pull everything 1 stop back down). One thing I learnt from... life? is that you should always shoot tests when using stocks that are new to you. I shot a short on 16mm F-400T and it looked underexposed and too grainy. I then talked to many people who had used that film in 35mm with no exposure problems. Months later, the woman at the lab told me this: the sensitivity of a film is defined by the point where its curve starts to rise, but that says nothing about the film's contrast. So if you had a film rated at 50 ASA whose latitude were only 1 stop (just inventing numbers), "exposing normally" would put you 4 stops above the start of the curve (the "zone 5"), and everything would be overexposed. So, with F-400T, you need to "overexpose" 1 stop if you want your zone 5 to have the correct density: it is a low-contrast film, and achieving the medium-grey density requires one stop more exposure. Rodrigo
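The "overexpose one stop" advice above is just exposure-index arithmetic: metering a 400-speed stock as if it were a 200-speed one gives exactly one stop of extra exposure, since each halving of the EI doubles the light. A minimal sketch (the function name is mine, not from any metering tool):

```python
import math

def overexposure_stops(box_speed, exposure_index):
    """Stops of overexposure when metering a stock at a lower EI.
    Each halving of the EI doubles the exposure, i.e. one stop."""
    return math.log2(box_speed / exposure_index)

# Rating F-400T at EI 200, per the one-stop-over advice:
print(overexposure_stops(400, 200))  # 1.0
```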
  6. Hi Stephen. I am a starting DP, but I have worked on many shorts as an electrician or gaffer, and I've seen many of the dailies. I think you should first talk about the look you want. Because if the only info I have is that you're going to shoot late exteriors and day interiors, I'd just say "I like Kodak Vision3 500T for night exteriors" and "I like Fuji F-400T for daylight, with an 85 camera filter". Answer these questions, once for daylight and once for night: Saturated/desaturated/pale colors? Which range of colors (clothes, furniture)? Perfect white? Greenish? Blueish? Yellowish? Contrasty? Washed-out? Good blacks? Detail in highlights? Grainy? Clean? Under/overexposed? Any effects filter? ProMist? Star? Any post-production FX? Chroma key? Which final format? HD? Maybe a 35mm blow-up? Scan? Telecine? And the list can go on... But I think that's OK for a start... Rodrigo
  7. Oron, I totally disagree with your "keep things as they are" position. The whole world is changing, technology is advancing at high speed, and cinema is one of the fields that has seen the fewest changes. There are lots of cons to using film, and together they are starting to drive change. People have to learn for years to imagine how an image will look after processing; that, to me, is old. Change is good, and it is never easily accepted. Most of you are so used to film that an earthquake now seems to be coming. I have a Kodak leaflet with questions like "why should I use film for my project?", and the answers are nothing but crap like "it will help you get exactly what you always imagined". Reading it gives you the feeling that if you use film, you'll be able to play poker with God. It's not that I don't like film; I've shot a lot of it, and I understand that film is still (a bit) better than the newest digital formats. Change is necessary for all of us to improve. There are lots of new things people are learning about digital formats and video, and people from the video industry are learning about film, and that's just great for our minds and lives. It's good to see it happen, to learn, discuss, understand. I've recently been learning from a DP here in Argentina who talked about his experiences, and he told me that most of the material he had shot on film could have been shot even in HDV (referring to the weakness of the scripts, the content and all that). I am not saying that film is bad and digital rules. I am saying there is no reason to say "stay on film; why d'you want tha' digital thing?". Cheers, Rodrigo.
  8. Since I first used Fuji Super F-400T and it came out too grainy, I posted my results here and was told that it's a good idea to always overexpose 16mm a bit, to get away from the grain. I've done it once since then, and I think it's the way to go. Then you bring it back to normal in the telecine (well, I've always scanned 16mm to 2K, so my "correction" stage is when converting the DPXs to video). There's almost no difference, because you always use only part (around 6 stops) of the negative's dynamic range (10 stops); a truly 10-stop image on a screen would be flat, with no contrast. So, imagine that normally you would use the part of the negative from "zone 2" to "zone 8"; now you use from 3 to 9. The only situation where I don't like to overexpose that one stop is with B&W negative, whose grain behaves differently. Almost everything has grain, LOL. About "using a stock your friend already has": always test it, especially if it's high speed (500 ISO). Luck, Rodrigo
  9. That was just beautiful. More info about your other work? (I'd rather believe this isn't your first outing as a DP, but your first time as a "real DP": the first time you could do what you wanted with a script... isn't it?) The production design was good too. I agree with Francisco Bulgarelli about the janitor being flat, but I wouldn't call it a "mistake", just something I'd change. I'd love to have such a visually rich script to be able to do great things ^^. Rodrigo
  10. (Sorry, I hadn't read your replies.) Thank you, guys. I'll do a sensitometry test on the rest of the film I have at home, to see whether the problem was there or with the lenses and so on. Yes, I am thinking of overexposing a bit in my next work. I might also shoot some test footage soon (I should have done that before, LOL... anyway, I didn't have time). Oh, Adam, I liked the Dragonette music video (viewed it after reading your post about it). Great job! Any good tool for correcting the DPXs' timecode? Otherwise I'll have to go back to the post house; they gave me 24fps and it should be 25. Thanks again, Rodrigo
  11. Thinking of a way to reduce the grain, mainly in the blacks, which is the part that really matters to me (I don't want washed-out blacks; I want contrast). Do you think it'd be a good option to pull a luma key on the blacks, blur that region, and composite it over the original image? Any recommendations? Thanks, Rodrigo
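The luma-key-and-blur idea in this post can be sketched roughly as below. This is only an illustration of the technique, not what Color actually does; the threshold, softness and blur radius are made-up values you would tune by eye, and the box blur stands in for whatever blur node you'd really use:

```python
import numpy as np

def box_blur(channel, radius=2):
    """Separable box blur (sliding-window mean) for one channel, pure NumPy."""
    k = 2 * radius + 1
    h, w = channel.shape
    padded = np.pad(channel, radius, mode="edge")
    out = np.stack([padded[:, i:i + w] for i in range(k)]).mean(axis=0)  # horizontal pass
    out = np.stack([out[i:i + h, :] for i in range(k)]).mean(axis=0)     # vertical pass
    return out

def denoise_shadows(img, threshold=0.15, softness=0.05, radius=2):
    """Blur only the dark regions of a float RGB image in [0, 1].

    A soft luma key selects pixels below `threshold` (with a `softness`
    ramp so the matte edge doesn't show); a blurred copy is mixed in
    only where the key is on, leaving midtones and highlights untouched.
    """
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709-style luma
    matte = np.clip((threshold + softness - luma) / softness, 0.0, 1.0)[..., None]
    blurred = np.stack([box_blur(img[..., c], radius) for c in range(3)], axis=-1)
    return img * (1.0 - matte) + blurred * matte
```

A Gaussian or bilateral blur would hide grain with less smearing than the box blur used here, and a temporal (frame-averaging) approach would do better still, at the cost of motion artifacts.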
  12. Thank you so much, guys!! I did use all RX lenses, which I understand are made for the reflex Bolex and have a compensated f-stop scale (it's actually a T-stop scale), and I didn't use a zoom. So... the whole short will be finished in B&W. What are the best steps to follow? One way I'm considering:
- in Color: correct everything to get the cleanest green possible, without grain. Export ProRes.
- in FCP: edit. Export EDLs.
- in Shake: using the EDL from FCP, apply keying & backgrounds. Export DPXs.
- in Color: desaturate, try to match the B&W part, adjust contrast, etc. Export ProRes.
- in FCP: finish editing and render.
The other way would be:
- in Color: correct everything to achieve the final look (contrast mainly). Export ProRes.
- in FCP: edit. Send to Shake.
- in Shake: apply keying & backgrounds. Desaturate. Back to FCP.
- in FCP: finish editing and render.
I suppose the first option is best. Anything you can tell me to improve it? Thanks again, Rodrigo
  13. Used the ones for the Bolex (C mount), like Angenieux and Switar. I think the set was 10-16-25-75mm. Rodrigo
  14. I am using Color, and yes, I made it look better, but the grain in the blacks is too ugly for me; keying will be kind of a pain. It's strange: I used 2 light meters, one incident and one spot. The spot meter belongs to a friend who always uses it and gets good results, and the other is from my school and was not calibrated (-1/2 stop). Where should medium grey sit on the Waveform monitor in Color? 50? Thanks, Rodrigo
  15. I've seen the footage (converted to ProRes), and yes, it looks grainier. These are DPX files from an Arriscan, a "digital negative": they just load the film and scan it, with no adjustments of any kind (color correction, gain, etc.) made to the footage. The JPEGs are exported from the DPXs without corrections either. I even called them to be sure, and they confirmed it. So the exposure is OK (well, I don't see it as under- or overexposed). Thanks, Rodrigo
  16. Hi. I got my DPX scans yesterday. (Coming from this topic: http://www.cinematography.com/forum2004/in...showtopic=32035 ) I filmed 400ft of 16mm with a Bolex H16-RX camera: 200ft of Kodak 7222 and 200ft of Fuji F-400T. It was processed at the best lab in Argentina (there are two; the biggest is Cinecolor, where all the movies and commercials produced here are processed, including F. F. Coppola's latest, so it's not a bad place) and scanned at Che Revolution Post (on an Arriscan, to 2K DPXs). Here I post 2 DPXs and their JPEG versions. The color film was scanned using the double-flash option and the B&W single-flash; double flash is used to increase latitude. B&W: http://www.keewee.com.ar/R2.0182405.jpg http://rapidshare.com/files/146109529/R2.0182405.dpx.html (10MB) Color: http://www.keewee.com.ar/R3.0273072.jpg http://rapidshare.com/files/146115810/R3.0273072.dpx.html (10MB) So the problem is: I expected a clean color image and a very grainy B&W one, but that was not the result. As you can see, the color image is too grainy, mainly in the blacks (I suppose that would be called "velo de fondo" in Spanish; I believe the English term is "base fog"). The B&W is grainy too, but doesn't look bad. So... I am still learning, and I don't want to kill Mr. Fuji or Mr. Cinecolor, maybe just hate myself, but I don't think it was something I could have anticipated. I just want to know what it is, if you can think of a cause just from viewing it here, or whether it's normal and I need to do something to view it properly. If it's because of the double flash, I can go and have a single-flash copy made, but I want to be sure before bothering the people at the post house, since they did this for free. Thank you, Rodrigo
  17. I found the problem. The manual describes the DPX files as [name].[7-digit number].dpx, but my files are just [7-digit number].dpx. So I added the name to the files (using Automator, of course), and it works :D
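For anyone hitting the same problem, the Automator rename above could equally be sketched in Python. The "R2" reel prefix and the 7-digit pattern are taken from the file names in this thread; everything else here is an assumption:

```python
from pathlib import Path

def add_reel_prefix(folder, prefix):
    """Rename '<7digits>.dpx' files to '<prefix>.<7digits>.dpx',
    the [name].[number].dpx pattern the manual expects.
    Files that already carry a prefix are left alone."""
    for f in sorted(Path(folder).glob("*.dpx")):
        stem = f.stem  # e.g. '0182405'
        if stem.isdigit() and len(stem) == 7:
            f.rename(f.with_name(f"{prefix}.{stem}.dpx"))

# Usage (hypothetical path): add_reel_prefix("/scans/roll2", "R2")
```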
  18. Hi. I am a student and had the opportunity to get a free Arriscan scan of my first film short (16mm). This is my previous post: http://www.cinematography.com/forum2004/in...showtopic=32035 I am now here with my DPXs. I have 3 folders (one per roll) and each contains only DPX files. I want to import them into Color so I can make my offline master to edit in FCP. In Color (Setup, Shots tabs) the files don't seem to follow any order, and the only way to import them seems to be to drag and drop them one by one. How can I select all the files at once and bring them into Color? Thanks, Rodrigo
  19. Hi again. I'm back... We filmed the short, and I have the processed negs with me. I'm about to do the scan; it'll happen between this week and the next, and then I'll start the hard work! But some things are still not clear to me, so I'd be grateful if anyone can answer these questions :) (copied from before:)
When I open the DPX file (from projectred.net) in Photoshop or in Shake it looks OK, but when I open it in Color it shows a very contrasty image, and to make it look good I have to reduce saturation and gamma. Is this normal? Is it related to LUTs? I've used Color before, sending projects from FCP, with no problems.
I am also doing some tests with 4K DPXs from ProjectRed.com, and I can't get a correct Apple ProRes copy out of Shake. What I am doing: in Shake, set the frame size (2048x1556); drop the DPX in Shake; add a Scale node; add a Keylight node; add a FileOut node and choose the file name/folder; choose Apple ProRes in the FileOut settings; render the FileOut. The rendered file seems to have the alpha channel OK, because the sky is cleared, but not the image: it appears as a mass of noise or strange colors. If I choose Apple Intermediate Codec (I don't know what it's for) it works, but doesn't include alpha. The rendered file is at this link. It might be a problem with my codecs too... Please tell me if you can see it correctly. Thanks, Rodrigo
  20. LOL! Because it's expensive! And it's not worth it: the whole cost of the project doesn't exceed AR$2000, which is less than US$700. We are not paying actors, crew, etc.; we are all learning and trying to keep it "just a practical work for school". Also, HD is fine for me; I'm sure it will be seen by more people in YouTube's lowest quality than in a cinema, if I ever project it. -- Thanks Keith. I already knew the meaning of HQ, but I didn't understand why there were two codecs with the same chroma subsampling and the same frame size. -- I still can't find the problem. I was able to use Color to export to ProRes, but I'll need Shake anyway, so I still want to fix it. -- Another thing I can't understand: when I open the DPX file in Photoshop or in Shake it looks OK, but when I open it in Color it shows a very contrasty image, and to make it look good I have to reduce saturation and gamma. Is this normal? Is it related to LUTs? -- Thanks, Rodrigo
  21. Thank you a lot, John! Your post helped me a lot to understand things; I'll follow it almost exactly as you proposed. I am doing some "tests" with 4K DPXs from ProjectRed.com, and I can't get a correct Apple ProRes copy out of Shake. What I am doing: in Shake, set the frame size (2048x1556); drop the DPX in Shake; add a Scale node; add a Keylight node; add a FileOut node and choose the file name/folder; choose Apple ProRes in the FileOut settings; render the FileOut. The rendered file seems to have the alpha channel OK, because the sky is cleared, but not the image: it's completely noisy. If I choose Apple Intermediate Codec (I don't know what it's for) it works, but doesn't include alpha. The rendered file is here. It might be a problem with my codecs too... What is the difference between Apple ProRes 422 and Apple ProRes 422 (HQ)? Thanks, Rodrigo
  22. Thank you, guys! Any cheaper way of working with DPXs? I don't want to buy GlueTools if I'm only using it for one project (I don't expect to do many DPX projects soon...). Thanks, Rodrigo
  23. Been thinking more about this... How do I actually do it, step by step? I don't want to have the DPXs here and die of anxiety because I don't know what to do. So let's suppose I have the DPXs now. I "down-convert" them to ProRes HQ. How? What software should I use? Then do I send the ProRes edit to Color? And how would I do it if I want to use the DPXs again (grade directly on the DPXs rather than on ProRes)? How is it done in the "real" market: do you grade the converted video files, or go back to the DPXs? Thanks, Rodrigo
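As one possible answer to the down-convert question, here is a hypothetical sketch of driving ffmpeg to turn a DPX sequence into a ProRes movie for offline editing. The paths, the %07d numbering, the 25 fps default and ffmpeg's `prores_ks` encoder are all assumptions (and an anachronism for the Final Cut Studio 2 era, where Compressor or Color's render queue would be the usual route):

```python
import subprocess

def dpx_to_prores_cmd(pattern, out_movie, fps=25):
    """Build (but do not run) an ffmpeg argument list that reads a
    numbered DPX sequence and writes a ProRes 422 HQ QuickTime movie."""
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", pattern,          # e.g. "R2.%07d.dpx"
        "-c:v", "prores_ks",
        "-profile:v", "3",      # profile 3 = ProRes 422 HQ
        out_movie,
    ]

cmd = dpx_to_prores_cmd("R2.%07d.dpx", "offline.mov")
# To actually run it (requires ffmpeg installed): subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

For the grade itself you would then conform the edit back to the DPXs: the offline ProRes is only for cutting, and the EDL points the grading app at the original frames.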
  24. No. I said that because you recommended I have "the biggest drive available for storage", and I know I'll need a big disk only for the first stage. The school gives us 200ft and we should use only that. One reason is that in earlier years the "200ft short" didn't have a time limit, so most of the shorts were mixed with video or with more film, and most of them were boring. This year they changed many things at the school (the time limit among them). Our professor knows we are doing all this, but he asked us to please keep within the limits. Thanks again, Rodrigo
  25. Thanks Chris! This lab, Che Revolution, is in Argentina. After they offered to do the scan gratis, another cinematographer told me the lab sometimes does that for students. It's great! :D I still have (more) doubts... You don't mention Motion for the keying because you don't know it, or because you lump it in with Final Cut (Studio)? I'll buy a 500GB SATA II drive for this. The film can't run over four and a half minutes (we are supposed to use 200ft of B&W), so the first thing I'll do is have the director choose the shots. Isn't it better to do the keying on the DPXs directly? Or is it the same (considering that I intend to end up with an HD 1920x1080 master)? Thanks again, Rodrigo