
Display Calibration - Newbie.



Hi. I am starting this topic because I thought this might be a better forum to ask about these things.

I already started a topic about this on Creative Cow, but it led to disaster. Maybe because the people who answered were very professional colorists, or people whose experience left them with little patience for newbie questions and setups.

They did answer some of my questions, but there is still one thing I would like to know.

 

(Some introduction:) I don't know exactly what I am. Student? Newbie? Director of photography? Electrician? Future colorist? Second camera assistant?

I was DP on many short films (both video and 16mm film). Three of my film works were scanned on an Arriscan to 2K DPX. I grade them at home. Recently I did the "remaster" of a short film (I don't know what you'd call it: the film was already finished, but we rescanned and re-edited it to do the grading again, so now there is a new version with a better image).

Most of that was black-and-white film, with only some scenes in color for keying. And everything is still in post.

--

First, I want to suggest a simple thing: I will establish an environment, and then ask questions about how things would work in that situation. Please don't answer rudely if you don't intend to help with my question, or if you disagree with the environment I've established.

Maybe this example will clarify things:

-"If it rains, should I use an umbrella or a raincoat?"

-"I have a car."

-"OK, but... in case it rains... which is better?"

-"No rain, please. You should use a car or stay at home. Lightning could strike you."

(I think that makes the point clear.)

--

I am about to grade a friend's short, shot in 35mm color film, 2K scan.

My computer is a "Hackintosh", Q6600, 7300GT 256MB, 4GB RAM, 2TB (HDs), a Samsung 2032NW TN-panel LCD. Wacom Bamboo Fun. One of these LEDs as a backlight.

I use Final Cut Studio 2 (Color for grading). I also have access to Assimilate Scratch 3.5, but I feel more comfortable in Color for now. I think Scratch only pays off if you have a control surface and multiple displays.

I can't currently afford better equipment. My next investment will be an S-IPS panel monitor.

 

I balance my display using Apple's integrated (software) display calibrator. I also tried SuperCal but found Apple's tool better. Then I balance the LCD's RGB controls to match the background light, then run the software calibration again. My LCD is not producing banding, and a grayscale ramp looks like the same neutral grey from end to end.
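By the way, a quick way I check for banding after a software calibration pass is to render a full 0-255 grayscale ramp and look for visible stripes. A minimal sketch in plain Python, writing a binary PGM file (the filename and dimensions are arbitrary choices of mine):

```python
# Write a 256-step horizontal grayscale ramp as a binary PGM image.
# Open ramp.pgm in any image viewer: visible vertical stripes mean the
# calibration has collapsed adjacent gray levels (banding).

WIDTH, HEIGHT = 1024, 128  # 4 pixels per gray level

def gray_at(x):
    """Map a column index to its gray level, 0..255 left to right."""
    return (x * 256) // WIDTH

# One row of the ramp, repeated for every scanline.
rows = bytes(gray_at(x) for x in range(WIDTH)) * HEIGHT

with open("ramp.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
    f.write(rows)

# Sanity check: all 256 levels are present in the ramp.
levels = sorted(set(gray_at(x) for x in range(WIDTH)))
print(len(levels))  # 256
```

If the file itself contains all 256 levels but you still see stripes on screen, the banding is being introduced by the calibration LUT or the panel, not the image.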

 

I don't want to work magic. I know I am grading to finish on DVD or YouTube, Vimeo, etc., and that every viewer's TV or display will not meet the SMPTE standards. But I still want to learn.

I have studied color and took a course on color theory. I understand the basics.

 

So, here is what I am saying.

First, compare two situations:

- Slightly green backlight, slightly green display, both matching. (So slight that the eye adapts in 15 seconds.)

- Daylight backlight, daylight display, both matching.

I'd say that since every human eye balances colors, both situations should be the same, or nearly so (nothing significantly different). I say this because my last backlight was greenish; I corrected the levels of the LCD to match it, then tried correcting a grey card using blacks and whites, and I got good results (checking with the vectorscope).

Can someone contribute something about this? Pros? Cons? (Apart from "filter the light and correct the screen" or "it's not in the standards, so you should never correct there.") I think whoever can answer this must be someone with experience, not of reading SMPTE publications, but of the real thing.

That is:

- Does "vision quality" drop when the eye is not exactly adapted to D65? Why?

- A green alien shown on this greenish LCD with the greenish backlight will look greener than normal. But as the eye adapts to the environment, the alien will look the same level of green as it would in a perfect SMPTE environment. Wouldn't it?

 

--

Shouldn't we grade with what the viewer will actually see in mind? If their eye will adapt to our slightly blue scene, then why use the same grade across the whole scene? I know, I know: "the client will not trust the colorist." But again, I am talking about my situation, or a student's situation.

I think of these ideas:

- Always putting a white reference in shot, like a white window or something.

- Putting a whiter shot in the middle of same-color scenes so the eye "refreshes".

- Increasing the color as time goes by.

- Putting a medium grey frame around the image and a grey slug every 30 seconds :P

If mom's TV's image is reddish, her eye will get used to it, she'll effectively be at the standard, and her brain will not keep saying "how red!"

I am obviously talking about the normal color shifts we see on consumer TVs, not about TVs with technical problems.

 

I really hope to find someone who can answer this with good manners; I don't want to argue.

 

Thanks :)

Rodrigo


Hey newbie DP!!

Good luck with your question!!

I have just read it, and I'm trying to figure it out...

By the way: have you asked yet why a bad magazine on a BL IV could fog only a couple of feet at the start of a take? I was going to ask that...

Again, good luck

See you!!! ;)



I can't answer the questions because I'm not a colorist or someone who has done his own grading, but the general answer is that there are standards for monitor calibration; what you calibrate for depends on the intended release format. Usually a monitor would be calibrated to TV broadcast standards, whereas a digital projector might be set up for digital cinema display, or, for film D.I. work, calibrated to match a print.

 

In your case, getting the monitor close to TV broadcast standards would probably be the right approach. How to do that, other than setting it to SMPTE color bars, I don't know. And ideally you'd want a waveform monitor and vectorscope to make sure you were at the right levels for black, white, color saturation, etc.

 

As for how it looks on YouTube, etc.: some people use a cheaper secondary monitor as a reference for a "typical TV display," but the truth is that there is a lot of variation out there -- I believe Apple monitors use a different gamma than PC monitors, for example.


There are devices like THESE which have calibration profiles for PAL, NTSC and HD color spaces/gammas. I don't know how effective they are.

 

Your approach seems pretty sound, at least as good as any other I've heard of. As long as you remember that LCD monitors never reproduce blacks properly, you should be ok.



 

There is a lot wrong with your approach, and, I think, with the attitude you took to Creative Cow.

 

A simple Wikipedia search would have told you most of what you need to know, really.

 

You've started by assuming that daylight and a green backlight will have the same intensity, which is patently not true. Apart from coming from different sources (sun vs. light bulb), they are different mixes of wavelengths with different energy levels, so your ability to judge contrast/luminosity will be compromised.

[SPD.png: spectral power distributions of different light sources]

 

 

 

Likewise, your eyes use different cones when viewing different wavelengths, so by viewing in a non-neutral environment you are shifting the ranges being used by the eye.

Choosing a different colour environment changes where on this graph your white point occurs (the graph depicts the wavelength response of the three cone systems your eyes have):

http://upload.wikimedia.org/wikipedia/en/1...ones_SMJ2_E.svg

 

 

Read up on these:

http://en.wikipedia.org/wiki/Purkinje_effect

http://en.wikipedia.org/wiki/Kruithof_curve

 

 

 

If you don't calibrate both your computer and monitor properly, how will you know what the result is? E.g., you calibrate your computer (in hardware) to 2000 K but then adjust your monitor to give you a nice white point in your greenish room; your nicely white CC will in fact be calibrated to 2000 K. Your implicit assumption is that your computer has a natural white point; you also assume that your cheap monitor gives accurate temperature results.

 

You also haven't considered chromatic adaptation. You have made the assumption that "getting used to" = "identifying colour as white."

An object may be viewed under various conditions. For example, it may be illuminated by sunlight, the light of a fire, or a harsh electric light. In all of these situations, human vision perceives that the object has the same color: an apple always appears red, whether viewed at night or during the day.

Another point: without balancing in a neutral environment, you can quite easily be fooled by whatever object you are looking at; you will be less sure of the colour.

 

 

You also haven't thought of colour space compression. Let's take this example to the extreme: you're in a room with a colour temperature of 15,000 K and you adjust your monitor to reach it, while your computer is nicely balanced. So at 15,000 K you have a perfect white point. Only there is a limit to what human eyes can perceive (see the graph above), and by shifting that white point you have pushed the colour space closer to the limit of what the eye can perceive at one end of the spectrum and elongated the apparent colour space at the other. Thus your ability to accurately identify the colour space is compromised.
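To put rough numbers on that shift, here is a simplified von Kries-style sketch applied directly in XYZ. Be warned that this is a crude illustration: proper adaptation is done in an LMS cone space (e.g. via a Bradford matrix), and the 15,000 K white point here is my own approximation from the CIE daylight locus, so treat both as assumptions.

```python
# Simplified von Kries-style chromatic adaptation: scale each channel by the
# ratio of destination white to source white. Working directly in XYZ is a
# crude approximation, but it shows how asymmetric the scaling becomes.

D65_WHITE = (0.9505, 1.0000, 1.0889)    # standard D65 white point in XYZ
BLUE_WHITE = (0.967, 1.000, 1.733)      # approx. 15,000 K daylight white (assumed)

def adapt(xyz, src=D65_WHITE, dst=BLUE_WHITE):
    """Scale an XYZ colour from the source white to the destination white."""
    return tuple(c * d / s for c, d, s in zip(xyz, dst, src))

# Adapting pure white exposes the per-channel scale factors themselves.
scale = adapt((1.0, 1.0, 1.0))
print(scale)
# The Z (blue) axis is stretched by roughly 1.6x while X barely moves:
# blue values get pushed toward the display's physical limit, which is the
# one-sided "compression" being described above.
```

The asymmetry of those three scale factors is the whole argument: the further the working white point strays from the standard, the more lopsided the usable colour space becomes.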

 

The thing is, you've probably only done this in a mildly different situation, so yeah, it doesn't make a lot of difference, and what you're describing is no different from white-balancing a camera. I guess it really just depends on your level of perfectionism/purism: if you are happy to work with a reasonable estimate of the result, rather than a direct image you can confirm as the result, then there's no problem.

If, on the other hand, you are trying to say that neutral environments are irrelevant as long as you balance the monitor to the light source, then you are wrong.


Thanks for your answers David and Stuart.

I already know about color calibration devices such as Datacolor's Spyder3 or X-Rite's Eye-One. I (now) also know the best ways of calibrating an LCD using software and the eye. I also understand that there are standards.

 

As I already said, my question is not about things written in standards or books. I am trying to give it another perspective.

I feel my questions go a bit deeper into the matter. They are more related to this question:

Shouldn't we grade with what the viewer will actually see in mind? If their eye will adapt to our slightly blue scene, then why use the same grade across the whole scene? (...) If mom's TV's image is reddish, her eye will get used to it, she'll effectively be at the standard, and her brain will not keep saying "how red!"

I am obviously talking about the normal color shifts we see on consumer TVs, not about TVs with technical problems.

 

Or to this one:

- Does "vision quality" drop when the eye is not exactly adapted to D65? Why?

- A green alien shown on this greenish LCD with the greenish backlight will look greener than normal. But as the eye adapts to the environment, the alien will look the same level of green as it would in a perfect SMPTE environment. Wouldn't it?

 

 

Thanks again,

Rodrigo


As I already said, my question is not about things written in standards or books. I am trying to give it another perspective.

I feel my questions go a bit deeper into the matter. They are more related to this question:

 

It seems, from what I've read was said on the other forum, that you're here to tell people you're right, not to learn or enter an actual discussion.

 

 

You're starting from an incorrect theoretical standpoint and then trying to draw conclusions from it.

The "standards" you are trying to ignore exist precisely to prevent you from making the bad assumptions you have made.

 

If mom's TV's image is reddish, her eye will get used to it, she'll effectively be at the standard, and her brain will not keep saying "how red!"

 

So grade for mom's TV; but of course Dad's TV is neutral, and because you graded for a red TV your grade will no longer work. Worse, Bob's TV is bluish, so your grade will look even more wrong on his set.

If you grade for a specific broken, non-standard system, then when that system is fixed your work will no longer display correctly -- your work has no longevity.

 

 

 

 

All your questions are answered in my post above; look at the spectral power distribution of the D65 standard vs. different light sources. This will determine your ability to maintain a consistent grade, since it governs how consistent a spread of wavelengths is being emitted by your room's light source, i.e. how well you can judge the composition of wavelengths in the image being corrected. (If you look at the peak power, it more or less corresponds to the weakest response from your eyes, as in this link: http://upload.wikimedia.org/wikipedia/en/1...ones_SMJ2_E.svg )

[792px-SPD_D65.png: spectral power distribution of the D65 illuminant]

[SPD.png: spectral power distributions of different light sources]

 

 

You've also talked about what seems to be a dynamic colour grade, i.e. bringing in more colour over time.

Again, "getting used to" a colour does not mean "seeing it as white."

The real questions to ask are: does this match the narrative? Should the audience be subjected to more colour if the script calls for less? Obviously not. These decisions should be up to the director/cinematographer, or more holistically the story, not the concern that the audience won't "see" the colour.

 

 

 

I'm going to leave this discussion now, lest I get argumentative :blink:


I liked your approach, Russell. This is closer to where I want to get.

I didn't talk about light intensities. I use a display intensity that is comfortable to my eye in a lit room, and I am close to the intensity standards used in post-production houses for color correction.

 

About the wavelengths and what I have studied about them: I understand (mainly from testing, on film and digital) that a low-CRI fluorescent tube (even if it gives white light) only affects the intensity of the colors (it'd be like talking about the tube's "color reproduction gamut"). High-CRI tubes have more phosphors, which reproduce more wavelengths, and this is why they get closer to an incandescent source in terms of color reproduction. Some people make the big mistake of thinking that a 100% CRI light (incandescent) gives pure white light, while in fact a 3200 K tungsten lamp with a Quarter Plus Green also gives 100% CRI but not white light. Even tubes labelled "color corrected" are really just "high CRI," and can sometimes be greenish or magenta.

But white (unlike colors) does look white under any of those (corrected) light sources, and medium grey looks medium grey; they are not affected by the CRI. So I think I shouldn't care much about the backlight source.

The LCD's light panel is -as in most consumer LCDs- a CCFL. I don't know its CRI, but I know it should cover around 70% of the NTSC color gamut.

 

I'd use this graphic instead of the one you posted, for talking about where my white point is.

 

Likewise, your eyes use different cones when viewing different wavelengths, so by viewing in a non-neutral environment you are shifting the ranges being used by the eye.

Is there an exact white point for the eye? I'm talking about reality, not standards. Would an "eye signal measurer" (imagining one exists) be able to tell the difference between an eye receiving pure D65 light and one receiving slightly green light, based on the "quality of the green signal," and report "it is not receiving pure D65 light"? And isn't staring at the screen for long periods much worse than adapting your eye to a little green?

I am talking about an environment with a "green" near, let's say, x=0.30, y=0.35, or maybe a bit more (looking at the CIE diagram and estimating how far this supposed green might be from D65). (Also, looking at the CIE diagram and tracing a circle of similar hue shifts around D65, this difference would be around 6000 K if we were talking about a blue shift instead of green.)
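For concreteness, here is a quick sketch of how far that chromaticity sits from D65 (x=0.3127, y=0.3290), plus a correlated colour temperature estimate using McCamy's approximation. The "greenish" coordinates are the ones I guessed above, so take them as an assumption:

```python
import math

# Chromaticity coordinates (CIE 1931 xy)
D65 = (0.3127, 0.3290)
GREENISH = (0.30, 0.35)   # the "slightly green" surround discussed above

# Straight-line distance in the xy diagram. (Note: xy is not perceptually
# uniform; white-point error is better judged in u'v' or as Duv, but the
# raw xy distance gives a rough feel.)
dx = GREENISH[0] - D65[0]
dy = GREENISH[1] - D65[1]
dist = math.hypot(dx, dy)
print(round(dist, 4))  # about 0.0245

# McCamy's approximation for correlated colour temperature from xy.
def mccamy_cct(x, y):
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(round(mccamy_cct(*D65)))  # about 6505 K for D65
```

A shift of ~0.025 in xy is many times larger than a just-noticeable chromaticity step, which is presumably why the colorists object even though the eye adapts to it.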

 

----

To establish the green level, I opened a gray card in Color, divided it in two, put a black bar in the middle, and graded one half to look as green as I am talking about. The HSL readout showed a saturation of 0.03, while pure green is 1.00, and the saturation change needed to take a scene from 3200 K to 5500 K might be around 0.30.

----
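That 0.03 figure is easy to sanity-check: a gray nudged slightly toward green has almost no HSL saturation. A sketch using Python's standard colorsys module (the exact +0.03 nudge on the green channel is my own choice, picked to mirror the readout above):

```python
import colorsys

# A mid-gray pushed slightly toward green: +0.03 on the green channel.
r, g, b = 0.50, 0.53, 0.50

# colorsys uses HLS ordering: (hue, lightness, saturation).
h, l, s = colorsys.rgb_to_hls(r, g, b)

print(round(s, 3))     # ~0.031, in line with the 0.03 saturation readout
print(round(h * 360))  # hue lands at 120 degrees, i.e. green
```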

 

I had already read about the Purkinje effect, but that is far too extreme; I am not using the moon to light the wall behind my monitor :P . The Kruithof curve is also somewhat beyond the extremes of what I am talking about.

Anyway, I am looking for an answer of this type, but concerning the small color shift from D65 to a very nearby point.

Is it just over "SMPTE and standards" that everyone gets so worked up? Why do they get so upset when I ask them not to think about SMPTE, when I say I won't be following any SMPTE standard and just want to stay inside my eye's limits? (I think the answer is: because they've never taken a step outside of SMPTE.)

 

If you don't calibrate both your computer and monitor properly, how will you know what the result is? E.g., you calibrate your computer (in hardware) to 2000 K but then adjust your monitor to give you a nice white point in your greenish room; your nicely white CC will in fact be calibrated to 2000 K. Your implicit assumption is that your computer has a natural white point; you also assume that your cheap monitor gives accurate temperature results.

I wouldn't consider a situation like that, because I know software calibration reduces the color bit depth, which can produce banding. Also, calibrating to 2000 K might make deep blues look flat (I would be going beyond the monitor's reproduction limits, I suppose).

My actual setup is so near D65 that it doesn't produce banding. (Actually, the "supposed greenish setup" at x=0.30, y=0.35 that I keep talking about is not my current one; I took it as an example from my previous setup, in which I was also able to calibrate grey cards to neutral gray, and which didn't produce banding.)

 

"getting used to" = "identifying colour as white"

I don't agree. "Identifying" suggests a brain, or psychological, process. When cones "get tired" (I understand this is a chemical process) they capture less of that color; the brain doesn't notice the difference.

A green-capturing cone gets too much green and tires, so it sends less green and the brain perceives white. But when getting used to the slight green of my supposed situation, does the eye "send less information"? Does it "modify the eye's response curve in a noticeable way"? Something else bad? Or is it the same as with the color levels we call white?

 

another point is, that without balancing in a neutral environment, you can quite easily be fooled by whatever object you are looking at, you will be less sure of the colour.

That only happens in extreme situations. I understand what you are talking about, but I am talking about things near D65.

 

You also haven't thought of colour space compression. Let's take this example to the extreme: you're in a room with a colour temperature of 15,000 K and you adjust your monitor to reach it, while your computer is nicely balanced. So at 15,000 K you have a perfect white point. Only there is a limit to what human eyes can perceive (see the graph above), and by shifting that white point you have pushed the colour space closer to the limit of what the eye can perceive at one end of the spectrum and elongated the apparent colour space at the other. Thus your ability to accurately identify the colour space is compromised.

Bingo! This is where I was going. What is the acceptable limit of color shift before we lose information? (Before we even get near the eye's colour space limit.) I suppose not every perfectly balanced color correction room has the EXACT same RGB levels. SMPTE may have its limits, but what is the real, acceptable limit of the eye?

 

And this might be the answer:

The thing is, you've probably only done this in a mildly different situation, so yeah, it doesn't make a lot of difference, and what you're describing is no different from white-balancing a camera. I guess it really just depends on your level of perfectionism/purism: if you are happy to work with a reasonable estimate of the result, rather than a direct image you can confirm as the result, then there's no problem.

 

No. I am not trying to change the world. I would love to have lots of money, or to be a sought-after colorist and work in perfectly controlled environments.

But, again, I am just learning.

 

Thanks, thanks, thanks. I hope I haven't offended you with the things I think.

Rodrigo Silvestri


Is there an exact white point for the eye? I'm talking about reality, not standards. Would an "eye signal measurer" (imagining one exists) be able to tell the difference between an eye receiving pure D65 light and one receiving slightly green light, based on the "quality of the green signal," and report "it is not receiving pure D65 light"? And isn't staring at the screen for long periods much worse than adapting your eye to a little green?

I am talking about an environment with a "green" near, let's say, x=0.30, y=0.35, or maybe a bit more (looking at the CIE diagram and estimating how far this supposed green might be from D65). (Also, looking at the CIE diagram and tracing a circle of similar hue shifts around D65, this difference would be around 6000 K if we were talking about a blue shift instead of green.)

 

OK, I'll bite again.

 

The D65 standard comes from the averaged spectrum of midday sun. D65 isn't "pure" light (pure light would be a flat, constant distribution of all wavelengths); it's the average spectrum that most closely mimics midday daylight. However, you've answered your own question by citing the two effects I linked: there is no exact white point for the eye; it changes according to the intensity of the available light. You do, however, need a common point to work from, and D65 has been chosen (for reasons not only related to film). The 6500 K holy grail is actually 6504 K; it's determined by theoretical physics relating to black-body radiation, all of which is artistically irrelevant. Having said that, by choosing to stray from the standard, you are choosing to risk imperfections in your work.
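A footnote on where 6504 comes from: the D65 spec was written against an older measured value of Planck's second radiation constant c2, and when c2 was revised, the nominal 6500 K rescaled by the ratio of the two constants. A quick sketch of that arithmetic (the historical c2 values are as I recall them, so treat them as an assumption):

```python
# Planck's second radiation constant c2 = h*c/k, in metre-kelvins.
C2_OLD = 1.4380e-2   # value in use when the D-series illuminants were defined
C2_NEW = 1.4388e-2   # revised value

# Temperatures on the blackbody locus scale with c2, so nominal 6500 K becomes:
cct = 6500 * C2_NEW / C2_OLD
print(round(cct, 1))  # ~6503.6 K, conventionally quoted as 6504 K
```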

 

Again, it all depends on your need/want for perfectionism. My films get projected in theaters that don't calibrate their projectors, so each theater has a substantially different setup, i.e. there's little point in me losing sleep over a 500 K difference. The larger the budget and the more controlled the environment, the more I'll care!

