Calculating accurate values for the correct illumination of surfaces is possibly the most difficult task in computer graphics. In fact, the modeling of light reflection is so difficult to describe, and so computationally complex, that a great effort has gone into finding fast approximation methods to this problem.

Why is Shading a Hard Problem?

Consider the room in which you sit reading this document. This is a very complex scene, which is made up of several objects. Some of these objects may be considered light sources (or emitters of light energy) - perhaps your study lamp, or the fluorescent lights on the ceiling, or the window where the sun shines in, or the crack under the door if you are locked in a closet. In addition, you probably have many other objects in your room that are not usually considered to be light sources, but each of them reflects light - tables, chairs, your glasses, etc.

So, think about it! Think of rays of light or energy coming out of the light sources, bouncing around the room and finally reaching your eye. Your eye, being the remarkable instrument that it is, receives this light energy and passes it to your vision system which transmits the colors and illumination information in the scene to your brain.

Now, examine each thing a little more closely. Note that a light source, your desk lamp for example, is emitting light from a filament inside a diffuser. Note that the table, even if it is darkly colored, reflects light in different ways (some parts of the table are brighter than other parts). Note that your hand is brighter when it is closer to the light source. We obviously have some complex things going on here.

To accurately construct a picture of this room via computer graphics, we have to simulate this illumination process and be able to calculate the shading at each point of each surface in our scene. Let's see if we can figure out how to do this.

One direct way to look at this problem is to consider an arbitrary point $x$ in space and a point $x'$ on a surface. Let's try to calculate, in a straightforward manner, the amount of illumination that arrives at $x$ from $x'$. This quantity will be called $I(x, x')$ and is illustrated in the following figure.

This illumination is a combination of many things. In particular,

1. The surface on which $x'$ lies could be emitting light energy - *i.e.*, it is a light source - in which case a certain intensity $\varepsilon(x, x')$ from this source is transported from $x'$ to $x$.

2. There may be other light sources in the scene, and other reflecting surfaces. The illumination from these objects arriving at $x'$ is scattered by the roughness of the surface. A certain amount $S(x, x')$ is scattered toward $x$.

3. There can be other surfaces or atmospheric effects that block a portion of the illumination from reaching $x$. This factor $g(x, x')$ (usually called the geometry factor) reduces the amount of illumination arriving at $x$, and depends on the positions of other objects in the scene.

This doesn't seem too bad at first glance, but we have already made several simplifications - or actually, we have ignored several things. For example, we have not considered the fact that the medium between the two points could also scatter light (e.g., fog), and we have not considered either wavelength (for color) or phase. We have also been somewhat lax about getting our units correct (this is physics, after all). But in any case, this initial model of illumination does not seem too bad.

First glances can sometimes be deceiving, and unfortunately this is the case here. Examining the scattered component further, we see that $S(x, x')$ can be calculated by considering all other sources of illumination in the scene. Consider any other point $x''$ in the scene and let's examine how light energy coming in from $x''$ could be scattered to $x$.

Illumination from $x''$, arriving at $x'$, is scattered by the roughness of the surface at $x'$, and a fraction of the result is directed toward $x$. We will write this contribution as

$$\rho(x, x', x'')\, I(x', x'')$$

where $\rho$ represents the scattering properties of the surface.

If we consider all the possible points $x''$ in the scene that transport light energy to $x'$, then $S(x, x')$ can be calculated by summing up the scattering contributions from all possible points $x''$ in the scene. Thus,

$$S(x, x') = \int_{S} \rho(x, x', x'')\, I(x', x'')\, dx''$$

where, of course, the sum is an integral in this case. Thus, the illumination equation takes the form

$$I(x, x') = g(x, x') \left[ \varepsilon(x, x') + \int_{S} \rho(x, x', x'')\, I(x', x'')\, dx'' \right]$$

(We note that all of our illustrations are two-dimensional, but in reality the picture is three-dimensional. The possible directions from which illumination could reach the point $x'$ lie on a hemisphere above the surface, with $x'$ at its center.)

Now this looks a little more difficult - but only one integral sign is not too bad. In looking at it a little further, however, we note that to calculate $I(x, x')$ we must know how to calculate $I(x', x'')$ __for every point $x''$__ in the scene. But, to calculate $I(x', x'')$ we must construct another integral of this type, which depends on $I(x'', x)$, for example (since $x$ is a point in the scene). However, $I(x'', x)$ depends on $I(x, x')$, which is *exactly the value that we are trying to calculate!*

This interreflection, as is shown in the figure above, causes the quantity that we wish to calculate to appear on both sides of the equation (once under the integral sign) - and thus makes the equation very difficult to solve. But, if you think about it, this is exactly what is happening in the room around you - this interreflection is everywhere.

So it is a difficult task to try to solve such an equation. This hasn't, however, kept the computer graphics community from trying to solve it - which in some circles is called the ``rendering equation''. But even though the speed of computer systems is increasing dramatically, the equation is still too difficult to solve without making substantial assumptions about the environment and the reflection properties of the surfaces of the scene. The biggest problem in computer graphics is that *we have to do this calculation at least once for each pixel that makes up the picture* - and with potentially millions (billions?) of such calculations, our illumination model must be able to be calculated very quickly. Of course, we also require that the picture still look realistic.

Now perhaps the reader can see why this problem is difficult, and why we must turn to simplifications of, or approximations to, the algorithm to be able to partially solve the illumination problem. The above model is conceptually straightforward, but it is also quite difficult to solve - even though we didn't consider many quantities (phase, wavelength, scattering of light by the environment, etc.)

If we wish to render complex scenes quickly and wish to be as realistic as possible - and we do - then we must address simplifications to, or compromises in, the illumination algorithm. But we must also be careful that even though we are creating an illumination algorithm that may be easy to calculate, we do not compromise the quality of the picture.

Thus, we must make some simplifications to either the illumination equation or the environment, so that the illumination equation simplifies to the point where we can calculate the result in a reasonable amount of time. This can be (and has been) done in many ways - and each leads to a different illumination algorithm, or allows the inclusion of some different features in the illumination algorithm.

Many compromises or simplifications in the environment can be posed. Those most frequently used in computer graphics are based upon a single goal: we wish to be able to calculate the illumination equation quickly, which usually implies that the integral must be replaced by a sum, and the geometry factor must be made trivial to evaluate. These simplifications can be described as follows:

1. __Uniform Media__ - We assume that our medium is such that no scattering occurs as light travels between objects. The scattering of light is virtually impossible to calculate directly and can only be simulated at a significant cost. The following figure illustrates the problems that are caused by scattering. As light enters the medium, it is scattered, frequently many times. Some of this energy will still be scattered toward the light source, but it is extremely difficult to tell through geometric means how much of the energy that leaves $x'$ actually reaches $x$.

   This simplification implies that our light rays travel directly between points, and the only factors that reduce the light energy are the inverse square law and the transparency of the objects that the light encounters.

2. __Opaque Objects__ - We assume that our objects are opaque, that is, light is neither transmitted through the surface nor refracted through the surface. Refraction, like scattering, is extremely difficult to calculate. This simplification implies that, if an object intercepts the light being transported from $x'$ to $x$, then no light energy reaches the point $x$. This, together with the first simplification, implies that the geometry factor $g(x, x')$ is either zero or one.

3. __Elimination of Interreflection__ - We assume that interreflection takes place in the scene, but that its result is *constant* throughout the scene. In fact, we assume that *the only reflected light that directly reaches the point $x$ from $x'$ is light from the light sources of the scene reflected off the surface*.

4. __Point Light Sources__ - Light sources are points, and there are only a finite number of them. This simplifies the task enormously and, for making simple pictures, is not a significant change. It allows the replacement of the integral sign by a simple summation over the light sources.

5. __Color__ - Color can be accurately represented by utilizing three samples from the continuous color spectrum: red, green and blue.

These appear to be significant compromises, and in some sense they are. They imply that we cannot accurately render fog, or use any technique that requires the scattering of light, and that we cannot accurately model transmission or refraction of light. However, they simplify our equation: the geometry factor is either zero (i.e., if something is in the way, then none of the light energy reaches $x$) or one, our integrals become sums (as we only need to consider reflection from a finite set of point light sources), and we do not need to consider reflection from all surfaces. We only need to calculate the illumination for the three primary colors, and find the resulting color from these results.
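The simplified equation can be sketched in code. The following is a minimal illustration - all names here (`illumination`, `visible`, `brdf`, `Light`) are hypothetical, not from these notes - of how the integral collapses to a sum over point light sources, with the geometry factor reduced to zero or one:

```python
# A sketch of the simplified illumination computation: the integral over all
# points in the scene is replaced by a sum over a finite set of point light
# sources, and the geometry factor g is either 0 (blocked) or 1 (visible).

def illumination(point, normal, lights, visible, brdf):
    """Sum the contribution of each point light source at a surface point.

    visible(point, light) returns 1.0 if nothing blocks the light, else 0.0
    (the simplified geometry factor).  brdf(point, normal, light) returns
    the fraction of the light's intensity scattered toward the viewer.
    """
    total = 0.0
    for light in lights:
        g = visible(point, light)          # geometry factor: 0 or 1
        if g > 0.0:
            total += g * brdf(point, normal, light) * light.intensity
    return total
```

Everything difficult in the original equation has been pushed into `visible` (a shadow test against opaque objects) and `brdf` (the reflection model developed below).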

In reality, the computer graphics community has made inroads in all of these compromises: Interreflection is treated in the radiosity algorithms, single scattering fog models are available, surface light sources and multiple sample color models have been tried. Ray tracing methods address simple refraction. But these are still considered special cases.

The basic illumination equation utilized in computer graphics is due to Phong Bui-Tuong, and consists of three components:

- A component representing the background (or *ambient*) light produced by scattering from all objects in the scene. This component is usually represented by a constant, for simplicity.

- A component representing the light scattered from the surface by the light source. This is normally represented by Lambert's law for perfectly diffusing surfaces - i.e., the photometric intensity at a point in any direction varies as the cosine of the angle between a vector from the point to the light source and the normal vector of the surface at the point. This is commonly called the *diffuse reflection* from the surface.

- A component representing the light that is reflected directly back to the viewer from the surface. This is commonly called *specular reflection*, and is maximized when the vector to the light source, reflected about the surface normal, lines up with the vector to the camera position.

The total illumination is then calculated as

$$I = I_{ambient} + I_{diffuse} + I_{specular}$$

The difficulties with accurately calculating interreflection between objects cause a problem, and the solutions in the computer graphics community largely avoid it by approximating this interreflection by a constant. That is, *we assume that there is a constant amount of background light that illuminates each surface.*

This factor, commonly called the *ambient contribution*, can be modeled simply by specifying a constant for each of the red, green and blue spectra,

$$I_{ambient} = ( i_{a_r}, i_{a_g}, i_{a_b} )$$

A Lambertian surface (commonly called a *perfectly diffusing surface*) has the property (commonly called *Lambert's Law*) that the amount of light reflected from a small area toward the viewer is directly proportional to the cosine of the angle between the direction to the viewer and the normal vector.

If we have a small beam of light of width $w$, being reflected in the direction $\Theta$ (measured from the normal), and we have an intensity $I$ per unit area on the small surface piece, then for a Lambertian reflector we can calculate the total intensity of light that will be transported in the direction $\Theta$ by multiplying the intensity transported per unit surface area ($I \cos \Theta$, from Lambert's law) times the surface area of the small piece of surface that gives us a beam of width $w$ (namely $w / \cos \Theta$). Thus the total intensity is

$$I \cos \Theta \cdot \frac{w}{\cos \Theta} \; = \; I w$$

which is independent of the viewing direction $\Theta$.

This is a very convenient situation, and it means that we only need to calculate the intensity of light reaching a point on the surface from the light source; then, no matter where the point is viewed from, the intensity reflected is constant.

We can calculate the intensity of light reaching the surface by considering the following figure

We note that as the angle $\Theta$ between the direction to the light source $L$ and the surface normal $N$ increases, the cross-section of the beam intercepted by a unit area of the surface decreases - having a maximum when $\Theta = 0$, and zero when $\Theta = \pi / 2$. By a similar argument to the calculations above, we can conclude that if the light source has intensity $I_l$, then the intensity falling on the surface must be

$$I_l \cos \Theta \; = \; I_l \, ( N \cdot L )$$

(assuming $N$ and $L$ are unit vectors).

A way to look at this contribution to the illumination equation is to note that the contribution along a certain direction is the length of the chord of a circle of radius $I_l / 2$ (tangent to the surface at the point, with its center along the normal direction), where the chord is oriented from the origin in the direction of the light source.

You can see from this that the full intensity $I_l$ is achieved only if $N$ and $L$ are identical, and that zero intensity is given when $N$ and $L$ are perpendicular. Thus, it is the position of the light source that determines the contribution for the diffuse reflection.

When the surface is viewed, we obtain the same value for the illumination, regardless of how we view the point.
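As a small sketch of the diffuse term (the function names and tuple-based vector representation here are illustrative choices, not from these notes), Lambert's law reduces to a clamped dot product:

```python
import math

# Diffuse (Lambertian) reflection: the reflected intensity depends only on
# the angle between the surface normal N and the direction to the light L,
# so it is the same from every viewing direction.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(intensity, n, l):
    """Lambert's law: I_l * cos(Theta) = I_l * (N . L), clamped at zero so
    that surfaces facing away from the light receive no energy."""
    n, l = normalize(n), normalize(l)
    return intensity * max(0.0, dot(n, l))
```

The `max(0.0, ...)` clamp encodes the geometric fact above: the contribution is maximal when $N$ and $L$ coincide and falls to zero when they are perpendicular (or beyond).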

Modeling surfaces with only ambient and diffuse factors in our illumination equation gives very dull, matte surfaces. Upon further examination of real surfaces, a direct reflection, or highlight, can be seen. This highlight is generated at those points where the viewing direction is nearly in line with the direction of perfect reflection from the light source. The reason for this is clear if we consider our surface microscopically.

If we look closely at a surface, it can be considered to be made up of a number of ``microfacets'', each of which reflects the light: very smooth surfaces will have microfacets whose normal vectors are very close to the normal vector of the surface at the point, while rough surfaces will have microfacets with normals that deviate substantially from the normal at the point. If we consider the following figure

where the light approaches the point from the right, the light is scattered by these microfacets. For smooth surfaces this light is mostly scattered in the reflecting direction.

We can model this *specular reflection* in the same way as we modeled light reflection from Lambertian surfaces - however, we cannot measure the contribution as easily (recall that in the Lambertian case, we could use the chord length of a circle). In this case, the reflected intensity cannot be modeled by a circle, but is in reality a surface extending along the direction of the perfect reflecting vector $R$. If we view the point, the contribution is again just the chord length, but it is much more difficult to calculate this length.

Phong was the first to give a workable approximation to this ``surface''. His idea was to first measure the angle between the vector $R$ (calculated as the vector to the viewer reflected about the surface normal) and the vector to the light source $L$. This gives an idea of the deviation of $L$ from the perfectly reflecting direction. If $L$ was close to the perfectly reflecting direction, then the specular contribution should be high, while if far away, very little specular contribution should be made.

Thus, the quantity $\cos \phi = R \cdot L$, where $\phi$ is the angle between $R$ and $L$, measures the deviation of $L$ from the perfectly reflecting direction.

To simulate the scattering of the light by the surface, Phong realized that this dot-product measure could be focused by raising the cosine to a power. This can be written as

$$( R \cdot L )^{n_s}$$

where larger values of the exponent $n_s$ concentrate the highlight into a smaller area.

This approximation has become the de facto standard for the calculation of the specular reflection. One simple change has been made to make the quantities easier to calculate. Instead of calculating $R$, we calculate an ``ideal normal vector'' called $H$, which is exactly halfway between the vectors $L$ and $V$ (the vector to the viewer). If the actual normal vector to the surface were equal to this ideal normal, then the maximum amount of light would be reflected back to the camera. The idea then is to measure the difference between this ideal normal and the actual normal, and base the amount of reflection on this value.

So, the intensity of the specular reflection is calculated as

$$( N \cdot H )^{n_s}, \qquad H = \frac{L + V}{\left| L + V \right|}$$
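The two specular measures can be compared in a short sketch. This is illustrative only (the helper names are our own, and the inputs are assumed to be unit vectors):

```python
import math

# Two specular measures: the Phong measure uses the reflection vector R,
# while the common variant uses the halfway vector H = (L + V) / |L + V|.
# L points to the light, V to the viewer, N is the surface normal; all are
# assumed to be unit vectors.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    """Reflect the vector L about the normal N: R = 2 (N . L) N - L."""
    d = dot(n, l)
    return tuple(2.0 * d * nc - lc for nc, lc in zip(n, l))

def specular_phong(l, v, n, shininess):
    """(R . V)^n_s, with R the reflection of L about N."""
    r = reflect(l, n)
    return max(0.0, dot(r, v)) ** shininess

def specular_halfway(l, v, n, shininess):
    """(N . H)^n_s, with H halfway between L and V."""
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    return max(0.0, dot(n, h)) ** shininess
```

Both measures reach their maximum of 1.0 exactly when the viewer sits in the perfect reflecting direction; away from it, the two cosines differ by an angle-halving, so the same exponent produces a somewhat broader highlight with the halfway form.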

In an RGB color model, these values are typically put together as three equations: one for each of red, green, and blue.

We note that the reflection constants from above now represent the color of the surface (one value for each of red, green and blue), and the light source intensity represents the color of the light source.
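Putting the three components together per color channel, a minimal sketch (the function name, parameter names, and constants below are illustrative choices, not values from these notes) might look like:

```python
import math

# Per-channel Phong illumination: for each of red, green and blue,
#   I = k_a * I_a + k_d * I_l * (N . L) + k_s * I_l * (N . H)^n_s
# k_ambient and k_diffuse carry the surface color; light_color carries the
# color of the light source; k_specular is a single (white) coefficient.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_shade(n, l, v, ambient, light_color,
                k_ambient, k_diffuse, k_specular, shininess):
    """Evaluate the Phong equation independently for each color channel."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    diff = max(0.0, dot(n, l))                 # Lambertian term
    spec = max(0.0, dot(n, h)) ** shininess    # halfway-vector term
    return tuple(
        ka * ia + (kd * diff + k_specular * spec) * il
        for ka, ia, kd, il in zip(k_ambient, ambient, k_diffuse, light_color)
    )
```

For example, a red surface under white light might use `k_diffuse = (0.7, 0.0, 0.0)` with `light_color = (1.0, 1.0, 1.0)`: the diffuse term then contributes only to the red channel, while the white specular highlight contributes equally to all three.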

Below are illustrated several images of a red sphere illuminated with a light source coming in from the upper right. The first three images show the sphere under varying object colors of (0.2, 0.0, 0.0), (0.5, 0.0, 0.0) and (0.7, 0.0, 0.0) respectively.

The next three show the sphere under varying (dark to light) values of ambient or background light. This light is modeled as a shade of gray: (0.1, 0.1, 0.1), (0.3, 0.3, 0.3), and (0.5, 0.5, 0.5).

The final three illustrations show the sphere under varying specular values. All specular values shown are for ``white'' light (red, green, and blue components are all equal).

Note the white spot that appears as a direct reflection from the light source. As these values decrease, the spot becomes larger, eventually being completely absorbed into the diffuse reflections.


**This document maintained by Ken Joy**

All contents copyright (c) 1996, 1997, 1998, 1999

Computer Science Department

University of California, Davis

All rights reserved.

1999-12-06