# HIDDEN SURFACE REMOVAL

•  Now we know how to set up a 3D view of 3D objects.

• We now want to determine which surfaces on the object are visible from the center of projection (what can the observer see in the scene?)

• There are numerous algorithms to do this because it is a non-trivial, computation-intensive task. CMSC 635 spends a lot of time on this.

• Once you determine the visible surfaces, you need their color. This involves illumination models, texture mapping, transparency, etc.

• There are 2 general classes of Algorithms:

• Object Space Algorithms
• determine the visibility of each object with respect to the other objects.
• Work at the resolution of objects. Worst case order n^2 (n = number of polygons or objects).
• Independent of screen resolution.

• Image Space Algorithms
• determine which surface is visible at each pixel.
• Order of the screen size.
• Both classes are sped up by exploiting coherence.

• Basic visible surface determination

• Ray Casting (Simplest Image Space Algorithm)
• for each pixel
    Determine the closest object along the ray from the eye point through the pixel.
    Find its color.
    Set the pixel color to this color.
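The loop above can be sketched in Python for a toy scene of spheres. The sphere intersection test, the z = -1 image plane, and all the names here are illustrative assumptions for the sketch, not part of the notes:

```python
import math

def intersect_sphere(eye, d, center, radius):
    """Distance t along the ray eye + t*d to the sphere, or None on a miss.
    d is assumed to be a unit vector, so the quadratic's a-term is 1."""
    oc = tuple(e - c for e, c in zip(eye, center))
    b = 2 * sum(di * oi for di, oi in zip(d, oc))
    c = sum(oi * oi for oi in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def ray_cast(width, height, spheres, background=(0, 0, 0)):
    """For each pixel: find the closest sphere along the eye ray, take its color."""
    eye = (0.0, 0.0, 0.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Ray direction through the pixel center on a z = -1 image plane.
            px = (x + 0.5) / width - 0.5
            py = (y + 0.5) / height - 0.5
            norm = math.sqrt(px * px + py * py + 1.0)
            d = (px / norm, py / norm, -1.0 / norm)
            closest, color = None, background
            for center, radius, sphere_color in spheres:
                t = intersect_sphere(eye, d, center, radius)
                if t is not None and (closest is None or t < closest):
                    closest, color = t, sphere_color
            row.append(color)
        image.append(row)
    return image
```

A sphere centered in front of the eye colors the center pixels; rays that miss everything keep the background color.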

• Basic Object Space Algorithm:
• For each object
    Determine the visible surfaces of the object (those portions that are unobstructed in the view).
    Find their color and draw them.

• Painter's Algorithm
• Sort all the polygons in the scene by Z depth.
• For poly = farthest to closest:
    Find the color of poly.
    Draw it on the screen.
• This works fine unless 2 polygons that overlap in x & y also overlap in Z. Then you need subdivision or some other technique.
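A minimal Python sketch of the back-to-front loop, assuming each polygon can be given a single representative depth (a simplification; as noted above, overlapping depth ranges need extra handling):

```python
def painters_draw(polygons, draw):
    """polygons: list of (depth, poly) pairs, larger depth = farther from the eye.
    Painting back to front lets nearer polygons overwrite farther ones."""
    for depth, poly in sorted(polygons, key=lambda p: p[0], reverse=True):
        draw(poly)
```

With polygons [(1, 'near'), (5, 'far'), (3, 'mid')], the draw order is far, mid, near.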

• Basic idea (diagram)

• Problem Cases: (diagram)

• Z-Buffer Algorithm:
• We know how to scan-convert polygons onto the screen.
• What we want to do now is only overwrite a pixel if the polygon element is closer, i.e. has a larger Z value than what is currently stored for that pixel.
• We will create a Z-buffer to hold the depth value of each pixel in the frame-buffer.
• The general Algorithm:
• Initialize frame-buffer and Z-buffer

for each polygon P
    for each pixel (x, y) in P's projection
        pz = the Z value of P at this pixel
        polycolor[x][y] = color of P at this pixel
        if Z-buffer[x][y] <= pz
            Z-buffer[x][y] = pz
            FB[x][y] = polycolor[x][y]
    end for each pixel
end for each polygon

• Z values are calculated incrementally from the plane equation (dz/dx, dz/dy).
• Number of comparisons independent of number of polygons.
• Works fine for patches & other primitives.
• Requires a large amount of memory.
• (1280 x 1024 x 32 bits is typically > 5 MB)
• Often implemented in hardware.
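The pseudocode above, transcribed into Python using the notes' convention that a larger Z value means closer to the eye. Representing each polygon as a list of scan-converted fragments (x, y, z, color) is an assumption made for this sketch:

```python
def z_buffer_render(width, height, polygons, background=(0, 0, 0)):
    """polygons: one list of scan-converted fragments (x, y, z, color) per polygon.
    Larger z = closer, matching the Z-buffer[x][y] <= pz test in the notes."""
    frame = [[background] * width for _ in range(height)]
    zbuf = [[float('-inf')] * width for _ in range(height)]  # start at farthest depth
    for fragments in polygons:
        for x, y, pz, color in fragments:
            if zbuf[y][x] <= pz:      # at least as close as what is stored
                zbuf[y][x] = pz
                frame[y][x] = color
    return frame
```

Note the result is independent of polygon order: a nearer fragment wins whether it is drawn first or last.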

• Scanline Z-buffer
• Instead of storing an entire Z-buffer, what if we store only 1 scanline at a time to save memory?
• How does the algorithm have to change?
• We switch the order of the for loops.
• for each scanline
    initialize the scanline color-buffer CB and Z-buffer
    for each polygon P active in the scanline
        for each x in P's projection
            pz = the Z value of P at this pixel
            polycolor[x][y] = color of P at this pixel
            if Z-buffer[x] <= pz
                Z-buffer[x] = pz
                CB[x] = polycolor[x][y]
        end for each x
    end for each polygon
    draw the color-buffer CB to the screen
end for each scanline

• What are the disadvantages of this algorithm compared to the Z-buffer algorithm?
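A sketch of the scanline variant in Python, keeping only one row of depth values at a time. Representing each polygon's active pixels as a mapping from scanline to (x, z, color) entries is an assumption made for illustration:

```python
def scanline_z_buffer(width, height, spans_by_line, draw_row, background=(0, 0, 0)):
    """spans_by_line: dict mapping scanline y -> list of (x, z, color) fragments.
    The depth buffer is one scanline long and is reset for every line."""
    for y in range(height):
        row = [background] * width
        zbuf = [float('-inf')] * width       # re-initialized each scanline
        for x, pz, color in spans_by_line.get(y, []):
            if zbuf[x] <= pz:                # larger z = closer
                zbuf[x] = pz
                row[x] = color
        draw_row(y, row)                     # scanline is finished; send it out
```

Memory drops from width x height depth values to just width, at the cost of needing all polygons sorted or bucketed by scanline before drawing.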

• Warnock's Algorithm:
• Screen-Space Subdivision.
• A polygon's relation to an area is one of 4 cases:

• Completely surrounds it.
• Intersects it.
• Is contained in it.
• Disjoint (is outside the area)

• If none of the four cases below holds, subdivide the area until one does:
• All polys are disjoint wrt the area =>
• draw the background color.
• There is 1 intersecting or contained polygon =>
• draw the background, then draw the contained portion of the polygon.
• There is a single surrounding polygon =>
• draw the area in the polygon's color.
• There is a front surrounding polygon =>
• draw the area in the polygon's color.
• Recursion stops when you are at the pixel level, or lower for anti-aliasing.
• At this point do a depth sort and take the polygon with the closest Z value.
• Most of the work is done at the object space level.
• An Example: (diagram)

# Illumination Models

• We are trying to model the interaction of light with surfaces to determine the final color & brightness of the surface.

• Global illumination models take into consideration the interaction of light from all the surfaces in the scene. Examples are radiosity, Kajiya's rendering equation & recursive ray-tracing.

• We will look at local illumination models:
• the lights, the observer position,  and the object characteristics determine its final brightness and color.

• Light Source Characteristics:
• Color, Intensity, Direction, Angle of illumination, geometry
• Material Characteristics: reflection properties, color, transparency, refraction, micro-surface geometry

• Types of Light: Ambient, Diffuse, Specular

• Ambient light :
• The light that results from light reflecting off other surfaces in the environment, striking all the surfaces in the scene from all directions.
• It is independent of direction.
• The equation for the amount of light received by a surface is therefore:
• I = Iaka
• Ia is the intensity of the ambient light in the scene.
• ka is the ambient-reflection coefficient which determines the amount of ambient light reflected from a given surface. 0 <= ka <= 1.

• Diffuse Light:
• This simulates the direct illumination that an object receives from a light source and reflects equally in all directions.
• Dull surfaces exhibit diffuse reflection, also called Lambertian Reflection from Lambert's Law.
• Brightness of object is independent of observer position (reflect equally in all directions).
• Diagram of geometry

• Lambert's Law: Id = Ip cos(θ)
• where Ip is the point light source's intensity and θ is the angle between the surface normal and the light direction.
• Id = Ipkd (N * L) if N & L are normalized, 0 <= kd <= 1

• So combined,
I = Iaka + Ipkd (N * L)

• Specular Reflection:

• These are those bright spots on objects (hot spots)
• If you look around, these move as you move around.
• Therefore, specular reflection is dependent on the observer position.

• We will use the formulation that has H, the half-angle vector. Also called the direction of maximum highlights.
• If the normal is aligned with H, you get the maximum highlight. As the normal moves away from H, the highlight decreases.
• H = L + V, normalized.
• Phong's model:
• Is = ks (N * H)^n
0 <= ks <= 1, a property of the material
n >= 1. A good choice is 50.

Finally,

I = Iaka + Ipkd (N * L) + Ipks (N * H)^n
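The combined model as a Python function. N, L, and H are assumed to be unit vectors, and the dot products are clamped at zero so a light behind the surface contributes nothing (the clamping is an added precaution, not part of the formula above):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def illuminate(Ia, ka, Ip, kd, ks, n, N, L, H):
    """I = Ia*ka + Ip*kd*(N.L) + Ip*ks*(N.H)^n with N, L, H normalized."""
    ndotl = max(dot(N, L), 0.0)          # no diffuse term from behind the surface
    ndoth = max(dot(N, H), 0.0)
    spec = Ip * ks * ndoth ** n if ndotl > 0 else 0.0  # skip highlight if light is behind
    return Ia * ka + Ip * kd * ndotl + spec
```

With Ia = Ip = 1, ka = 0.2, kd = 0.8, ks = 1, and all vectors aligned, this gives 0.2 + 0.8 + 1.0 = 2.0, which illustrates why the implementation section below worries about intensities above 1.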

# IMPLEMENTATION OF ILLUMINATION MODELS

• Let's look at some "tricks" of the trade for implementing this simple illumination model
• The original model is:
• I = Iaka + Ipkd (N * L) + Ipks (N * H)^n
• Ambient: Assume Ia is 1, choose ka to be between .15 and .3
• Diffuse: Normally, choose kd = 1 - ka, I often use ka = .2 and kd = .8
• Specular:
• don't want to wash everything out with specular (r, g, b > 1).
• don't want a specular highlight if light behind face (N * H could still be greater than 0)

• Solutions:
• To avoid colors > 1.0:
• da = Iaka + Ipkd (N * L)
I = da + (1 - da) Ipks (N * H)^n
• add specular only if N*L  >  0
For nice highlights, use ks = 1.0 & 10 <= n < 100 (50 is good).
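The headroom trick above in Python: scaling the specular term by (1 - da) guarantees the result never exceeds 1 as long as da itself does not. Variable names follow the slide; the explicit N*L clamp and skip are the behind-the-light fix made explicit:

```python
def shaded_intensity(Ia, ka, Ip, kd, ks, n, ndotl, ndoth):
    """da is the diffuse + ambient part; specular only fills the remaining headroom."""
    ndotl = max(ndotl, 0.0)
    da = Ia * ka + Ip * kd * ndotl
    if ndotl <= 0:                       # light behind the face: no highlight
        return da
    return da + (1 - da) * Ip * ks * max(ndoth, 0.0) ** n
```

With the fully-lit worst case (all dot products 1, ks = 1), da is already 1.0, the headroom (1 - da) is zero, and the result stays at 1.0 instead of 2.0.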

# DITHERING

• In dithering, we are trying to represent color and intensity information on a display with fewer colors/intensities.

• To do this, we use a pattern to approximate the brightness/colors of one image point with several pixels:

• Example 3x3 dither pattern

• An n x n dither matrix gives us n^2 + 1 intensity levels

• Simple B&W ordered 4x4 Dither:
• Want the intensity values in the range 0-16

• Convert color to intensity by:

intensity = (int)(16*(.299*red + .587*green + .114*blue)+.5)

This is the NTSC intensity formula (Y) (what is shown on B&W TVs)

• 4x4 Dither matrix:

•   0    8    2   10
   12    4   14    6
    3   11    1    9
   15    7   13    5

• How to use this:

• 1) calculate the intensity

2) if (dither[x%4, y%4] >= intensity) then plot(x,y)

• Why are we saying dither  >=  intensity?

• Shouldn't it be  <  ?
• It depends if you are adding white or black by plotting.
• In our case we are adding black, so if intensity is 0, you want to plot each point. Therefore use dither >= intensity.
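The full recipe in Python, combining the NTSC intensity formula with the threshold test. Since plotting adds black, the comparison is dither >= intensity as discussed above; indexing the matrix as [y % 4][x % 4] is a conventional choice for this sketch:

```python
DITHER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def should_plot(x, y, red, green, blue):
    """red, green, blue in [0, 1]; True means plot a black dot at (x, y)."""
    intensity = int(16 * (0.299 * red + 0.587 * green + 0.114 * blue) + 0.5)
    return DITHER4[y % 4][x % 4] >= intensity
```

A fully black pixel (intensity 0) is plotted at every position; a fully white pixel (intensity 16) is never plotted, since the matrix tops out at 15.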

• Dithering can be expanded to include error diffusion, random dithering and even color dithering to get better results.