Viewing Transformations and the Graphics Pipeline

Spring 1999

Computer Science and Electrical Engineering Department

University of Maryland, Baltimore County

- Now we know how to set up a 3D view of 3D objects.
- We now want to determine which surfaces
on the object are visible from the center of
projection (what can the observer see in
the scene?)
- There are numerous algorithms for this because it is a non-trivial, computation-intensive task.
- Once you determine the visible surfaces, you need their color. This involves illumination models, texture mapping, transparency, etc.

We will spend a lot of time on this topic. There are 4 general classes of algorithms:

- Object space algorithms (eye space)
  - Determine the visibility of each object with respect to the other objects.
  - Work at the resolution of the objects (infinite resolution).
  - Worst case order O(n^2) (n = number of polygons or objects).
  - Independent of screen resolution.
- List priority algorithms
  - Work partially in both spaces.
  - Sort in object (eye) space.
  - Resolve visibility in image space.
  - Solve the priority of all objects in the scene, then draw them according to that priority using a painter's algorithm.
  - The differences among these algorithms come in the priority determination.
- Image space algorithms
  - Determine which surface is visible at each pixel.
  - Work at screen resolution (pixels).
  - Order of the size of the screen, O(number of pixels).
- Other algorithms
  - Wavefront (light) propagation
  - Image-based rendering

- All of these algorithms can be sped up using coherence:
  - the degree to which parts of an environment or its projection exhibit local similarities.
- Types of coherence:
  - Span coherence: values don't change much from pixel to pixel within a span.
  - Scanline coherence: values don't change much from one scanline to the next; the coverage (or visibility) of a face on one scanline typically differs little from the previous one.
  - Edge coherence: edges intersected by scanline i are typically intersected by scanline i+1.
  - Area coherence: groups of adjacent pixels tend to be covered by the same visible face.
  - Frame coherence: values don't change much from one frame to the next.

- We will look at a taxonomy of the algorithms:
  - an orderly classification.

## Object Space Algorithms

General idea:

```
for each object
    for each polygon
        for each edge (v1, v2)
            determine how many faces cover v1
                {for each object, for each polygon}
            determine all edges that intersect (v1, v2):
                for each object
                    for each edge
                        intersect the 2D projection of (v1, v2) with the edge
                        if (v1, v2) is behind the edge
                            record the x, y, z intersection and +/- 1
            sort the intersections
            keep a running count along (v1, v2):
                where the count is 0, the edge is visible
```

- Roberts - 1st Hidden Line Algorithm
  - Test each edge to see if it is obstructed by the *volume* of an object.
  - Write a parametric equation for the line from the viewpoint to a point on the edge.
  - *P* is inside a convex object *j* if *P* is on the inside of all planes that form *j*.
  - All objects must be convex.
  - O(n^2).
  - Uses spatial coherence: tests edges against object volumes.
- Appel (Loutrel, Galimberti similar) - Quantitative Invisibility - Hidden Line
  - Quantitative invisibility of a point = the number of relevant faces that lie between the point and the viewpoint.
  - Need to compute the quantitative invisibility of every point on every edge.
  - Edge coherence: the quantitative invisibility of an edge only changes where the projection of that edge into the picture plane crosses the projection of a contour edge. At each such intersection, quantitative invisibility changes by +1 or -1.
  - Edges are divided into segments at these intersections; segments with quantitative invisibility 0 are visible.
- Weiler-Atherton - Area - HS or HL
  - Perform a preliminary depth sort of the polygons.
  - Clip based on the polygon nearest the eye-point: creates an inside list and an outside list.
  - Remove all polygons behind the polygon nearest the eye-point.
  - Recursive subdivision, if required.
  - Final depth sort to remove any ambiguities.
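Roberts's inside test (a point *P* is inside a convex object *j* if *P* is on the inner side of every plane forming *j*) can be sketched directly. This is an illustrative fragment, not his full edge-obstruction algorithm; the plane representation (a, b, c, d) with inward-facing normals and the unit-cube example are assumptions made here.

```python
# Sketch of Roberts's convexity test: a point lies inside a convex solid
# iff it is on the inner side of every bounding plane. Planes are stored
# as (a, b, c, d) with inward-facing normals, so "inside" means
# a*x + b*y + c*z + d <= 0 for every plane.

def inside_convex(point, planes, eps=1e-9):
    x, y, z = point
    return all(a * x + b * y + c * z + d <= eps for (a, b, c, d) in planes)

# Unit cube [0,1]^3 expressed as six inward-oriented half-spaces.
unit_cube = [
    (-1, 0, 0, 0), (1, 0, 0, -1),   # x >= 0, x <= 1
    (0, -1, 0, 0), (0, 1, 0, -1),   # y >= 0, y <= 1
    (0, 0, -1, 0), (0, 0, 1, -1),   # z >= 0, z <= 1
]
```

In the full algorithm this test is applied along the parametric line from the viewpoint to points on an edge; any parameter interval lying inside some object's volume is hidden.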

## List Priority Algorithms

- Schumaker
  - Priority calculated *a priori*.
  - Good for static environments (flight simulators).
  - Only works for convex faces and linearly separable clusters.
  - Clustering: face priority within a cluster; cluster priority between clusters.
  - Face priority can be determined independent of the viewpoint.
  - Frame-to-frame coherence.
- Newell, Newell, Sancha
  - Can overwrite faces to achieve transparency.
  - Characterized by performing the easiest tests first, hardest last:
    - Sort all faces by max z.
    - Perform the overlap-in-x bounding box test.
    - Perform the overlap-in-y test.
    - If they overlap in both:
      - Check if all vertices of P are deeper than Q.
      - Check if all vertices of Q are closer than P.
      - Perform the actual overlap-in-x-and-y test.
- General case: if the correct order can't be determined, split a face. ** EXPENSIVE **
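The easiest-first test sequence above can be sketched on axis-aligned extents. The polygon format (a plain list of (x, y, z) vertices) is a hypothetical simplification, and the later plane-side and exact-overlap tests are only noted, not implemented:

```python
# Sketch of the first Newell-Newell-Sancha tests on axis-aligned extents.
# Polygons are hypothetical vertex lists [(x, y, z), ...]; larger z means
# closer to the viewer, matching the "sort by max z" step above. Returns
# True when P can safely be painted before (behind) Q without splitting.

def extent(poly, axis):
    vals = [v[axis] for v in poly]
    return min(vals), max(vals)

def p_before_q(P, Q):
    # Test 1: z extents do not overlap -> P is entirely deeper than Q.
    if extent(P, 2)[1] <= extent(Q, 2)[0]:
        return True
    # Test 2: x extents do not overlap -> no screen overlap is possible.
    if extent(P, 0)[1] <= extent(Q, 0)[0] or extent(Q, 0)[1] <= extent(P, 0)[0]:
        return True
    # Test 3: y extents do not overlap.
    if extent(P, 1)[1] <= extent(Q, 1)[0] or extent(Q, 1)[1] <= extent(P, 1)[0]:
        return True
    # The remaining plane-side and exact-overlap tests are omitted here;
    # when every test fails, the real algorithm swaps or splits a face.
    return False
```

Each test is cheaper than the next, so most polygon pairs are resolved before the expensive exact-overlap test is ever reached.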

One problem is cyclical overlap

## Image Space Algorithms

**Ray Casting** - The simplest ray tracer does hidden surface removal, lighting, and shadows.

In a simple ray caster you can trace the scene in a restricted world (eye) space by requiring the viewer to be at the origin, looking down the negative z-axis.

**To calculate the field of view**, assume that your rays go from

- X: -tan(theta) to tan(theta)

- Y: -ratio*tan(theta) to ratio*tan(theta), where

**ratio = (Vyt - Vyb)/(Vxr - Vxl)**
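The pixel-to-ray mapping above might be sketched as follows; the function name and the +0.5 pixel-centering convention are assumptions made for illustration:

```python
# Sketch of mapping a pixel grid to ray directions for the eye-at-origin,
# looking-down-negative-z setup described above. "theta" is the half-angle
# of the horizontal field of view; "ratio" scales the vertical extent.
import math

def ray_direction(i, j, width, height, theta, ratio):
    # x runs from -tan(theta) to +tan(theta) across the columns,
    # y from -ratio*tan(theta) to +ratio*tan(theta) up the rows.
    half_x = math.tan(theta)
    half_y = ratio * half_x
    x = -half_x + 2.0 * half_x * (i + 0.5) / width
    y = -half_y + 2.0 * half_y * (j + 0.5) / height
    return (x, y, -1.0)   # the viewer looks down the negative z-axis
```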

### General Algorithm Outline

```
for each pixel
    determine the closest object along the ray from the eye point through the pixel
    find its color
    set the pixel color to this color
```

### More Detailed Algorithm

- Read in the scene information and object information
- Initialize the color table (using ct_gen)
- Calculate the viewing parameters and the area to raytrace
- For Y = -ratio*tan(theta) to ratio*tan(theta)
  - Initialize color_buffer to the background color
  - For X = -tan(theta) to tan(theta)
    - For each object in the scene
      - Test whether the ray intersects the object; save the *t* value of this intersection and the object number if this is the closest one so far
    - Shoot shadow rays for this pixel for the closest object
    - Calculate the illumination and color of this object based on the illumination model
    - color_buffer[x] = color from above
  - Write color_buffer to the screen or file (using the write_buffer cmd)
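Assuming a sphere-only scene, the core of the outline above could be sketched as follows (shadow rays and the illumination model are left out, and the scene format (center, radius, colour) is a hypothetical simplification):

```python
# Minimal ray-casting sketch: for each ray, find the closest sphere hit
# and return its colour (flat shading only).
import math

def intersect_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the smallest positive t, or None.
    ox, oy, oz = (origin[k] - center[k] for k in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def cast_ray(origin, direction, spheres, background=(0, 0, 0)):
    closest_t, colour = None, background
    for center, radius, sphere_colour in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (closest_t is None or t < closest_t):
            closest_t, colour = t, sphere_colour   # keep the nearest hit
    return colour
```

The "closest one so far" bookkeeping in the detailed algorithm is exactly the `closest_t` comparison in `cast_ray`.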

- Z-Buffer Algorithm:
- We know how to scan-convert polygons
onto the screen.
- What we want to do now is, only overwrite a pixel
if the polygon element has a larger Z value
than what is in the frame-buffer currently.
- We will create a Z-buffer to hold the depth
values of each pixel in the frame-buffer.
- The general algorithm:

```
initialize the frame-buffer FB and the Z-buffer
for each polygon P
    for each pixel (x, y) in P's projection
        pz = the z value of P at this pixel
        if Z-buffer[x][y] <= pz
            Z-buffer[x][y] = pz
            FB[x][y] = color of P at this pixel
```

- Z values calculated incrementally from the plane
equation (dz/dx, dz/dy).
- Number of comparisons independent of
number of polygons.
- Works fine for patches & other primitives.
- Requires a large amount of memory
(1280×1024×32 bits is over 5 MB)
- Often implemented in hardware.
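A minimal sketch of the Z-buffer loop above, assuming scan conversion has already produced per-pixel fragments (the (x, y, z, colour) fragment format and the tiny buffer size are assumptions for illustration):

```python
# Z-buffer sketch matching the pseudocode above, with larger z meaning
# closer to the viewer. Each polygon is reduced to a precomputed list of
# fragments (x, y, z, colour) to keep scan conversion out of the way.

WIDTH, HEIGHT = 4, 4
BACKGROUND, FAR = (0, 0, 0), float("-inf")

def zbuffer_render(polygons):
    frame = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]
    zbuf = [[FAR] * WIDTH for _ in range(HEIGHT)]
    for fragments in polygons:            # one fragment list per polygon
        for x, y, z, colour in fragments:
            if zbuf[y][x] <= z:           # new fragment is at least as close
                zbuf[y][x] = z            # record the new depth
                frame[y][x] = colour      # and overwrite the pixel
    return frame
```

Note that the polygons can arrive in any order; the depth comparison, not a sort, resolves visibility.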

- Scanline Z-buffer
- Instead of storing an entire Z-buffer, what
if we store only 1 scanline at a time to
save memory ?
- How does the algorithm have to change?
- We switch the order of the for loops:

```
for each scanline
    initialize the scanline color-buffer CB and Z-buffer
    for each polygon P active in the scanline
        for each x in P's projection
            pz = the z value of P at this pixel
            if Z-buffer[x] <= pz
                Z-buffer[x] = pz
                CB[x] = color of P at this pixel
    draw the color-buffer CB to the screen
```

- What are the disadvantages of this algorithm compared to the Z-buffer
algorithm?

- Warnock's Algorithm:
- Screen-Space Subdivision.
- A polygon's relation to an area is one of 4
cases:
See Figure 15.42 from F&vD pg 687.

- Completely surrounds it.
- Intersects it.
- Is contained in it.
- Disjoint (is outside the area)

- If none of the following four cases holds for an area, subdivide until one does:
  - All polygons are disjoint wrt the area => draw the background color.
  - There is exactly 1 intersecting or contained polygon => draw the background, then draw the contained portion of the polygon.
  - There is a single surrounding polygon => draw the area in the polygon's color.
  - There is a surrounding polygon in front of all other polygons => draw the area in the polygon's color.
- Recursion stops when you are at the pixel level, or lower for anti-aliasing.
- At that point do a depth sort and take the polygon with the closest z value.
- Most of the work is done at the object space level.
- An Example: See Figure 15.44 from F&VD p 688
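A rough sketch of the subdivision follows. To keep the relation tests trivial, "polygons" are axis-aligned rectangles (x0, y0, x1, y1, z, colour) with larger z meaning closer, as in the Z-buffer discussion; this format, and letting the single contained/intersecting polygon cases fall through to further subdivision, are simplifications made here:

```python
# Warnock-style recursive screen-space subdivision sketch.

def relation(poly, area):
    px0, py0, px1, py1 = poly[:4]
    ax0, ay0, ax1, ay1 = area
    if px1 <= ax0 or px0 >= ax1 or py1 <= ay0 or py0 >= ay1:
        return "disjoint"
    if px0 <= ax0 and py0 <= ay0 and px1 >= ax1 and py1 >= ay1:
        return "surrounding"
    if px0 >= ax0 and py0 >= ay0 and px1 <= ax1 and py1 <= ay1:
        return "contained"
    return "intersecting"

def warnock(polys, area, draw, background=(0, 0, 0)):
    ax0, ay0, ax1, ay1 = area
    relevant = [p for p in polys if relation(p, area) != "disjoint"]
    if not relevant:                       # all disjoint -> background
        draw(area, background)
        return
    surrounding = [p for p in relevant if relation(p, area) == "surrounding"]
    if surrounding:
        # A surrounding polygon in front of everything settles the area.
        front = max(relevant, key=lambda p: p[4])
        if front in surrounding:
            draw(area, front[5])
            return
    if ax1 - ax0 <= 1 and ay1 - ay0 <= 1:  # pixel level: depth sort wins
        front = max(relevant, key=lambda p: p[4])
        draw(area, front[5])
        return
    mx, my = (ax0 + ax1) // 2, (ay0 + ay1) // 2
    for sub in [(ax0, ay0, mx, my), (mx, ay0, ax1, my),
                (ax0, my, mx, ay1), (mx, my, ax1, ay1)]:
        warnock(polys, sub, draw, background)
```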

**Spanning Scanline (Watkins, Romney)** - fewer depth comparisons

- Uses spans: divide the scanline at each edge crossing => spans.
- 3 span cases:
  - empty --> draw the background
  - contains only 1 segment --> display it on the screen
  - contains multiple segments --> calculate the depth of each segment in the span and draw the one with the smallest *z*
- If penetration is allowed, divide the span at the point of intersection.
- A simple modification of the scanline Z-buffer.
- Need to check for intersecting edges in a span (the sign of the endpoint depth difference changes).
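Span resolution for a single scanline can be sketched as below. Assuming constant-depth segments (x0, x1, z, colour) is a simplification; real segments interpolate z across the span, and here smaller z means closer, matching the "smallest z" rule above:

```python
# Spanning-scanline sketch: the scanline is cut at every segment endpoint
# into spans; within a span either nothing is visible (background) or the
# segment with the smallest z wins, so only one depth comparison set is
# made per span rather than per pixel.

def scanline_spans(segments, background=(0, 0, 0)):
    cuts = sorted({x for s in segments for x in (s[0], s[1])})
    result = []
    for x0, x1 in zip(cuts, cuts[1:]):
        # Segments covering the whole span (spans never straddle endpoints).
        covering = [s for s in segments if s[0] <= x0 and s[1] >= x1]
        if not covering:
            result.append((x0, x1, background))        # empty span
        else:
            nearest = min(covering, key=lambda s: s[2])
            result.append((x0, x1, nearest[3]))        # closest segment wins
    return result
```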

Return to CMSC 635 Notes Page

Last modified: Mon Feb 15 14:24:40 EST 1999