Plan ahead

This is a big assignment. Start NOW, or you will probably not finish.

I recommend laying out a plan of attack before coding. To assist in your planning, here is an outline of the steps your ray tracing program will need to do:

  1. Read file format into scene data and list of objects.
  2. For each pixel:
    1. Calculate pixel location in world space.
    2. Calculate ray from eye point through the pixel and into the scene.
    3. Calculate ray-object intersections, choose smallest/closest.
    4. Set the pixel to the color at that closest intersection.
  3. Write all pixels out to your PPM file.

Note that I reserve the right to test your code on other legal files, so test your code thoroughly.
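For the final output step, the plain-text PPM format ("P3") is simple to emit. A minimal sketch, with a placeholder gradient standing in for your traced colors:

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Build a P3 (plain text) PPM image as a string: a three-line header,
// then one "R G B" triple per pixel (0..255), row by row from the top.
std::string makePPM(int width, int height) {
    std::ostringstream out;
    out << "P3\n" << width << " " << height << "\n255\n";
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            // Placeholder gradient instead of real traced pixel colors.
            int r = 255 * x / (width > 1 ? width - 1 : 1);
            int g = 255 * y / (height > 1 ? height - 1 : 1);
            out << r << " " << g << " " << 0 << "\n";
        }
    return out.str();
}
```

Writing the returned string to a file with std::ofstream then lets you check your output in any viewer that understands PPM.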

Using C++

Creating a C++ 3D vector class with addition, scalar multiplication, and dot product operators will make many operations more compact and closer to the vector math they represent.
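A minimal sketch of such a class, with only the operators mentioned above (you will likely also want subtraction, cross product, length, and normalization):

```cpp
// A bare-bones 3D vector type.
struct Vec3 {
    double x, y, z;
};

// Component-wise addition.
Vec3 operator+(const Vec3& a, const Vec3& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

// Scalar multiplication, in both orders.
Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator*(const Vec3& v, double s) { return s * v; }

// Dot product.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
```

With these, the point along a ray at parameter t is just `origin + t * dir`, matching the math.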

The std::list and/or std::vector data structures may also be useful for this assignment for your list of objects.

The std::map data structure may be useful for mapping the "surface" name attached to each shape to the properties given when that surface was defined.
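For instance, assuming a hypothetical Surface struct holding whatever properties your format defines, the store-then-lookup pattern might look like:

```cpp
#include <map>
#include <string>

// Placeholder surface properties; yours will hold whatever the
// .ray format's surface definition specifies.
struct Surface {
    double r, g, b;   // e.g. a diffuse color
};

// Surfaces are stored by name as they are defined during parsing...
std::map<std::string, Surface> surfaces;

// ...and looked up by name when a later shape references one.
Surface lookupSurface(const std::string& name) {
    auto it = surfaces.find(name);
    if (it != surfaces.end())
        return it->second;
    return Surface{0.5, 0.5, 0.5};   // fallback for unknown names
}
```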

If you are doing the extra credit, with more than one primitive type, a common effective strategy is to create a generic object class with spheres and polygon classes derived from it, each with a specialized virtual method to compute the intersection of a ray with that primitive type.
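A sketch of that strategy, with a working ray-sphere intersection (the class and member names here are my own, not required by the assignment; it redefines a minimal Vec3 so the snippet stands alone):

```cpp
#include <cmath>
#include <memory>
#include <vector>

struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Ray { Vec3 origin, dir; };   // dir assumed normalized

// Generic object: each derived class specializes intersect().
// Returns the smallest positive t along the ray, or -1 on a miss.
struct Object {
    virtual double intersect(const Ray& r) const = 0;
    virtual ~Object() = default;
};

struct Sphere : Object {
    Vec3 center; double radius;
    Sphere(Vec3 c, double rad) : center(c), radius(rad) {}
    double intersect(const Ray& r) const override {
        Vec3 oc{r.origin.x - center.x, r.origin.y - center.y,
                r.origin.z - center.z};
        double b = dot(oc, r.dir);              // half the quadratic's b
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - c;                // discriminant
        if (disc < 0) return -1;                // ray misses the sphere
        double t = -b - std::sqrt(disc);        // nearer root first
        if (t > 0) return t;
        t = -b + std::sqrt(disc);               // origin may be inside
        return t > 0 ? t : -1;
    }
};
```

The objects can then live in a std::vector<std::unique_ptr<Object>> and be intersected polymorphically in the inner loop, which is where the std::vector mentioned above earns its keep.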

Parsing

For a file format as simple as this one, it isn't necessary to get too fancy with the parsing. The easiest method is probably to read one whitespace-delimited string at a time, then compare that to each of the keywords you handle. Once you have found a keyword, read in the expected number of numbers or strings (or, for "polygon" or "triangle", until you find a token that is not a number).
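That token-at-a-time approach can be sketched with a std::istream, so the same code works on a file or a test string. The keyword handling shown is illustrative, not the full .ray grammar:

```cpp
#include <sstream>
#include <string>
#include <vector>

// For keywords with a fixed argument count, read exactly that many values.
void readTriple(std::istream& in, double& a, double& b, double& c) {
    in >> a >> b >> c;
}

// For variable-length vertex lists ("polygon"/"triangle"), keep reading
// numbers until one fails to parse. The failed token (the next keyword)
// is left unconsumed in the stream; clear() resets the fail state so
// the next string read picks it up.
std::vector<double> readNumbers(std::istream& in) {
    std::vector<double> nums;
    double d;
    while (in >> d)
        nums.push_back(d);
    in.clear();   // recover: the non-number token is still unread
    return nums;
}
```

Note the stream trick only behaves this way when the terminating token starts with a non-numeric character, which keyword names do.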

Test input

It is very difficult to find the error in one of hundreds to thousands of primitives across thousands to millions of pixels without a systematic approach and simplified data.

Note that you can easily write your own .ray files by hand, which can be very handy for debugging. I recommend that you start with a test scene looking at (0,0,0) from an eye at (0,0,3) containing a single triangle, say between (-1,-1,0), (1,-1,0), and (0,1,0). With a single triangle, it's easier to tell if your loading is working, and with a simple axis-aligned view it is easier to tell if your ray positions are correct.

Start by trying to find intersections with a 1x1 pixel image, which should give you a ray straight down the Z axis with a closest hit at (0,0,0). Move the triangle around, making sure you get the right answer when you miss it, when it is closer or farther away, or when it is behind you. Then, you can scale up to 2x2 or 3x3 images to make sure your ray position code is correct. Once you have the basics working, move up to a larger window so you can visually tell if your triangle is rendering as a triangle.
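As a sketch of the ray setup for that test, here is one way to compute the through-pixel direction, assuming a pinhole camera looking down -Z with a 90 degree vertical field of view and a view plane one unit in front of the eye (your .ray files specify their own camera parameters, so treat these as stand-ins):

```cpp
struct Vec3 { double x, y, z; };

// Direction (unnormalized) from the eye through pixel (px, py) of a
// width x height image. The 0.5 offsets sample pixel centers, so a
// 1x1 image yields a ray straight down the view axis.
Vec3 pixelRayDir(int px, int py, int width, int height) {
    double halfH = 1.0;                     // tan(45 deg) for a 90 deg vfov
    double halfW = halfH * width / height;  // preserve the aspect ratio
    double u = ((px + 0.5) / width * 2 - 1) * halfW;   // -halfW..+halfW
    double v = (1 - (py + 0.5) / height * 2) * halfH;  // row 0 at the top
    return Vec3{u, v, -1};                  // camera looks down -Z
}
```

With the eye at (0,0,3), the 1x1 case gives direction (0,0,-1): exactly the "ray straight down the Z axis" described above.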

Only when you are confident of your ray direction computation and intersection code should you switch to the scenes I've provided or the SPD test scenes. If you still run into problems, you can successively eliminate objects from the SPD file to find the one that is not being computed correctly.

To generate a tetra.ray SPD test file on GL:

~olano/public/spd3.14/tetra -r 8 > tetra.ray

Since that has 4096 triangles and can be somewhat slow to ray trace, using the -s option can yield a simpler model. For example, this tetra-3.ray with 64 triangles was generated with

~olano/public/spd3.14/tetra -s 3 -r 8 > tetra-3.ray

The tetcolor file was generated using the tetra program at size 4 (256 triangles), run through a script to uniquely color each triangle, then five surrounding walls were added manually in a text editor. The walls are there to make sure you can handle polygons with more than 3 vertices. There are only five rather than six, so at least some of your rays will make it all the way to the background.

Debugging output

It is also worthwhile to get image output working early. Printing debugging values works OK for one-pixel, one-primitive scenes, but for larger images and scenes, outputting values other than colors at each pixel can be a valuable debugging tool (for example, visualizing ray intersection distance as a color to be sure you are reliably finding the closest intersection).
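For example, a minimal sketch of mapping hit distance to a grayscale value, with nearer hits brighter and misses black (the near/far range is an assumption you would tune per scene):

```cpp
#include <algorithm>

// Map a ray hit distance t to a 0..255 gray level for debug output:
// t <= nearD maps to white, t >= farD to black, and a miss (t < 0)
// to the black background.
int distanceToGray(double t, double nearD = 1.0, double farD = 10.0) {
    if (t < 0) return 0;                       // miss: background
    double f = (t - nearD) / (farD - nearD);   // 0 at nearD, 1 at farD
    f = std::min(1.0, std::max(0.0, f));       // clamp to [0, 1]
    return static_cast<int>(255 * (1 - f));    // nearer = brighter
}
```

Dropping this value into all three channels of your PPM output gives a depth image; any object silhouette that looks wrong there points directly at a bad intersection or closest-hit test.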