FPGA Ray Tracer
Scott Bingham / Donald Zhang


High Level Design

The idea for our project came from Prof. Land's lectures on ray tracing. Ray tracing is very well suited to FPGAs, where many calculations can proceed in parallel. Spheres allow quite interesting scenes to be drawn, especially when reflections are added. We found that once we had implemented sphere rendering, adding planes was easier because fewer calculations were needed and we already had the experience and states in place to do the required intersection, reflection, and shadowing calculations.

Ray Tracing
The basic idea of ray tracing is to trace the path of a photon through a scene to the eye. Because we are only concerned with the photons that actually hit the eye, we instead shoot rays out from the eye through the screen into the scene and see what objects they hit. A ray picks up color based on reflectivity as it collides with objects in the scene. Eventually the ray is stopped, either because it misses the objects in the scene or because it picks up negligible color in subsequent reflections. Shadows are determined by shooting rays from intersection points back towards the light source(s) in the scene and seeing if there is an object in the way to block the light. Shading is also done by weighting light that hits a surface perpendicularly more heavily than light that merely glances off.

Figure 1

Figure 1 shows an example of a ray tracing through a scene. The initial ray leaves the eye in a direction such that it passes through one of the pixels in the screen. It then collides with the red sphere. A shadow ray is shot towards the light source; since that shadow ray reaches the light source, the intersection point on the red sphere is lit, not shadowed. A reflection ray is then shot towards the blue sphere, getting its direction from a specular reflection. Next, a shadow ray is shot to the light source, which again makes it without obstruction, so the blue sphere is lit at the intersection point. Then another reflection ray is shot from the blue sphere intersection point towards the green plane, which it hits. The shadow ray from this point, however, is blocked by the red sphere, so that point on the green plane is in shadow. The reflection ray from the green plane then leaves the scene without hitting another object, so the tracing for that pixel is complete. Its color is the sum of the lit red sphere color, the product of the red sphere reflectivity and the lit blue sphere color, and the product of the red and blue sphere reflectivities and the shadowed green plane color.

The tracer then repeats the process for the next pixel in the scene (or another ray for the same pixel if anti-aliasing is used). Since each ray must check for intersections with every object in the scene, processing time increases significantly with the number of objects. However, since each pixel is independent of the others, pixels can be processed in parallel if the hardware is available.

Figure 2 and Figure 3

Figure 2 shows the effects of varying the distance from the eye to the screen on the viewing frustum. While you can see a wider angle and more of the scene with the screen close, the pixel density goes down, which can cause pixels to miss objects they would previously have hit. Also, when the screen is too close to the eye, objects become skewed: spheres are stretched into ovals as they get farther from the center of the screen. The tradeoff is between how much of the scene you see and how much detail you see it in.

Our coordinate system and sample scene setup are shown in figure 3. Depth is in the Z direction. Because we use 12.12 two's complement fixed point numbers, each coordinate is limited to between -2048 and +2047 (considering the 12 integer bits only). For scenes with planes, we use the configuration shown so that we can get reflections all around the spheres while still being able to see the scene.
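The 12.12 format can be sketched in software. The following Python helpers (illustrative names, not taken from our Verilog) show how a 24-bit two's complement word with 12 fractional bits encodes the -2048 to +2047 coordinate range:

```python
# Sketch of the 12.12 two's-complement fixed-point format: 12 integer
# bits, 12 fractional bits, 24 bits total. Helper names are illustrative.

FRAC_BITS = 12
WIDTH = 24
MASK = (1 << WIDTH) - 1

def to_fixed(x):
    """Encode a real number as a 24-bit 12.12 two's-complement word."""
    return int(round(x * (1 << FRAC_BITS))) & MASK

def from_fixed(w):
    """Decode a 24-bit 12.12 word back to a float."""
    if w & (1 << (WIDTH - 1)):          # sign bit set -> negative value
        w -= 1 << WIDTH
    return w / (1 << FRAC_BITS)

print(from_fixed(to_fixed(-2048)))      # -2048.0 (bottom of the range)
print(from_fixed(to_fixed(1.5)))        # 1.5
```

The smallest representable step is 2^-12, which is the precision limit mentioned later for shadow-ray origins.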

Figure 4

Our ray decision tree is shown in figure 4. Each ray that intersects an object shoots a shadow ray and possibly a reflection ray, depending on the switch settings and the weight that would be given to the launched reflection ray. If less than 1% of the light is going to be reflected, we don't launch a reflection ray. We also impose a maximum of three reflection rays to limit the time spent per pixel.
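The launch rule can be sketched as follows (a Python stand-in for the hardware logic; the reflectivity values are made up for illustration):

```python
# Sketch of the reflection-ray launch rule: stop when the accumulated
# reflectivity weight falls below 1%, or after three bounces.

MAX_REFLECTIONS = 3
MIN_WEIGHT = 0.01        # less than 1% of the light -> not worth a ray

def count_reflection_rays(reflectivities):
    """Walk a chain of surface reflectivities, count rays launched."""
    weight, launched = 1.0, 0
    for r in reflectivities:
        weight *= r      # light the would-be reflection ray carries
        if weight < MIN_WEIGHT or launched >= MAX_REFLECTIONS:
            break        # contribution negligible, or depth cap hit
        launched += 1
    return launched

print(count_reflection_rays([0.5, 0.5, 0.5, 0.5]))  # 3 (depth cap)
print(count_reflection_rays([0.05, 0.05]))          # 1 (falls below 1%)
```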

Figure 5

A high-level state diagram for the ray tracer is shown in figure 5. At reset, the tracer is initialized. It then proceeds to load the sphere list from the CPU. The transfer is controlled by the CPU once the hardware signals that a frame has completed. A ray then checks for the closest intersection, if any, with the spheres in the scene. If it hits one, Lambertian lighting is applied to give it a color based on the amount of shadow. If the intersection is completely in shadow, no shadow ray is needed, since the light is already blocked by the sphere the ray intersected. If the ray did not hit a sphere, the planes in the scene are checked for the closest intersection. We chose to give spheres priority and not check planes in the event of a sphere intersection for performance reasons, and because we added planes as a last-minute extra once we had satisfied our initial project specifications. This imposes the restriction that spheres must be in front of planes rather than behind them, which is a reasonable restriction for the scenes we wanted to render.

A shadow ray is then launched towards the light source. Again, the shadow ray checks the sphere list for an intersection, but only spheres closer than the light source count as actual intersections. Because spheres must be in front of planes, the plane list is not checked to see if a plane casts a shadow on a sphere.

At this point, the pixel has an initial color: black because the ray missed all objects, the color of the intersected object, or the shadowed color of the intersected object. For shadows, we halve the color of the intersected object to allow for some ambient lighting effects. For the Lambertian/cosine shading, the dot product of the normalized normal with the normalized vector from the intersection point to the light source is multiplied by the object's color. Because both vectors are normalized, the dot product produces a scaling factor between 0 and 1. We offset the resulting color by the completely shadowed color, and a saturating addition makes sure the resulting color does not overflow when the offset is added.
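The shading rule can be sketched in Python (a stand-in for the hardware datapath; the 0-255 color channels and helper names are assumptions for illustration):

```python
# Sketch of the Lambertian shading above: the lit contribution is the
# cosine term (dot of unit normal and unit light vector) times the
# object color, offset by the half-intensity shadow color, with a
# saturating add to prevent overflow. Channels are 0..255 here.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(color, normal, to_light, in_shadow):
    """normal and to_light must be unit vectors."""
    ambient = tuple(c // 2 for c in color)    # completely shadowed color
    if in_shadow:
        return ambient
    cos = max(0.0, dot(normal, to_light))     # scaling factor in 0..1
    # saturating add of the Lambertian term onto the shadow offset
    return tuple(min(255, a + int(cos * c)) for a, c in zip(ambient, color))

print(shade((200, 0, 0), (0, 0, -1), (0, 0, -1), False))  # (255, 0, 0)
print(shade((200, 0, 0), (0, 0, -1), (0, 0, -1), True))   # (100, 0, 0)
```

With the light head-on the sum 100 + 200 saturates at 255 rather than wrapping, which is the overflow case the saturating addition guards against.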

The next ray could be a reflection ray, in which case the color from that ray is scaled by a reflection weight and added to the original color. If anti-aliasing is used, all the rays for each pixel are combined with different weights, as will be discussed later. Finally, once a pixel is done, the tracer moves on to the next pixel. When the last pixel of the frame has been drawn, the sphere list is again loaded from the CPU to allow for sphere motion and rotation. The steps involved in each state are discussed in more detail later.

Sphere Background Math [1]

Figure 6

Rorigin = Ro = [ Xo Yo Zo ]
Rdirection = Rd = [ Xd Yd Zd ]
R(t) = Ro + Rd * t
Rintersection = Ri = [ Xi Yi Zi ] = [ Xo + Xd * t, Yo + Yd * t, Zo + Zd * t ]
Rnormal = Rn = [ Xn Yn Zn ] = [ (Xi - Xc)/Sr, (Yi - Yc)/Sr, (Zi - Zc)/Sr ]
t = intersection distance
D^2 = L_oc^2 - t_ca^2
t_hc^2 = Sr^2 - D^2 = Sr^2 - L_oc^2 + t_ca^2
OC = Sc - Ro
L_oc^2 = OC · OC
t_ca = OC · Rd
Scenter = Sc = [ Xc Yc Zc ]
Sradius = Sr
Ssurface = Ss = [ Xs Ys Zs ]
Sr^2 = (Xs - Xc)^2 + (Ys - Yc)^2 + (Zs - Zc)^2

This final equation gives us the implicit equation for a sphere: we can test points to see if they in fact lie on the sphere's surface. The algebraic solution is as follows. By substituting X(t), Y(t), and Z(t) in the form of R(t) into the implicit equation, we get
Sr^2 = (Xo + Xd * t - Xc)^2 + (Yo + Yd * t - Yc)^2 + (Zo + Zd * t - Zc)^2
In terms of t,
A * t^2 + B * t + C = 0
A = Xd^2 + Yd^2 + Zd^2 = 1
B = 2 * (Xd * (Xo - Xc) + Yd * (Yo - Yc) + Zd * (Zo - Zc))
C = (Xo - Xc)^2 + (Yo - Yc)^2 + (Zo - Zc)^2 - Sr^2

You can then solve the quadratic equation for t and find the closest intersection point, if any.
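For reference, the algebraic solution can be sketched in Python (assuming a normalized ray direction so that A = 1; the function name is illustrative):

```python
import math

# Sketch of the algebraic ray-sphere solution: solve t^2 + B*t + C = 0
# and return the closest positive root, or None on a miss.

def sphere_t_algebraic(ro, rd, center, radius):
    B = 2 * sum(d * (o - c) for d, o, c in zip(rd, ro, center))
    C = sum((o - c) ** 2 for o, c in zip(ro, center)) - radius ** 2
    disc = B * B - 4 * C                # discriminant, with A = 1
    if disc < 0:
        return None                     # no real roots: ray misses
    t = (-B - math.sqrt(disc)) / 2      # smaller root = closer hit
    return t if t > 0 else None         # behind the origin: no hit

# Ray along +Z from the origin toward a unit sphere centered at z = 5:
print(sphere_t_algebraic((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # 4.0
```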

However, we chose to use a faster geometric solution to the intersection problem which delays the square root of the quadratic equation and offers checks to bail out of the calculations sooner if an intersection is impossible.

First we check if the ray originates inside the sphere by calculating a vector from the ray origin to the center of the sphere and its magnitude:
OC = Sc - Ro
L_oc^2 = OC · OC

If L_oc^2 is less than Sr^2, then we know the ray originated inside the sphere. If the ray originates inside any sphere, we color the pixel black and move on, because no light penetrates our spheres. (Note that this is not true of shadow rays, which may originate (at Ri) just under the surface due to the limited precision of our calculations; we ignore the result of this comparison for shadow rays.)

Next we calculate the distance from the origin to the point along the ray that is closest to the sphere's center.
t_ca = OC · Rd
If t_ca is negative, then the sphere is not in front of the ray origin (as defined by the ray direction), so we know the ray does not intersect this sphere and can move on to the next one.

Following that comparison, we calculate the half chord distance squared, where the half chord distance is the distance between the point found by t_ca and the surface of the sphere.

t_hc^2 = Sr^2 - D^2 = Sr^2 - L_oc^2 + t_ca^2
D^2 = L_oc^2 - t_ca^2

If t_hc^2 is negative, the ray misses the sphere. Otherwise, we calculate the intersection distance.
t = t_ca - sqrt(t_hc^2)
Once we have the intersection distance, we can calculate the intersection point and the normal.

Ri = [ Xo + Xd * t, Yo + Yd * t, Zo + Zd * t ]
Rn = [ (Xi - Xc)/Sr, (Yi - Yc)/Sr, (Zi - Zc)/Sr ]

We check all spheres in the sphere list in order to find the closest intersection if there is more than one.

All direction vectors are normalized in our calculations to simplify and reduce the number of calculations required. This also helps prevent overflowing when we determine the magnitude of vectors by limiting the size of the result. The inverse radius and radius squared are precomputed and stored in the sphere list to save calculation time at the expense of memory/register usage.
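Putting the checks together, the geometric test can be sketched in Python (a stand-in for our Verilog state machine, with floating point in place of 12.12 fixed point):

```python
import math

# Sketch of the geometric ray-sphere test, with the same early exits:
# origin inside the sphere, sphere behind the ray, closest approach
# missing the sphere. The square root is delayed until a hit is certain.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sphere_t_geometric(ro, rd, center, radius):
    """rd must be a unit vector. Returns t, or None on a miss/inside."""
    oc = tuple(c - o for c, o in zip(center, ro))
    l2oc = dot(oc, oc)
    if l2oc < radius ** 2:
        return None                  # ray originates inside the sphere
    tca = dot(oc, rd)
    if tca < 0:
        return None                  # sphere is behind the ray origin
    t2hc = radius ** 2 - l2oc + tca ** 2
    if t2hc < 0:
        return None                  # closest approach misses the sphere
    return tca - math.sqrt(t2hc)     # delayed square root, closer root

print(sphere_t_geometric((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # 4.0
```

Note that it agrees with the algebraic solution but can bail out after just two dot products when there is no hit, which is why we chose it for the hardware.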

Plane Background Math [1]

Rorigin = Ro = [ Xo Yo Zo ]
Rdirection = Rd = [ Xd Yd Zd ]
R(t) = Ro + Rd * t
Rintersection = Ri = [ Xi Yi Zi ] = [ Xo + Xd * t, Yo + Yd * t, Zo + Zd * t ]
P: A * x + B * y + C * z + D = 0
A^2 + B^2 + C^2 = 1
D = - (Pn · point), the distance from [ 0 0 0 ]
Pnormal = Pn = [ A B C ]

Planes, in comparison, require fewer calculations to determine if there is a ray intersection. We start with the implicit equation for a plane.
A * x + B * y + C * z + D = 0
Which can be written as,
A * (Xo + Xd * t) + B * (Yo + Yd * t) + C * (Zo + Zd * t) + D = 0
We can solve this for t, the intersection distance, and get

t = - (A * Xo + B * Yo + C * Zo + D) / (A * Xd + B * Yd + C * Zd)
t = - (Pn · Ro + D) / (Pn · Rd)
t = vo / vd , where vo = - (Pn · Ro + D) and vd = Pn · Rd

We calculate the denominator first. If vd equals zero, the ray is parallel to the plane and we can disregard it. Likewise, if vd is positive, the normal is pointing away from the ray origin, and we disregard the plane in our rendering. This is done so that planes cannot block spheres. If we move the origin behind a plane, the plane is simply not drawn. This gives the effect of a one-way mirror: from behind, the plane appears not to be there, but from in front it acts as a normal surface (mirrored if its reflectivity is not zero).

Next, we calculate the numerator vo and then t.
t = vo / vd
If t is negative, then the intersection is behind the origin and therefore not a hit. Again the intersection point is calculated.
Ri = [ Xo + Xd * t, Yo + Yd * t, Zo + Zd * t ]
We do not have to calculate the normal as we do with spheres, because we already have it in the plane table as Pn = [ A B C ], which was used in the previous calculations. Also, the normal is the same for every point on the plane (with opposite sign on the other side), which is not true of spheres. This makes planes much quicker to render than spheres.
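The plane test can be sketched the same way (a Python stand-in for the hardware; planes are stored here as (A, B, C, D) with [A B C] the unit normal):

```python
# Sketch of the ray-plane test: compute vd first with the parallel and
# back-face early exits, then vo and t = vo / vd.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_t(ro, rd, plane):
    """rd must be a unit vector. Returns t, or None on a miss."""
    normal, d = plane[:3], plane[3]
    vd = dot(normal, rd)
    if vd >= 0:
        return None              # parallel (vd == 0) or back face (vd > 0)
    vo = -(dot(normal, ro) + d)
    t = vo / vd
    return t if t > 0 else None  # negative t: intersection behind origin

# Floor plane y = -2 -> normal [0 1 0], D = 2; a ray angled downward:
print(plane_t((0, 0, 0), (0, -0.6, 0.8), (0, 1, 0, 2)))
```

Compared with the sphere test, there is a single dot product, one divide, and no square root, which matches the observation above that planes render faster.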

Reflections [1]

Figure 7

θ_incident = θ_i = θ_reflected = θ_r
R = α*I + β*N

Physics tells us that the above two statements are true: the angle of incidence equals the angle of reflection, and the reflection vector is a linear combination of the incident and normal vectors. This can be transformed into a usable equation by the following:

cos(θ_i) = cos(θ_r)
- I · N = N · R
- I · N = α*(N · I) + β

If we set α = 1, then β = - 2*(N · I). Substituting into our physical law, we get

R = I - 2*(N · I)*N

The resulting reflection vector R is automatically normalized when the incident vector I and the normal vector N are normalized.
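The formula is a two-multiply, one-dot-product operation, sketched here in Python (the function name is illustrative):

```python
# Sketch of the reflection formula R = I - 2*(N · I)*N derived above.
# I and N must be unit vectors; R then comes out normalized for free.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    k = 2 * dot(n, i)
    return tuple(ic - k * nc for ic, nc in zip(i, n))

# A 45-degree bounce off a floor with normal [0 1 0]: the downward
# Y component flips sign while the X component is unchanged.
print(reflect((0.707107, -0.707107, 0), (0, 1, 0)))
```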

Software/Hardware Tradeoff
Many of the functions in the ray tracer could be performed by either the hardware or the software. We tried to take advantage of hardware parallelism as much as possible by computing most of the arithmetic in hardware modules. The software, on the other hand, can compute more complex calculations that are not crucial to the ray tracing itself. Calculations such as sphere movement and rotation are performed by the software while the hardware is drawing the frames. This maximizes the efficiency of both parts: the hardware state machine can run as fast as possible, while the software does not sit idle waiting for the hardware. By using a real floating point unit, the software also has the advantage of higher precision than the hardware, which uses fixed point.