ECE 5760: Graphics Processing Unit


 Authors Amit Penmetcha (ap328) Shane J. Pryor (sjp45)
 Hardware Design

Floating Point Hardware:

The floating-point hardware we used was provided by Professor Land. We used four modules: one converted a 10-bit integer into an 18-bit floating-point number, another converted from floating point back to an integer, and the remaining two performed arithmetic, one a floating-point adder and the other a multiplier. We used floating-point representation for all of our computations because most of our numbers were normalized to lie between 0 and 1. It also allowed us to use a single 9-bit multiplier per computation; had we used fixed-point representation, we might have needed 2 or 3 multipliers per computation to ensure high enough precision. The floating-point representation consists of 3 parts. The MSB (bit 17) is the sign bit (1 for negative, 0 for positive). Bits 16 - 9 are the exponent. Bits 8 - 0 are the mantissa.
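As a software sketch of this bit layout (the field positions come from the description above; the function and constant names are ours, and the numerical interpretation of the fields is not shown since it belongs to Professor Land's modules):

```python
SIGN_BIT = 17      # bit 17: sign (1 = negative)
EXP_SHIFT = 9      # bits 16..9: exponent
MAN_MASK = 0x1FF   # bits 8..0: mantissa

def pack(sign, exponent, mantissa):
    """Assemble an 18-bit word from the three fields described above."""
    assert sign in (0, 1) and 0 <= exponent <= 0xFF and 0 <= mantissa <= MAN_MASK
    return (sign << SIGN_BIT) | (exponent << EXP_SHIFT) | mantissa

def unpack(word):
    """Split an 18-bit word back into (sign, exponent, mantissa)."""
    return (word >> SIGN_BIT) & 1, (word >> EXP_SHIFT) & 0xFF, word & MAN_MASK
```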

Here is a high-level view of our GPU:

Transform:

The transform is done in two main stages in the pipeline. First, the vertices are transformed into eye-space coordinates using the matrix given in the high-level design section of this site. This is done in the calcMv module, which uses a state machine to perform the two matrix multiplications between the view-transformation matrix and the vertex matrix that produce the rotated vertices. There are three instantiations of this module in the top-level module, one for each vertex of a triangle. The module performs only the necessary multiplications and additions rather than a full matrix multiplication. The normals are transformed to eye space in parallel with the vertices, using the calcMvNormal module; again there are three instantiations, one for each normal. Each normal is transformed using the transpose of the inverse of the upper-left 3x3 part of Mv given in the high-level design portion of the page. Once again, only the required multiplications and additions are performed, to save space on the board and computation time. The normals are transformed to the new coordinate system independent of the new location of the eye, because normals carry direction rather than position like the vertices. These transformations are implemented with the 18-bit floating-point hardware given to us by Professor Land for ECE 5760, with as much of the computation done in parallel as the floating-point hardware allows. The determinant was passed in as a constant from the lookup table so that we did not have to do division. These transform modules get the u-v-w vectors from the camera module.
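A minimal software sketch of the eye-space transform, assuming the standard camera matrix built from the orthonormal basis u, v, w and the eye position e (the exact Mv is defined in the high-level design section; this only mirrors the per-vertex multiply-add pattern the calcMv state machine performs, and the names are ours):

```python
def to_eye_space(u, v, w, eye, p):
    """Transform world-space point p into eye-space coordinates."""
    d = [p[i] - eye[i] for i in range(3)]       # offset from the eye position
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    return (dot(u, d), dot(v, d), dot(w, d))    # project onto the camera basis
```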

The next step of the transform is to map the vertices from eye-space coordinates to screen space. The screen space consists of integer pixel coordinates from 0 to 319 in the horizontal direction and 0 to 239 in the vertical direction. To put our values in this range for the u-axis, each unit in the horizontal direction must be multiplied by the number of pixels per unit; we used 16 pixels per unit for a u range of -10 to 10. For the v-axis, we also used 16 pixels per unit, for a v range of -7.5 to 7.5. These multiplications produce results in the [-160,160] and [-120,120] ranges. To get these into the correct screen range, we add 160 and 120 to the results, respectively. The resulting screen coordinates are then passed to the rasterizer.
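The scale-and-offset mapping above can be sketched as follows (constant and function names are ours, not from the hardware):

```python
PIXELS_PER_UNIT = 16           # 16 pixels per world unit on both axes
X_OFFSET, Y_OFFSET = 160, 120  # recenter [-160,160] / [-120,120] onto the screen

def to_screen(u, v):
    """Map eye-space (u, v) in [-10,10] x [-7.5,7.5] to 320x240 pixel coords."""
    x = int(u * PIXELS_PER_UNIT) + X_OFFSET
    y = int(v * PIXELS_PER_UNIT) + Y_OFFSET
    return x, y
```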

Lighting:

The lighting model in this project is fairly simple. The hardware uses one light shining on the object we want to display. The light is specified by a direction vector; therefore, the light acts much as the sun shines on the earth. Any part of the object facing the light is shaded; the other side of the object is black. To determine which triangles are on the lit side, the dot product of the triangle's surface normal with the light vector is computed. This value corresponds to the angle between the two vectors. If the angle is greater than 90 degrees (dot product less than 0), the triangle faces away from the light. If the angle is less than 90 degrees, the triangle faces toward the light. In that case, the shading portion of the hardware uses the result of the dot product to scale the brightness of the triangle, directly multiplying the dot-product result by the underlying color computed for that pixel of that triangle.
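In software, the shading rule above amounts to the following (a sketch with names of our choosing; normal and light are assumed to be unit vectors):

```python
def shade(normal, light, color):
    """Scale color by the normal-light dot product; back-facing gets black."""
    d = sum(n * l for n, l in zip(normal, light))
    if d <= 0:
        return (0, 0, 0)                 # facing away from the light: black
    return tuple(c * d for c in color)   # brightness proportional to cos(angle)
```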

Camera:

The camera portion provides the vectors for the transformations of the vertices, normals, and light vector, based on the current position of the viewer. The vectors are precomputed using the Java program described in the high-level layout. The camera is allowed to sit at every 45 degrees around a circle of radius 10; that is, we take 10 times the sine and 10 times the cosine of each multiple of 45 degrees. The camera module uses a LUT to output these values based upon the input position of the camera. The camera module also stores the position of the camera, the eye, for each of these eight positions, and outputs it as well. The position of the camera can be changed using two pushbuttons on the DE2 board: one button spins the camera in one direction, the other in the opposite direction.
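A sketch of how the eight eye positions could be precomputed, mirroring the Java LUT generator described above (function and parameter names are ours):

```python
import math

def camera_lut(radius=10, steps=8):
    """Eye positions every 45 degrees on a circle of the given radius."""
    lut = []
    for i in range(steps):
        theta = i * 2 * math.pi / steps   # 0, 45, 90, ... degrees in radians
        lut.append((radius * math.cos(theta), radius * math.sin(theta)))
    return lut
```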

Rasterizer:

After the coordinates have been transformed, the rasterizer determines which pixels contain an object and therefore a color, based on the input vertices of each triangle. The rasterizer uses the vertices to determine which screen coordinates lie within the triangle. To save time, bounding boxes are used: the pixels tested are limited to the area the box covers. The Gouraud shading uses the following algorithm, taken from a graphics textbook (Fundamentals of Computer Graphics, referenced in the appendix).

Using a state machine that steps through each pixel within the bounding box, we compute the barycentric coordinates for each screen coordinate by solving a system of equations using the three lines formed by the three vertices. The division is done by multiplying by the reciprocal of the constants involved, which are stored in a lookup table generated by a Java program. The values are stored in the face ROM for each triangle and then sent to the rasterizer. We compare the computed alpha, beta, and gamma values against 0 to determine whether a pixel is within the triangle, and use these values to assign its color.
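The inside-test above can be sketched in software with the standard edge-function form of barycentric coordinates (as in Fundamentals of Computer Graphics; names are ours, and the hardware multiplies by precomputed reciprocals from the LUT instead of dividing as done here):

```python
def barycentric(a, b, c, p):
    """Barycentric coordinates (alpha, beta, gamma) of point p in triangle abc."""
    def edge(p0, p1, q):
        # Signed edge function: the line through p0-p1 evaluated at q.
        return (p1[0] - p0[0]) * (q[1] - p0[1]) - (p1[1] - p0[1]) * (q[0] - p0[0])
    area = edge(a, b, c)                 # twice the signed triangle area
    alpha = edge(b, c, p) / area
    beta = edge(c, a, p) / area
    gamma = edge(a, b, p) / area
    return alpha, beta, gamma

def inside(a, b, c, p):
    """True when p lies inside (or on the boundary of) triangle abc."""
    return all(t >= 0 for t in barycentric(a, b, c, p))
```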