ECE 5760: Graphics Processing Unit
Background:
A hardware graphics pipeline operates in the following manner:
*Note: Slide taken from CS 465 Lecture, Fall 2007.

The key parts of this pipeline are the transformation, rasterizing, shading, and buffering stages. The transformation computes the screen coordinates (pixel x, pixel y) for each object based on its real-world coordinates. The rasterizer and shader determine the color value each pixel contains. The z-buffer determines what color should actually be displayed at each pixel based on which part of the object is in front.

Transformation / Camera View:
The w vector points in the direction opposite the gaze direction of the camera. That is, points with negative w coordinates lie on the side of the world the viewer can see. The u vector points to the right and is orthogonal to the w vector. The v vector is the cross product of these two vectors and points upward from the plane they lie in.
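As a concrete illustration, here is a minimal floating-point sketch of how such a basis can be built from the camera's gaze direction and a world-up vector. The helper names and the explicit up vector are our own assumptions (the report does not spell them out), and the hardware itself works in fixed point.

    // A minimal sketch (our own helper names; not from the report) of building
    // the camera basis (u, v, w) from the camera's gaze direction and an
    // assumed world-up vector.
    static double[] normalize(double[] a) {
        double n = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        return new double[] { a[0]/n, a[1]/n, a[2]/n };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    // gaze points where the camera looks; up is an assumed world-up vector.
    static double[][] cameraBasis(double[] gaze, double[] up) {
        double[] w = normalize(new double[] { -gaze[0], -gaze[1], -gaze[2] }); // opposite the gaze
        double[] u = normalize(cross(up, w));                                  // points to the right
        double[] v = cross(w, u);                                              // points upward
        return new double[][] { u, v, w };
    }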
Once we multiply the real-world coordinates by the matrix Mv to get the transformed coordinates, we must then map these new coordinates onto the screen properly. This involves a windowing transform that fits the real-world coordinates onto the screen at a particular ratio. That is, for each unit in the u, v, and w directions, we specify the number of pixels on the screen it corresponds to. The scene is then flattened onto the screen in this manner. Instead of doing a full matrix multiplication for this step, we do a multiply and an add to get the screen coordinates, as this eliminates unnecessary calculations: the multiply scales by the number of pixels per unit, and the addition offsets the result from the middle of the screen.
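A minimal sketch of this step is shown below, assuming a pixels-per-unit scale factor and a 640x480 screen; both values and the function names are illustrative, and the hardware performs the same multiply and add in fixed point.

    // A minimal sketch of the multiply-and-add step. The point is first
    // expressed in camera (u, v, w) coordinates, then scaled and offset from
    // the middle of the screen. PIXELS_PER_UNIT and the 640x480 resolution are
    // illustrative assumptions.
    static final double PIXELS_PER_UNIT = 100.0;
    static final int SCREEN_W = 640, SCREEN_H = 480;

    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // eye is the camera position; basis is {u, v, w} from the previous sketch.
    static int[] toScreen(double[] p, double[] eye, double[][] basis) {
        double[] d = { p[0] - eye[0], p[1] - eye[1], p[2] - eye[2] };
        double cu = dot(d, basis[0]);              // camera-space coordinates
        double cv = dot(d, basis[1]);
        double cw = dot(d, basis[2]);              // visible points have cw < 0
        int px = (int) (cu * PIXELS_PER_UNIT + SCREEN_W / 2.0);  // multiply, then offset from center
        int py = (int) (SCREEN_H / 2.0 - cv * PIXELS_PER_UNIT);  // screen y grows downward
        return new int[] { px, py };
    }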
Rasterizing/Shading:
We chose to use the Gouraud method of rendering a triangle, as it allows us to use interpolated shading across the triangle. The details of the algorithm are discussed in the hardware design section, but the background math is explained here. This method computes the barycentric coordinates of each screen coordinate (i.e., pixel). Any triangle can be described by three vertices that are not collinear, which means any point within the triangle can be described as a weighted sum of the three vertices. If the point is written as the sum λ1v1 + λ2v2 + λ3v3, where the weights sum to one, then each screen coordinate can be represented by the following two equations:

    x = λ1x1 + λ2x2 + λ3x3
    y = λ1y1 + λ2y2 + λ3y3

To determine whether the coordinate is in the triangle, all three lambdas have to be greater than or equal to zero. If the pixel is within the triangle, then its color is determined by interpolating between the three vertex colors using these same barycentric coordinates. Solving for the lambdas is discussed in further detail in the hardware design section.
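For reference, a minimal software sketch of this test and interpolation might look like the following. The formulas are the standard barycentric solution; the function names and the use of floating point are our own simplifications, since the hardware solves for the lambdas in fixed point.

    // A minimal sketch of the barycentric test and Gouraud interpolation for
    // one pixel (px, py) against a triangle with screen-space vertices
    // (x1, y1), (x2, y2), (x3, y3).
    static double[] barycentric(double px, double py,
                                double x1, double y1,
                                double x2, double y2,
                                double x3, double y3) {
        double det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3);
        double l1  = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det;
        double l2  = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det;
        double l3  = 1.0 - l1 - l2;               // the three weights sum to one
        return new double[] { l1, l2, l3 };
    }

    // The pixel is inside the triangle only if all three lambdas are non-negative;
    // its color is then the weighted sum of the three vertex colors.
    static double shade(double[] lambda, double c1, double c2, double c3) {
        return lambda[0] * c1 + lambda[1] * c2 + lambda[2] * c3;
    }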
Z-Buffer:
After the color has been adjusted based on the light source, the data enters the z-buffer stage, the final part of the graphics pipeline. Each pixel has color data stored for the object with the closest z value that is in front of the viewer. Each newly computed value is compared to what is stored in the z-buffer: if it is closer, it replaces the previous value; otherwise it is discarded.
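A minimal sketch of this compare-and-replace step, assuming a simple array-based buffer and a smaller-is-closer depth convention, might look like this:

    // A minimal sketch of the z-buffer test: keep only the fragment closest to
    // the viewer at each pixel. The array-based buffers, 640x480 resolution,
    // and "smaller z means closer" convention are illustrative assumptions.
    static final int W = 640, H = 480;
    static double[] zbuf  = new double[W * H];    // depth of closest fragment seen so far
    static int[]    frame = new int[W * H];       // color of closest fragment seen so far

    static void clear() {
        java.util.Arrays.fill(zbuf, Double.POSITIVE_INFINITY);  // start with everything "far"
    }

    static void plot(int px, int py, double z, int color) {
        int i = py * W + px;
        if (z < zbuf[i]) {      // new fragment is closer: replace the stored value
            zbuf[i] = z;
            frame[i] = color;
        }                       // otherwise the fragment is discarded
    }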
Java Software:
All of the vertex, normal, color, and face data is read into the pipeline from look-up tables (LUTs). These LUTs are different for each object to be rendered, since the geometry of each object is different. To generate them, we wrote a few Java programs that read in the values from .msh and .obj files obtained through CS 465 and TurboSquid, and output two Verilog (.v) files that could be copied and pasted into the project directory.
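The exact converter is not reproduced here, but a rough Java sketch of the flow, with an assumed output format and fixed-point scaling, might look like the following:

    // A rough sketch of the conversion flow: parse the "v x y z" lines of an
    // .obj file and print a Verilog case-statement LUT. The real programs'
    // output format, fixed-point scaling (8.8 here), and signal names are not
    // given in the report, so everything below is illustrative.
    import java.io.*;
    import java.util.*;

    public class ObjToLut {
        public static void main(String[] args) throws IOException {
            List<double[]> verts = new ArrayList<>();
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] t = line.trim().split("\\s+");
                    if (t.length == 4 && t[0].equals("v")) {   // vertex line: v x y z
                        verts.add(new double[] { Double.parseDouble(t[1]),
                                                 Double.parseDouble(t[2]),
                                                 Double.parseDouble(t[3]) });
                    }
                    // face ("f") and normal ("vn") lines would be handled the same way
                }
            }
            System.out.println("always @(*) case (addr)");
            for (int i = 0; i < verts.size(); i++) {
                double[] v = verts.get(i);
                System.out.printf("    %d: begin x = %d; y = %d; z = %d; end%n",
                        i,
                        Math.round(v[0] * 256),   // assumed 8.8 fixed-point scaling
                        Math.round(v[1] * 256),
                        Math.round(v[2] * 256));
            }
            System.out.println("endcase");
        }
    }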
Logical Structure:
The graphics pipeline runs completely in hardware. We generated various look-up tables that contain information on the normals, vertices, faces, etc., which are used in the calculations; they use up a significant portion of the logic elements. The graphics pipeline, which is discussed in the hardware section, computes which triangles should be visible under the current camera angle and stores the calculated color value in the SRAM and the z coordinate in a dual-port memory built out of M4K blocks. The VGA controller then reads these values out of the SRAM and displays them on the VGA monitor.

Hardware/Software Tradeoffs: