Amit Penmetcha (ap328)
Shane J. Pryor (sjp45)
Overall, our design nearly met our expectations. We can render complex objects consisting of hundreds of triangles and rotate them in near real time. We had hoped to render these objects at about 5 frames per second, but the result is slightly slower than that (a couple of frames per second).
We originally wanted to render objects made of thousands of triangles, but there was not enough physical memory on the DE2 board to do this and still draw in real time. We would have had to use the Flash, SD card, or SDRAM, all of which introduce additional latency. We used up a significant amount of SRAM, logic elements, and M4K blocks because we chose a higher color depth and a more accurate z-buffer. Using look-up tables for vertex, color, and normal information also consumed a great deal of memory bits and logic elements, preventing us from putting larger models on the DE2 board. The floating-point hardware allowed us to do complex calculations quickly and precisely, but it used additional logic elements that kept us from storing more object data. We would not have done anything significantly different, but we would have added more features if time had permitted.
One feature we had considered implementing was antialiasing, which averages the color data of adjacent pixels to smooth the borders of the object, similar in spirit to interpolated shading.
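We never built this in hardware, but the idea can be sketched in software. The C fragment below (names like antialias are illustrative, not from our design) is one simple post-process pass: each output pixel becomes the average of itself and its in-bounds 4-connected neighbors, which softens hard edges. We use 8-bit grayscale here for brevity; our framebuffer used a higher color depth.

```c
#include <stdint.h>

#define W 4
#define H 4

/* Hypothetical post-process antialiasing pass: each output pixel is
 * the average of itself and its in-bounds 4-connected neighbors.
 * A hard edge (0 next to 255) becomes a gradual ramp. */
static void antialias(uint8_t in[H][W], uint8_t out[H][W])
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int sum = in[y][x], n = 1;          /* center pixel        */
            if (x > 0)     { sum += in[y][x-1]; n++; }  /* left        */
            if (x < W - 1) { sum += in[y][x+1]; n++; }  /* right       */
            if (y > 0)     { sum += in[y-1][x]; n++; }  /* above       */
            if (y < H - 1) { sum += in[y+1][x]; n++; }  /* below       */
            out[y][x] = (uint8_t)(sum / n);     /* box-filter average  */
        }
    }
}
```

On the DE2 this would have cost a second framebuffer pass (and more SRAM bandwidth), which is part of why we left it out.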
This project went through many phases before we settled on the final design. First, we tried to implement a ray tracer using parallel Nios II processors. This approach was far too slow: a general-purpose CPU is not specialized for this workload. Once we decided to build dedicated graphics hardware for everything, we implemented several approaches that we later discarded. The shading for our project was originally flat shading, one color per triangle.
We were not satisfied with this approach, so we switched to interpolating Gouraud shading. This changed the construction of our pipeline: shading now happens at the end, per pixel, together with the z-buffer test, rather than in parallel with the transform. Adding the additional SRAM to gain more space for the z-buffer and color depth proved worthwhile; we realized this about halfway through the project. The final stages involved adding the camera and debugging. The camera made all of our LUTs much larger than they initially were, which greatly increased compile times during debugging. Because a compile ranged anywhere from 20 minutes to 2 hours and 30 minutes, it became difficult to work out the last few bugs in the pipeline. The camera also forced us to rewrite a good bit of the Java code to include the new coordinates in each respective file.
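The per-pixel stage described above can be sketched in software. This C fragment (the name shade_span and the data layout are illustrative, not taken from our Verilog) Gouraud-shades one horizontal span by linearly interpolating color and depth between the span's endpoints, writing a pixel only when it passes the z-buffer test:

```c
#define W 8

/* One endpoint of a horizontal span: x position, depth, and the
 * vertex shade computed earlier in the pipeline. */
typedef struct { int x; float z; float c; } Endpoint;

/* Hypothetical per-pixel stage: interpolate depth and color along the
 * span; a pixel is written only if it is closer (smaller z) than what
 * the z-buffer already holds. */
static void shade_span(Endpoint a, Endpoint b, float zbuf[W], float color[W])
{
    int dx = b.x - a.x;
    for (int x = a.x; x <= b.x; x++) {
        float t = dx ? (float)(x - a.x) / dx : 0.0f;
        float z = a.z + t * (b.z - a.z);       /* interpolated depth   */
        if (z < zbuf[x]) {                     /* z-buffer test        */
            zbuf[x] = z;
            color[x] = a.c + t * (b.c - a.c);  /* interpolated shade   */
        }
    }
}
```

Doing the interpolation and depth test together like this, at the end of the pipeline, is what let us move shading out of the transform stage.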
The only code we reused was the original top-level module and VGA controller for the DE2 board provided by Terasic, Professor Bruce Land's floating-point hardware, and the .msh and .obj files that others had created and uploaded to the internet for public use. We implemented the various algorithms we had learned in a CS graphics class in order to incorporate the different features.
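As an illustration of the preprocessing step, here is a minimal reader for the two Wavefront .obj record types we relied on, "v x y z" vertices and "f i j k" triangular faces with plain 1-based indices. It is sketched in C rather than the Java we actually used, and the names and size limits are ours, not part of the format:

```c
#include <stdio.h>

#define MAX_VERTS 1024
#define MAX_FACES 2048

typedef struct { float x, y, z; } Vec3;
typedef struct { int v[3]; } Tri;      /* 1-based vertex indices */

/* Illustrative .obj reader: collects "v" and "f" records and ignores
 * everything else (comments, normals, textured faces, etc.). */
static void parse_obj(FILE *fp, Vec3 *verts, int *nv, Tri *tris, int *nt)
{
    char line[256];
    *nv = *nt = 0;
    while (fgets(line, sizeof line, fp)) {
        Vec3 v; Tri t;
        if (sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3) {
            if (*nv < MAX_VERTS) verts[(*nv)++] = v;
        } else if (sscanf(line, "f %d %d %d", &t.v[0], &t.v[1], &t.v[2]) == 3) {
            if (*nt < MAX_FACES) tris[(*nt)++] = t;
        }
    }
}
```

Our Java tool did essentially this and then emitted the vertex, normal, and color LUT initializers that the hardware consumed.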