At the end of the day, we were able to create a room with four walls and spin the viewer around a full 360 degrees. Running at the highest clock speed, we output 8 to 10 frames per second while rendering 4 polygons per frame. There was a lot of artifacting, but everything looked reasonable. Below is a picture of a wall sloping away from us:
We also ran tests that skipped the perspective projection and exercised only the hardware's Z-buffer and point-in-polygon algorithms. The picture below shows two triangles of varying depth intersecting; only the lighter colors (the nearer Z-coordinates) are shown.
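The report does not include the HDL for this test, but the per-pixel operation it exercises can be sketched in software. The edge-function formulation of point-in-triangle below is illustrative (the actual hardware may use a different point-in-polygon method), and all names are our own; smaller depth values are assumed to be nearer:

```python
# Software sketch of the per-pixel test: a point-in-triangle check
# (via signed edge functions) combined with a depth comparison that
# keeps the color of the nearest covering surface.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: >= 0 when P lies to the left of edge A->B.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def point_in_triangle(tri, px, py):
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = edge(x0, y0, x1, y1, px, py)
    e1 = edge(x1, y1, x2, y2, px, py)
    e2 = edge(x2, y2, x0, y0, px, py)
    # Inside when all three edge functions share a sign
    # (handles both clockwise and counter-clockwise winding).
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
           (e0 <= 0 and e1 <= 0 and e2 <= 0)

def shade_pixel(px, py, triangles, background=0):
    # triangles: list of (vertices, depth, color); smaller depth = nearer.
    nearest_z = float("inf")  # "nearest depth so far" for this pixel
    color = background
    for verts, z, col in triangles:
        if point_in_triangle(verts, px, py) and z < nearest_z:
            nearest_z, color = z, col
    return color
```

For two overlapping triangles where the nearer one carries the lighter color, `shade_pixel` returns the lighter color everywhere the triangles intersect, matching the behavior in the picture.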
Other tests we ran included moving multiple polygons at once and recording how the frame rate varied with the polygon count. A table of results is shown below:
| # of polygons | fps      |
|---------------|----------|
| 6             | 12 to 13 |
| 4             | 16 to 17 |
| 3             | 19 to 20 |
| 2             | 24 to 25 |
| 1             | 32 to 33 |
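As a back-of-the-envelope check (our own analysis, not from the measurements above directly), the midpoints of these fps ranges are consistent with a frame time that grows linearly in polygon count, i.e. a fixed per-frame overhead plus a roughly constant per-polygon cost. A quick least-squares fit makes this concrete:

```python
# Fit frame time t = overhead + cost * n to the measured midpoints.
# The fps midpoints are taken from the table above; the linear model
# is our own assumption about the pipeline's behavior.

counts = [1, 2, 3, 4, 6]
fps_mid = [32.5, 24.5, 19.5, 16.5, 12.5]  # midpoints of measured ranges
times = [1.0 / f for f in fps_mid]        # frame time in seconds

mean_n = sum(counts) / len(counts)
mean_t = sum(times) / len(times)
cost = (sum((n - mean_n) * (t - mean_t) for n, t in zip(counts, times))
        / sum((n - mean_n) ** 2 for n in counts))
overhead = mean_t - cost * mean_n

print(f"per-polygon cost ~{cost * 1e3:.1f} ms, "
      f"fixed overhead ~{overhead * 1e3:.1f} ms per frame")
```

The fit suggests roughly 10 ms of work per polygon on top of about 20 ms of fixed per-frame work, which would explain why the frame rate does not halve when the polygon count doubles.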
The following information describes our final usage of the FPGA:
We ended up using 42% of the FPGA. Our compilation time was about 18 minutes, due in no small part to the fact that we used 69 of the 70 available 9-bit multipliers. We chose to limit ourselves to the onboard multipliers rather than generating multipliers out of logic, because multiplication time was a key issue in our project. Interestingly, despite constantly running into memory issues, we used only 15% of the M4K blocks. The reason is that when we needed memory, we needed large amounts of it: a conventional Z-buffer routine, as opposed to our modified one, would have needed 304K of memory, whereas our solution used only a single register. Hence, at the end of the project, we were left with the majority of the M4K blocks free.
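The scale of that trade-off is easy to check. Assuming a 640x480 display with one depth entry per pixel (the report only gives the ~304K total, so the resolution is our assumption), a conventional Z-buffer's storage scales with the screen, while resolving visibility on the fly during scan-out needs just one "nearest depth so far" register that is reset at every pixel:

```python
# Back-of-the-envelope for the Z-buffer memory trade-off.
# 640x480 is an assumed display resolution, consistent with the
# ~304K figure quoted for a conventional Z-buffer.
WIDTH, HEIGHT = 640, 480
pixels = WIDTH * HEIGHT        # 307,200 screen pixels (~300K)

# Conventional approach: one stored depth entry per pixel per frame.
zbuffer_entries = pixels

# Modified approach: compare every polygon's depth at the current
# pixel while scanning out, keeping only the running minimum, so a
# single register suffices regardless of screen size.
register_entries = 1

print(f"conventional: {zbuffer_entries} entries, "
      f"modified: {register_entries} register")
```

The catch, of course, is that the single-register approach must visit every polygon at every pixel, which is part of why the frame rate falls as the polygon count grows.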