Our code can be divided into two logical portions: the raster/video generation and the actual game design itself. Our video generation scheme is the same as the one given in Labs 3 and 4 for the Mega163. A template for the Mega32 version (which we used) can be found near the bottom of the Video Generation with AVR link on the ECE 476 main page. Since the video scheme is described in detail at that link, and since several other groups are using the same template, we will not discuss it further here.
All of the Cantneroid game code executes between video lines 231 and 262 of the raster generator, when nothing is being blasted to the screen. Within this few-millisecond gap before the next frame starts, we built a multi-layered state machine that ensures only a small portion of the code runs each frame. This was necessary to keep the sync pulses in time and maintain a good video image on the screen.
The state machine consists of four nested switch statements: action, paddlenothing, paddle, and direction. The latter three states are all contained within the action state, which draws and controls the transitions between the initial screen and all three levels, stores and displays the number of lives remaining, and clears the final screen.
The paddlenothing state is reached after the user hits “start” on the initial screen or after the user misses the ball, thus losing a life. In this state, the user can move the paddle (with the ball on it) across the bottom of the screen to line up their shot. Once he/she hits “button B” on the controller, the ball begins to move and action is sent to case gameplay, which contains the last two state machines. The paddle state simply moves the paddle left or right across the bottom of the screen as dictated by the user with the Sega controller (shown further in the Hardware Design section).
The final state, direction, is by far the largest and most complicated. It not only controls the motion of the ball, but also determines when a brick “hit” has occurred and calculates which brick to erase. The four cases within this state correspond to the four possible directions the ball can be moving on the screen: up-right, down-right, up-left, and down-left. Since the ball always moves at a velocity of 1 pixel/frame, we only need to check whether the pixels immediately adjacent to the ball are illuminated. In the up-right case, we need only check the pixels directly above and to the right of the ball; in the down-left case, only the pixels below and to the left of the ball, and so on. We “check” via the video_set(x,y) function, which returns logic 1 if the pixel at screen location (x,y) is illuminated.
Each time the ball makes a collision, it transitions to the next appropriate directional state depending on its current screen direction. If the hit was a brick hit, we erase the brick by stamping a blank 8x4-pixel bitmap (the same size as the bricks) over the appropriate location on the screen. We also decided to index brick locations only at x-multiples of 8 and y-multiples of 4. This greatly simplified the process of calculating which brick needed to be erased, and it replaced costly (and slow) multiply and divide operations with logical shifts (e.g. x/8 becomes x>>3).