Testing and Debugging
This project was developed in steps leading up to the final goal. After each step we verified that the current goal had been achieved to a satisfactory level, and only then did we proceed further.
The initial step was to decide how detection of hand movement and gestures would be done. We decided to go with detection based on color recognition and wrote the code accordingly. Alterations were made in the top module so that the camera would only detect and display the colors we wanted, namely red, green, and yellow. Ideally we wanted a color unlikely to be present in the background or environment, such as fluorescent orange; however, deciding on threshold RGB values for it proved difficult, so we went with simple red and green.
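The color-recognition idea can be sketched in software as a simple per-pixel threshold test. This is only an illustration of the approach; the threshold constants below are assumptions for the sketch, not the exact values used in our Verilog top module.

```python
def classify_pixel(r, g, b):
    """Classify an 8-bit RGB pixel as 'red', 'green', or background.

    Threshold values are illustrative assumptions, not the exact
    constants from the hardware implementation.
    """
    if r > 180 and g < 100 and b < 100:
        return "red"
    if g > 180 and r < 100 and b < 100:
        return "green"
    return None  # background / not a tracked color
```

A pixel such as (220, 40, 50) would classify as red, while a gray background pixel such as (200, 200, 200) matches neither band. The weakness noted above is visible here: the same colored object under dimmer light produces lower R, G, B values and can fall outside the fixed thresholds.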
To make the detection mechanism independent of light intensity, we tried to implement the YUV color space instead of RGB. In YUV the Y component represents luminance (light intensity), so by ignoring Y, detection should in principle become intensity independent. However, switching to YUV did not help much, as detection still remained intensity dependent, so we reverted to the RGB scale.
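The idea can be sketched as follows, using the standard BT.601 RGB-to-YUV conversion and thresholding only the chroma components. The `v_min` threshold is an illustrative assumption; in principle this makes red detection independent of brightness, though as noted above it did not fully work out in practice.

```python
def rgb_to_yuv(r, g, b):
    """Convert 8-bit RGB to YUV using BT.601 coefficients.
    Y carries the luminance (light intensity); U and V carry chroma."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def is_red_chroma(r, g, b, v_min=60):
    """Detect red by thresholding only the V chroma component,
    ignoring Y. The v_min value is an illustrative assumption."""
    _, _, v = rgb_to_yuv(r, g, b)
    return v > v_min
```

Note that a neutral gray pixel has U = V = 0 regardless of how bright it is, which is exactly the property that was supposed to make chroma-only detection intensity independent.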
The next step was to store the recorded pixel values that captured the movement of the hand. Memory had to be used with care here, since the SRAM and M4K blocks were limited and we could not store pixel addresses directly. The solution was to store, for each pixel, a few bits representing the color the camera detected at that pixel; this is explained further in the design section. Once this was achieved, we noticed that many noisy pixels at random addresses were being detected in the background. To suppress this noise we added the bounded-box logic that is described in detail in the design.
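The storage scheme and the noise filter can be sketched together in software. Instead of storing each detected pixel's address, the frame buffer holds a small color code at every pixel position; detections outside a bounded box are simply discarded. The resolution, color codes, and `mark` helper below are illustrative assumptions, not the actual design.

```python
WIDTH, HEIGHT = 640, 480  # assumed VGA resolution

# Color codes: a couple of bits per pixel instead of a full address.
BG, RED, GREEN = 0, 1, 2

# Per-pixel code map, indexed by position rather than storing addresses.
bitmap = [[BG] * WIDTH for _ in range(HEIGHT)]

def mark(x, y, code, box):
    """Record a detection, ignoring pixels outside the bounded box
    (a simplified version of the noise filter described in the design)."""
    x0, y0, x1, y1 = box
    if x0 <= x <= x1 and y0 <= y <= y1:
        bitmap[y][x] = code
```

The memory saving is the point of the scheme: a full 19-bit address per detected pixel grows with the number of detections, whereas 2 bits per pixel is a fixed 640 × 480 × 2 bits regardless of how many pixels are detected.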
Bringing a cursor onto the screen was a little difficult, as we could not send two different outputs to the VGA. However, the VGA controller itself had a cursor-display function, which we altered to suit our needs.
We also assigned switch 17 on the DE2 board to act as a mode selector: in the ON position the system enters debug mode, in which, instead of a white canvas background, we display the current view captured by the camera. This was very helpful for debugging the system and for recalibrating under changing lighting conditions. Switches 0-15 adjust the exposure of the camera, with switch 15 being the MSB. However, increasing the exposure decreases the camera's frame-capture rate, so instead of increasing the exposure we developed the system in a well-lit setting.
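Packing the sixteen switches into one exposure value is simple bit composition, sketched below. The function name is a hypothetical helper for illustration; in hardware this is just wiring the switch bus to the exposure register.

```python
def exposure_from_switches(sw):
    """Combine 16 switch bits (given as a list, SW15 first) into a
    single exposure value with SW15 as the MSB.
    Hypothetical helper mirroring the switch-to-register wiring."""
    value = 0
    for bit in sw:
        value = (value << 1) | bit
    return value
```

With all switches on, the exposure value is 0xFFFF; flipping only SW15 gives 0x8000, which is why that switch has the largest effect on exposure.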
As an added feature in the last stages of the project, we tried to add depth perception so that the paintbrush size would vary with the distance of the hand from the sensor: as the user's hand moved closer, the brush would draw more pixels, and as the hand moved away, fewer. This required only minor changes to the code and we were able to implement it. However, it did not function reliably, because the sensor's depth estimate was based on the number of red or yellow pixels it detected in the entire image. Since detection was not insensitive to light, the camera could detect a varying number of pixels of the target color at the same distance, due to variation in light intensity across positions. Hence, even when the depth was held constant, the brush size varied with position, hand width, distance from the light source, and so on. In the end we decided not to include this feature in our final submission.
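The pixel-count-to-brush-size mapping can be sketched as below. The minimum, maximum, and scale constants are illustrative assumptions; the sketch also makes the failure mode plain, since the count, and therefore the brush size, depends directly on how many pixels pass the color threshold under the current lighting.

```python
def brush_size(pixel_count, min_size=2, max_size=16, scale=500):
    """Map the count of detected hand-colored pixels to a brush radius:
    more pixels (hand closer to the camera) -> larger brush.
    All constants are illustrative assumptions."""
    size = min_size + pixel_count // scale
    return min(size, max_size)
```

If lighting changes shift the detected count from, say, 1000 to 2000 pixels at the same hand distance, the brush jumps from radius 4 to radius 6, which is exactly the unreliability that led us to drop the feature.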