Results
If every sound reading were taken perfectly and all other factors were ignored, our system would be accurate to within 2×10⁻⁵ ft, limited only by the resolution of the counter we use to monitor sound pulses. Although the FPGA hardware can achieve this resolution, our current method of detecting sound cannot. Because of variations in the sound source and in microphone sensitivity, the time it takes to trigger a particular microphone is not constant. The average deviation in the recorded time counts was 1737.378 counts, which corresponds to 3.474756×10⁻⁵ s, or about 0.039 ft. The maximum deviation recorded was 2798.062 counts, which corresponds to 5.596124×10⁻⁵ s, or about 0.0629 ft. Therefore, the actual accuracy of our distance values is a little less than 0.1 ft.
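To make the conversion concrete, the sketch below turns raw counter ticks into a distance. The 50 MHz counter clock (one tick = 2×10⁻⁸ s) is implied by the figures above; the nominal speed of sound of 1125 ft/s is our assumption, so both constants are illustrative rather than taken from the design files.

    /* Sketch: converting raw counter ticks to time and distance.
     * Assumes a 50 MHz counter (implied by the deviations quoted in
     * the text) and a nominal speed of sound of 1125 ft/s. */
    #include <stdio.h>

    #define TICK_PERIOD_S 2.0e-8   /* 1 / 50,000,000 s per tick */
    #define SOUND_FTPS    1125.0   /* assumed speed of sound, ft/s */

    static double ticks_to_feet(double ticks)
    {
        return ticks * TICK_PERIOD_S * SOUND_FTPS;
    }

    int main(void)
    {
        /* The average and maximum deviations reported above. */
        printf("avg deviation: %.4f ft\n", ticks_to_feet(1737.378)); /* ~0.039 ft */
        printf("max deviation: %.4f ft\n", ticks_to_feet(2798.062)); /* ~0.063 ft */
        return 0;
    }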
The accuracy of the angle values depends heavily on the accuracy of the calculated distance values. As long as the microphone readings do not vary relative to one another within a single reading, the calculated angle should be accurate. The angle calculation also relies on the robot correctly facing the DE2 board. If these external factors are ignored, the angle calculator has a worst-case resolution of 0.8953 degrees, which occurs for very small angles (less than 5 degrees) or very large angles (greater than 175 degrees). The resolution for angles close to 90 degrees is around 0.014 degrees. Our goal was to make the system accurate to within 1 degree, so this setup meets the requirement.
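These resolution figures follow from the shape of the arccosine: its slope is gentle near 90 degrees but grows without bound near 0 and 180 degrees, so a fixed quantization of the cosine argument yields very different angular steps. The sketch below assumes the angle is recovered with acos() from the distances to two microphones and a known baseline (a law-of-cosines formulation); the actual formula in our hardware may differ, and the quantization step EPS is purely illustrative.

    /* Sketch: angle recovery via the law of cosines, and why the
     * angular resolution degrades near 0 and 180 degrees. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define EPS 1e-4    /* assumed quantization of the cosine argument */

    /* Angle at microphone 1, given distances d1 and d2 to the two
     * microphones and their separation b. */
    static double angle_deg(double d1, double d2, double b)
    {
        double c = (d1 * d1 + b * b - d2 * d2) / (2.0 * d1 * b);
        if (c > 1.0)  c = 1.0;   /* clamp against rounding error */
        if (c < -1.0) c = -1.0;
        return acos(c) * 180.0 / M_PI;
    }

    int main(void)
    {
        printf("example angle: %.2f deg\n", angle_deg(5.0, 5.2, 1.0));
        /* One EPS step in the argument near 90 deg vs. near 0 deg: */
        printf("step near 90 deg: %.4f deg\n",
               (acos(0.0) - acos(EPS)) * 180.0 / M_PI);   /* ~0.006 deg */
        printf("step near 0 deg:  %.4f deg\n",
               acos(1.0 - EPS) * 180.0 / M_PI);           /* ~0.8 deg   */
        return 0;
    }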
The Cartesian coordinate calculation rounds the angle to the nearest degree before computing the cosine and sine values. Over the short distances our robot operates, this rounding does not make a large difference. At 10 ft from the FPGA (the maximum distance the robot currently moves from it), the resolution of the Cartesian coordinate calculation is within 0.001523 ft, which meets our requirement of being accurate to within 1 ft.
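The rounding described above suggests a whole-degree sine/cosine table; the sketch below shows one way such a conversion might look. The table layout and names are illustrative, not taken from the actual design.

    /* Sketch: polar-to-Cartesian conversion with sine/cosine values
     * tabulated at whole-degree resolution. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double sin_tab[360], cos_tab[360];

    static void init_tables(void)
    {
        for (int i = 0; i < 360; i++) {
            sin_tab[i] = sin(i * M_PI / 180.0);
            cos_tab[i] = cos(i * M_PI / 180.0);
        }
    }

    /* Round the angle to the nearest whole degree, then scale. */
    static void to_cartesian(double dist, double angle,
                             double *x, double *y)
    {
        int deg = ((int)(angle + 0.5)) % 360;
        *x = dist * cos_tab[deg];
        *y = dist * sin_tab[deg];
    }

    int main(void)
    {
        double x, y;
        init_tables();
        to_cartesian(10.0, 45.3, &x, &y);   /* rounds 45.3 to 45 deg */
        printf("x = %.3f ft, y = %.3f ft\n", x, y);
        return 0;
    }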
We were very satisfied with the usability of the system. A lay operator could scan a room after only a brief explanation of how the system worked. While calibration is relatively time-consuming, it is not necessary for normal operation.
There are a few aspects of our design that worked adequately but fell short of what we had initially hoped. First, in order to achieve the accuracy we desired, we need to take a large number of readings to produce one sample, which means spending more time pinging. While each ping can be processed quickly, having to perform dozens for each sample slows the process down. Another issue was the requirement that the robot face the Nios while taking samples; the robot could move faster if we could produce non-directional pings. Also, while our system does not interfere with anyone else's project, it does produce quite a bit of noise pollution. We originally wanted the system to operate above the range of human hearing, but we had to lower the frequency to satisfy the microphone requirements.
Conclusions
Overall, this project met our expectations in terms of performance. We were able to reliably track a sound source as it moved throughout a room. Furthermore, we were able to direct the motion of the sound source based on data collected and analyzed. We were also able to capture data from the sound source regarding alcohol levels present in the room.
The most significant achievement of this project is that we developed a very cheap, relatively robust way to determine our robot's location. It would be trivial to expand this system to include many more robots with many more sensors. This form of navigation could be useful for controlling swarms that map an area or cooperate on a task.
There are many things we would do differently if we were to repeat this project. First, we would use different microphones, ones with less unit-to-unit variation that could detect higher-frequency sounds. This leads to the next change: using a different sound frequency, preferably one that humans cannot hear. One change that would dramatically improve the behavior of our system would be to have it dynamically adjust its sensor calibration based on room conditions. In other words, the system would include temperature and humidity sensors to measure the condition of the air, and the slope and y-intercept values of the microphone calibration lines would be adjusted accordingly (to compensate for the varying speed of sound and the varying behavior of the electrical components in the sound-receiving system).
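As a sketch of how such compensation might work, the code below rescales an assumed calibration slope using the standard first-order approximation for the speed of sound in air, c = 331.3 + 0.606·T m/s (T in degrees Celsius). The humidity correction, which is smaller, is omitted, and the calibration-line update itself is hypothetical, since we did not work out how the slope and y-intercept would be recomputed.

    /* Sketch: temperature-compensated speed of sound applied to a
     * hypothetical distance-calibration slope (ft per counter tick). */
    #include <stdio.h>

    #define M_TO_FT       3.28084
    #define TICK_PERIOD_S 2.0e-8    /* assumed 50 MHz counter */

    /* First-order approximation, dry air. */
    static double speed_of_sound_ftps(double temp_c)
    {
        return (331.3 + 0.606 * temp_c) * M_TO_FT;
    }

    int main(void)
    {
        /* Recompute the ft-per-tick slope as the room warms up. */
        printf("slope at 20 C: %.4e ft/tick\n",
               TICK_PERIOD_S * speed_of_sound_ftps(20.0));
        printf("slope at 30 C: %.4e ft/tick\n",
               TICK_PERIOD_S * speed_of_sound_ftps(30.0));
        return 0;
    }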
The final change we would make, given more time or another opportunity to work on this project, would be to use a more stable robot. Our current design is a tissue box with treads and Tupperware on top. Although this design worked for our testing purposes, it is not very sturdy. Furthermore, we believe the accuracy of the robot's behavior, especially its turning, would improve with a more robust design (one whose treads do not constantly loosen as the tissue box wears).
Our design does conform to the applicable standards.
The only standard that applies to our robot is the FCC's, due to the wireless communication between the robot and the sound-receiving system. This communication is done with XBee modules, which are FCC approved. Because these devices are sold as FCC-licensed devices, no other legal approval is required to use them in our design.
The design for this project was strictly of our own creation. The only code that we did not engineer was for the Nios II processor, the floating-point-to-integer conversion, the floating-point divide, the PLL, and the reset delay functions. This code was generated by the Altera SOPC Builder and the Altera MegaFunction wizard, and we do not take credit for it. We claim only the code that surrounds these functions in our system, as well as the code written for the Nios II processor.
In summary, to demonstrate our ability to perform sound localization, we created a roving robot system. The robot acts as a slave to the FPGA: every action the robot performs is explicitly controlled by the FPGA. The FPGA determines the location of the robot as well as the alcohol levels the robot detects at a particular location. From this data, it draws a diagram on a VGA screen showing where the robot has been and what it has seen. The goal of the FPGA system is to drive the robot to a target specified by the user. All of this was accomplished with reasonable accuracy, and for this reason the project was a success.