### Introduction

Karaoke Robot Judge is a karaoke machine with a robot Simon Cowell as a judge. For this project, we designed a karaoke machine on the PIC32 with a robot judge made of two servo motors. The user can choose one of three songs to sing from the Python GUI. The background audio is played from the DAC while the vocal melody is displayed in red on the TFT. As the user sings, their voice is also displayed on the TFT in white, so they can see how accurate their singing is in relation to the actual melody. The robot listens to the user singing. If the user makes too many errors while singing, the robot turns away from the camera; otherwise, the robot will bop its head to the music.

### High Level Design

#### Project Idea

Simon Cowell is one of the most famous talent judges in the world. What would it be like to have him judge your singing in robot form? The idea for this project stemmed from our mutual love for music and the comedic element of having a robot judge your singing ability.

#### Background Math

The main calculations in our project occurred when doing Direct Digital Synthesis (DDS) in the Timer4 Interrupt, as well as calculating the displayed vocal melody in the display thread.

In order to output sound from the DAC, we used a Timer4 Interrupt with DDS. Since we wanted to be able to play chords, we had three separate DDS units. For each note, we used a linear ramp down such that the note's amplitude would slowly decrease to zero over the course of its duration. We did this by multiplying each note output by a scaler that was defined as (duration remaining)/(total duration of the note). In addition, we also scaled our sine table lookup such that there would not be any overflow with the DAC output since we were playing up to three notes at a time.

For the display thread, we used an FFT function to find the frequency with the maximum amplitude from the ADC samples. This was how we calculated the note the user was singing and displayed it on the TFT. For the actual vocal melody, we scaled the note by 0.128, which was (ADC samples)/(sampling frequency) = 512/4000, before drawing it on the TFT screen.
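With 512 samples at 4 kHz, each FFT bin spans 4000/512 = 7.8125 Hz, which is where the 0.128 factor comes from. A small sketch (helper names are illustrative, not from our code):

```c
/* Frequency in Hz represented by an FFT bin index, for 512 samples at 4 kHz. */
static double bin_to_hz(int bin) {
    return bin * 4000.0 / 512.0;
}

/* Inverse mapping: scaling a melody note's frequency by 512/4000 = 0.128
 * puts it on the same vertical axis as the peak bin of the sung note. */
static double hz_to_bin(double hz) {
    return hz * 0.128;
}
```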

#### Logical Structure

We implemented Karaoke Robot Judge in C for the PIC32 microcontroller. We used the protothreads library to implement several threads which yielded until user input or until they were triggered by events. For example, the display thread yielded until 512 ADC samples were gathered and sampling was set to one. The threads related to the Python GUI all yielded until there was user input. We used timers 2-5 in order to generate the DAC output, sample the ADC, and control the servo motors for the robot. Timer4 was used for the DAC output, Timer5 was used for ADC sampling, and Timer23 was used to control the robot. We used the graphics library and the TFT LCD display to draw the vocal melody, as well as the user's singing. The GUI was written in Python using the pySimpleGUI library and communicated with the PIC32 over serial on port COM4. The user could pick a song, play, pause/resume, and quit the program through the GUI.

#### Hardware/Software Tradeoffs

Since this project was completed remotely over Zoom, there were latency issues when it came to both the audio and the display on the TFT screen. For the audio, there was both latency with regards to the background music from the DAC being heard by the user, as well as the user's voice being heard by the ADC. Similarly, with the video latency, there was delay with regards to the timing between when the melody was displayed on the TFT and when the user would actually see it over Zoom. In addition, the audio and video latency were not always the same, and the latency would change throughout the Zoom call. We tried our best to compensate for the latency through trial and error with regards to display timing.

We also had some memory issues with regards to the PIC32. Since we had multiple MIDI files, we needed to store header files with large 2D arrays in memory. However, the memory on the PIC32 is limited. We stored the arrays in flash memory as constants instead of main memory because of the faster access time and to reduce the amount of main memory used. However, we still ran into memory issues when trying to store our third song. As a result, we removed some of the columns from the arrays in order to free up space, which resulted in slight changes in logic.

#### Standards and Copyright

Serial communication from the Python interface to the PIC follows the RS-232 standard, and communication from the PIC to the TFT follows the SPI standard.

The MIDI files we used were either under the Creative Commons Attribution License, which means they are free to use as long as the creator is attributed, or they were free to download. In addition, we made some of our own MIDI files, so we do not believe we are infringing on any copyright.

We attribute the background music for "Save Me" and "Lost Stars" to ATs Magic Shop on YouTube. We also attribute intellectual property to Bruce Land and Hunter Adams for the remote interface setup and FFT function, as well as to the authors of the various libraries we used.

### Program and Hardware Design

#### Overall Software Design

Our overall software design consisted of the Python GUI, the timer interrupts, and the protothreads. We had three main threads: serial, display, and robot. The serial thread took in user input from the Python GUI, such as the choice of song and whether or not to start it. When the user chose a song from the listbox, the listbox thread saved the necessary values from the header file to play that song. The button thread captured which button was pressed and executed the corresponding logic, such as start or quit.

The display thread used the ADC samples and the FFT function to compute the frequency with the maximum amplitude and displayed it on the TFT screen. It also displayed the vocal track based on the array in the header file. The logic for determining when the user had made too many errors was also in this thread: if the user made too many errors in a certain number of samples, the display thread would signal the robot thread. The robot thread controlled the movement of the servo motors. If the user was doing well, the robot would nod its head; otherwise, it would turn away from the camera. We also had a quit thread that simply disabled interrupts, cleared the TFT screen, and reset the necessary variables.

The Timer4 interrupt controlled the DAC output logic. It used DDS to convert the MIDI notes in the array into sounds to be output by the DAC. The Timer5 interrupt collected the ADC samples and signaled the display thread to display them. The Timer3 interrupt set the duty cycle of the servo motors. The servo motors actually used a 32-bit timer (Timer23), so the timing was controlled by Timer2, but the interrupt was based on Timer3.

#### Python GUI

The Python GUI consisted of three buttons and a listbox. The buttons were "Start", "Pause/Resume", and "Quit". The listbox contained three different songs to choose from.

When the user selected a song, the listbox thread would set several variables that corresponded to the chosen song. For example, we set a pointer to the background track array, a pointer to the vocal track array, the time signature, the sizes of the arrays, etc. This allowed the program to know which arrays to read from and which values to use when computing the logic.

When the user pressed a button, the button thread would execute logic based on which button was pressed. If the "Start" button was pressed, the thread would enable interrupts and tell the program to start the track by setting the start variable to one. If the "Pause/Resume" button was pressed, the thread would first check whether or not the program was already paused. If the program was not paused, then it would disable interrupts and set the pause variable to one. Otherwise, it would re-enable interrupts and set pause to zero. If the "Quit" button was pressed, the program would enter the quit thread. This thread would disable all interrupts, clear the TFT display, and reset all necessary variables back to their original values. This way, the user could pick a new song to play.

#### Python Scripts

We had several Python scripts to convert the MIDI files into CSV files with our desired columns. We could then copy and paste these files as arrays in our header files on the PIC32. We used the py-midicsv library to directly convert the MIDI files into CSV files. This code can be seen in the miditocsv.py file. It simply converts the MIDI file into a CSV file and writes it to a new file. Since this file was a direct conversion from the MIDI file, it had a lot of extra information that we did not need for our program.

The next script we had was sort_csv.py. This file reads the converted CSV file and writes a new file that only has the Note_on_c, Note_off_c, and tempo events. This file also sorts the events based on their timing in MIDI clicks.

We then had a script called duration.py, which was used to calculate the duration of each note. Since the converted MIDI file did not explicitly include the duration of each note, we had to calculate it based on either when the velocity of the note became zero or when there was a Note_off_c event for that note. Again, the events were sorted based on their timing in MIDI clicks. If the MIDI file was the background music, curly braces were added at the beginning and end of each row to make copying and pasting easier. The result was then written to a new file.
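Our scripts were written in Python; for illustration, the core matching rule in duration.py can be sketched in C as follows (the struct layout and names here are hypothetical):

```c
/* One time-sorted MIDI event from the converted CSV file. */
typedef struct {
    long time;      /* event time in MIDI clicks */
    int  note;      /* MIDI note number */
    int  velocity;  /* 0..127 */
    int  is_on;     /* 1 for Note_on_c, 0 for Note_off_c */
} MidiEvent;

/* Duration (in MIDI clicks) of the note started by events[start_idx]:
 * scan forward for either a Note_off for the same pitch or a Note_on with
 * velocity 0, since both conventions end a note. Events must be sorted. */
static long note_duration(const MidiEvent *events, int n, int start_idx) {
    int pitch = events[start_idx].note;
    int i;
    for (i = start_idx + 1; i < n; i++) {
        if (events[i].note == pitch &&
            (!events[i].is_on || events[i].velocity == 0))
            return events[i].time - events[start_idx].time;
    }
    return -1; /* unterminated note */
}
```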

The last script we needed was gaps.py. This file was only used for the vocal tracks. In order to display the vocal track properly on the TFT, we needed to know when there were gaps between the notes. As a result, we needed this file to manually insert gap events into the CSV file. Like the previous files, this one was also sorted by MIDI click timing. In addition, curly braces were added at the beginning and end of each row to make copying and pasting into the header files easier.

We also had a Python script called csvtomidi.py which converted a CSV file back to a MIDI file. This file was only used for testing purposes because we wanted to be able to modify the CSV file and make sure that the music still sounded okay without having to reprogram the PIC32.

#### Header Files

The CSV files were put into header files as 2D static const unsigned int arrays. For the song tracks meant for the DAC, each row was structured as either {start time in MIDI clicks, MIDI note, duration in MIDI clicks} or {time in MIDI clicks, MIDI tempo, 0}. For the vocal tracks meant for the TFT display, each row was structured as {MIDI note, duration in MIDI clicks}. In addition to these arrays, both headers saved the time signature as a static const unsigned _Accum and the size of the array as a static const unsigned int. Additionally, the main song track saved the MIDI header value as a static const unsigned int, and the song vocal header saved the tempo and MIDI start time of the track as static const unsigned ints.
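A hypothetical excerpt showing this layout (the values are illustrative, not from a real song; the real arrays were generated by the conversion scripts):

```c
/* Background-track rows: {start time in MIDI clicks, MIDI note, duration},
 * or {time, tempo in us per quarter note, 0}. Tempo values exceed 127,
 * which is how the ISR distinguishes them from notes. */
static const unsigned int example_song[][3] = {
    {0, 500000, 0},   /* tempo event: 500000 us/quarter note = 120 bpm */
    {0, 60, 24},      /* middle C for 24 clicks */
    {24, 64, 24},     /* E4 for 24 clicks */
};
static const unsigned int example_song_size =
    sizeof(example_song) / sizeof(example_song[0]);

/* Vocal-track rows: {MIDI note, duration in MIDI clicks}. */
static const unsigned int example_vocal[][2] = {
    {60, 24},
    {62, 24},
};
```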

The header file midi_lookup was simply a static const _Accum array that held the frequencies of midi notes.

#### Interrupts

We used three interrupts: one for the ADC, one for the DAC, and one for the robot. The DAC interrupt had a priority of 3, the ADC had a priority of 4, and the robot had a priority of 5.

##### The DAC Interrupt

The DAC interrupt for handling midi playback was handled by Timer4. This timer's period was set to 1000, or 40MHz/40kHz (clock rate / sample rate), to give a sampling rate of 40kHz. This was close to the standard audio sampling rate of 44.1kHz.

Within the interrupt, the entirety of the code, with the exception of clearing the interrupt flag, was within an if statement that looked at the state of the variable start. This start variable was set in the button thread when the user pressed either start or quit. If start was 1, the interrupt first ensured that the program was not going to access an array outside of its bounds. If the program was still within the bounds of the song array, it checked to see whether the MIDI time matched the start time of the vocal track. If it had, then it signaled the display thread to begin displaying.

To keep track of the time, multiple variables were used. Since the ISR was sampling at a much faster rate than the song array needed to be accessed, we had to convert from the midi clicks per second to the number of ISR samples per click. Every time the ISR encountered a row that contained tempo, samples_per_click was recalculated. The calculations are below:

bpm [quarter notes/min] = (60E6 [us/min]) / (tempo [us/quarter note])
(bpm [quarter notes/min]) / (60 [s/min]) = [quarter notes/s]
[quarter notes/s] * (time sig [clicks/quarter note]) = [clicks/s]
(2000 [samples/s]) / [clicks/s] = [samples/click]
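The chain of units above can be checked with a small floating-point sketch (the PIC32 code uses `_Accum` fixed point; the helper name and the example values of 500000 us per quarter note and 24 clicks per quarter note are illustrative):

```c
/* Convert a MIDI tempo event into ISR samples per MIDI click, following
 * the unit chain: tempo -> bpm -> quarter notes/s -> clicks/s -> samples/click. */
static double samples_per_click(double tempo_us_per_qn,
                                double clicks_per_qn,
                                double sample_rate) {
    double bpm = 60.0e6 / tempo_us_per_qn;       /* quarter notes per minute */
    double qn_per_sec = bpm / 60.0;              /* quarter notes per second */
    double clicks_per_sec = qn_per_sec * clicks_per_qn;
    return sample_rate / clicks_per_sec;
}
```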

The variable sample_count was used to keep track of the number of ISR samples. It was incremented every time the ISR ran, given that start equaled 1 and the row was still within the bounds of the song array. midi_count was used to keep track of MIDI time in MIDI clicks. To increment midi_count, sample_count was first checked to see whether it had exceeded the number of samples per MIDI click. If it had, and the program was not at the very beginning of the file, it then checked whether the row before had the same time as the current row. This was done because we had up to 3 notes playing at a time and we did not want to skip any notes. If no notes would be skipped, midi_count would be incremented to the next MIDI click.

If the midi time of the current row of the song array was the current midi_count, then the interrupt played the note. This was done by first checking to see whether the row contained a tempo or a note. The midi notes could not exceed 127, so a simple comparison of that value was enough to determine if the row contained a tempo. If it did, then the tempo calculations from above were done. If the row contained a note, the program checked to see which DDS unit was free by seeing if the DDS amplitude was 0. The program then used the midi_lookup table to convert from the note to a frequency and calculated the phase_incr variable. The duration of the note was calculated as explained in the Background Math section. Finally, the amplitude was set to 1.

Given that start equaled 1 and the row was still within the bounds of the song array, the DAC data was updated on every ISR sample. Each phase accumulator was incremented by its phase_incr value, and the amplitude was scaled by the linear envelope. The sine table was then accessed at the phase accumulator value right-shifted by 24 and multiplied by the new amplitude to produce a new data value for each DDS unit. The sine table was scaled in main so that it would never overflow, provided the maximum amplitude of each DDS unit was 1. This process created three digital sine waves, one for each unit. The three DDS units' data values were added together into one output. This value was added to 2048 so that its range was from 0 to 4095, then OR-ed with the DAC control bits and sent over SPI to the DAC. This was what allowed the PIC to play 3 notes at once. Once all three DDS units had 0 amplitude, the song was finished and the quit thread was signaled.
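The final offset-and-pack step can be sketched as follows (hypothetical helper name; the control-bit constant is the channel-A configuration from our code appendix):

```c
#define DAC_CONFIG_CHAN_A 0x3000  /* 0b0011000000000000: A-channel, 1x, active */

/* Pack a signed sample (the sum of the three DDS units, assumed to lie in
 * [-2048, 2047]) into the 16-bit SPI word: offset it to 0..4095, then OR in
 * the DAC control bits. */
static unsigned short dac_word(int sample) {
    return (unsigned short)(DAC_CONFIG_CHAN_A | ((sample + 2048) & 0xFFF));
}
```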

There were two major changes for this interrupt. Initially, we planned to be able to scale the notes based on their velocity. However, we ran out of space and had to take out the velocity column in the midi file, so all of the notes in our midi files instead had the same amplitude. Additionally, we originally planned on having a trapezoidal envelope instead of a linear one. However, after implementing the trapezoidal envelope, we discovered that we preferred the linear envelope's music box sound as opposed to the trapezoidal envelope's flute sound.

##### The ADC Interrupt

The ADC interrupt for sampling the singing was handled by Timer5. This timer's period was set to 10000, or 40MHz/4kHz (clock rate / sample rate), to give a sampling rate of 4kHz. This value was chosen because the highest note the user sings does not exceed 2kHz, so 4kHz was the lowest sampling rate allowed by the Nyquist criterion.

The setup of this interrupt was very similar to the lab 3 interrupt. The variable channel4 read the result of the channel 4 conversion of the ADC from the idle buffer on each ISR sample. If the ISR had not yet reached 512 samples, the sample was scaled by the Hann window and then placed into an array, which made the samples periodic over the sample window. ADC_count, the variable that tracked the number of ADC samples, was then incremented.

##### The Robot Interrupt

The robot interrupt was handled by Timer23, as mentioned before. Timer2 handled the timing, and its period was set to 800000, or 40MHz/50Hz (clock rate / PWM rate), to give a PWM rate of 50Hz. Within this interrupt, the PWM outputs were set to the variables pan and tilt, which determined the positions of the servos. robot_count, which will be discussed in the Robot Thread section, was also incremented.
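Assuming the timer counts at the full 40MHz peripheral clock, a compare value in ticks maps directly to a servo pulse width (hypothetical helper name):

```c
/* High-pulse width in milliseconds produced by an output-compare value of
 * `ticks`, with the timer counting at 40 MHz. */
static double pulse_ms(unsigned int ticks) {
    return ticks / 40.0e6 * 1000.0;
}
```

Under this assumption, the 60000-tick value used when opening the output compare units works out to a 1.5 ms pulse, the conventional servo center position, and the pan default of 75000 ticks to roughly 1.875 ms.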

#### Threads

The threads that were implemented were the display thread, quit thread, robot thread, button thread, and listbox thread.

##### The Display Thread

The display thread was in charge of all of the TFT display. This thread yielded until the ADC had 512 samples, the program was not in the quit state, and the program was ready for sampling. It then disabled just the ADC interrupt. The scaled ADC samples were copied into a different array called fr, and the interrupts were re-enabled to allow the ISR to refill the array. The variable ADC_count was set back to 0 at this point. Next, an array called fi was populated with zeros. Both fr and fi were passed into the FFTfix function, which computed the FFT of the audio samples. This code was originally written by Tom Roberts and Malcolm Slaney and adapted for the ECE 4760 class; it is listed in the References section below. After the FFT was taken, the alpha max beta min algorithm was used to estimate the magnitude of the FFT.

|amplitude| ≈ max(|Re|, |Im|) + 0.4 * min(|Re|, |Im|)

We only needed to compute the magnitude for the first half of the values in the array since the FFT is mirrored for real-valued signals. While we were computing the magnitude of the FFT for the array values, we were also checking for the frequency with the maximum amplitude.
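The alpha-max-plus-beta-min estimate used in place of a true sqrt(Re^2 + Im^2) can be sketched as:

```c
#include <math.h>

/* Alpha-max-plus-beta-min magnitude estimate with alpha = 1, beta = 0.4,
 * avoiding the cost of a square root on each FFT bin. */
static double amb_magnitude(double re, double im) {
    double a = fabs(re), b = fabs(im);
    double mx = a > b ? a : b;
    double mn = a > b ? b : a;
    return mx + 0.4 * mn;
}
```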

This program used the entire TFT screen to display the vocal samples and midi vocal track and moved in a scrolling fashion. Every time the column that was being drawn reached the end of the screen, it was reset back to 0. The x-axis represented time and the y-axis represented frequency.

To display the MIDI melody line, the timing of the track needed to be scaled both to fit the TFT screen and to match the timing of the song backtrack. The ADC sampled at 2000 samples/s and signaled the display thread every 512 samples, so the TFT updated about every 0.256 s, or roughly 3.9 times per second. Multiplying the resulting window duration by the number of MIDI clicks per second (calculated in the ADC ISR) converted it to the number of MIDI clicks per TFT window. Every sample had a rectangle draw_length pixels in width drawn on the TFT. Additionally, every song had its own prescaler, which was set in the listbox thread and used for individual tweaking of the timing. In the end, a scaling factor mel_scale was computed as `mel_scale = (_Accum)prescaler*(_Accum)draw_length/(clicks_per_sec*20.48)`. The variable decrement was also unique to each song and helped determine how quickly the melody iterated through its array. The variable draw_length was set to 5 pixels. If a note's duration became less than or equal to 0, that note was finished displaying and the next note was displayed.

The display worked by first clearing the columns that were being redrawn as well as 10 columns in front of it, then drawing either a gap or note 5 pixels at a time. The program then decremented the duration of the note by the decrement value, which was set within the listbox thread. The frequency of the note was multiplied by the value 0.128 (as mentioned in the Background Math section above), negated, and added to 230 to be seen on the TFT. The frequency was found through the midi_lookup table. A red rectangle 5 pixels tall and 5 pixels wide was drawn at the coordinates for the column and scaled note.

If the maximum amplitude of the FFT was not 0, meaning there was vocal input, then the frequency with the maximum amplitude was negated and added to 230 to be drawn at the same column as the melody line. A white rectangle of width and height 5 pixels was drawn at the coordinates for the column and scaled frequency.

The display thread also handled the logic for the robot judgement. Because we did not want the robot to be too sensitive, the program looked at the accuracy of 60 counts at a time. For each count, if the user either did not sing or was out of tune, the variable wrong_count was incremented. If wrong_count was greater than or equal to 30 (i.e., over half of the counts were inaccurate), the program would alert the robot by setting the variable wrong to 1, which also reset robot_count to 0. After 60 counts, both counters were reset. To determine whether the user was out of tune, the frequency of the melody note was compared to the frequency of the maximum amplitude of the FFT; if the difference between the two was larger than 200Hz, wrong_count was incremented. To allow the user to take breaths, a variable called breath_count was incremented every time the frequency of the maximum amplitude was 0Hz and reset to 0 every time vocal input was detected. If this value was greater than 10, wrong_count was incremented.
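Putting the thresholds above together, the per-count judging logic is roughly the following (the struct and function names are hypothetical; the thresholds of 60 counts, 30 wrong, 200 Hz, and 10 breath counts are the ones given in the text):

```c
#include <math.h>

typedef struct {
    int wrong_count;   /* inaccurate counts in the current window */
    int total_count;   /* counts seen so far in the current window */
    int breath_count;  /* consecutive silent counts */
    int wrong;         /* flag read by the robot thread */
} Judge;

/* Process one count: sung_hz is the FFT peak frequency (0 = silence),
 * melody_hz is the current melody note's frequency. */
static void judge_count(Judge *j, double sung_hz, double melody_hz) {
    if (sung_hz == 0.0) {               /* no vocal input this count */
        if (++j->breath_count > 10)     /* longer than a breath */
            j->wrong_count++;
    } else {
        j->breath_count = 0;
        if (fabs(sung_hz - melody_hz) > 200.0)  /* out of tune */
            j->wrong_count++;
    }
    if (j->wrong_count >= 30)           /* over half the window wrong */
        j->wrong = 1;
    if (++j->total_count >= 60) {       /* reset the 60-count window */
        j->total_count = 0;
        j->wrong_count = 0;
    }
}
```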

We originally planned on having the TFT refresh with a new section of the vocal track rather than having it scroll along with the user's singing. However, clearing the screen took too much time, caused significant delays, and left the display and song track mismatched.

##### The Robot Thread

The robot thread determined the position of the servos. It was signaled by the ADC ISR when the midi count equaled the start time of the melody track. If the variable wrong was 1, pan and tilt were set to the default values of 30000 and 90000, respectively. This made the robot look like it was turning away from the camera. The robot looked away for 75 ISR samples, which was done by looking at the value of the variable robot_count. After this amount of time, wrong was set back to 0 to allow the user another chance to sing correctly.

If the user was in tune, the robot would bop its head. Pan was set to a default value of 75000 to have the robot face the camera, and the tilt position was then incremented and decremented as needed to move the robot's head up and down.

##### The Quit Thread

The quit thread was signaled when either the song finished or the user clicked the quit button. It reset all of the variables used in all of the ISRs and threads back to their default values.

##### The Button Thread

This thread waited until a button on the GUI was pressed by the user. There were three buttons: start, pause/resume, and quit. The start button set start to 1 to signal the DAC ISR to start the MIDI playback and set quit to 0. This also enabled global interrupts. If the pause/resume button was clicked, it would either disable or enable global interrupts, based on its current state. The quit button signaled the quit thread and set start to 0, causing the DAC ISR to stop the MIDI playback.

##### The Listbox Thread

The listbox thread was used to select between the three songs: Save Me, You Belong with Me, and Lost Stars. For each of these, the song and vocal pointers were set to their respective song and vocal arrays. Additionally, the time signature, size, vocal tempo, vocal size, header, and vocal start variables were all set to their respective song variables. The prescaler and decrement for the melody line display were also set here.

#### Hardware

The hardware consisted of the Big Board, which includes a port expander, ADC/DAC, TFT header-socket, programming header-plug (Pickit3), TFT display, and power supply (Figures 7-8). The ADC was configured to sample channel AN11. The PIC32 communicated with the program over the COM4 port using serial. The DAC used the pins RB4 for the SPI chip select, RB5 for the SPI MOSI, and RB15 for the SPI Sclock. Two micro servos (SG90) were arranged to allow for pan and tilt. These servos were wired to RPA2 and RPA3 to allow for pulse width modulation (PWM) control. RPA3 was used for pan and RPA2 was used for tilt. The TFT used RB0 for the D/C, RB1 for SPI chip select, RB2 for reset, RB11 for SPI MOSI, and RB14 for the SPI Sclock. We used Zoom to provide audio input to the ADC as well as view the TFT and robot.

All of the hardware was set up in the main function.

To set up the DAC, we opened Timer4 with OpenTimer4 from the peripheral library and set up the interrupt as described in the Interrupt section with ConfigIntTimer4. The CS pin was set high, and the MOSI pin was set up to allow for peripheral pin select. SDO2, in PPS output group 2, was configured to RPB5. The SPI channel was then opened.

To set up the robot, we opened Timer23 and used the macros for Timer2 in OpenTimer23 and the macros for Timer3 in ConfigIntTimer23 to set up the timer. Next, we set up the output compare units in order to generate the PWM signals. We opened OC3 and OC4 with `OpenOC3(OC_ON | OC_TIMER_MODE32| OC_TIMER2_SRC | OC_PWM_FAULT_PIN_DISABLE , 60000, 60000)` and `OpenOC4(OC_ON | OC_TIMER_MODE32| OC_TIMER2_SRC | OC_PWM_FAULT_PIN_DISABLE , 60000, 60000)`. We then configured the pins to allow for peripheral pin select. OC3 was configured to RPA3 and OC4 was configured to RPA2.

We also set up the ADC in main. Most of the ADC setup was the same as the TFT_FFT_ADC demo code (in References); the existing code was changed to turn auto sampling off and to use Timer5. The sine, Sinewave, and Hann window tables, which were used in the DAC ISR, the FFT calculations, and the ADC ISR respectively, were also populated in main. The equation for the Hann window calculation was

w(n)=0.5[1−cos(2*pi*n/512)]

where n was the index of the sample in the array.

### Results

Overall, our project performed well for all of our songs. We were able to see that the project was working both audibly and visually by hearing the MIDI playback and seeing the melody line and FFT on the TFT display. Since the playback and sampling were all done in real time, there were some issues with lag due to Zoom. The worst-case latency that we encountered over Zoom was around 40ms. Because of this, the vocal track display and the backtrack would sometimes not line up, and the display of the singing could be slightly delayed. As for the robot judge, because of the way that we judged the accuracy of the singing, the robot took around 1 second to react properly. While we did have to cast some frequencies to integers when they were decimal values, this did not produce an audible change.

There were no major safety concerns in this project due to the project being done entirely remotely.

There are not many usability concerns as the user interface is a simple GUI. Each button is labeled appropriately, making it easy to discern the purpose of each one.

### Conclusions

In the end, we were very happy with our project. We were able to meet the majority of our goals that were set in the project proposal. We were able to successfully make a karaoke machine with a robot that judged you in real time. We were able to not only play a midi file with three notes at a time, but we were also able to display a separate midi track on the TFT along with vocal input. By having a script that would convert any midi file to an array for a header, users have the capability to add their own songs if they so choose. Additionally, having a robot judge allowed for a more enjoyable experience as it added a game element to the karaoke.

In the future, we would like to match the timing of the song backtrack, the singing, and the TFT display of the vocal input and the melody line better. Much of this problem was due to the lag that was caused through having to connect to the lab desktop remotely as well as having to send and receive all audio and visual information through Zoom, which had changing latency. Additionally, we did not have the time to look into or implement any pitch detection algorithms, which would have improved the detection of the singing. Both of these together would allow us to make the robot judge more accurately.

There are several intellectual property considerations. All of our MIDI files were either under the Creative Commons Attribution License, which means they are free to use as long as the creator is attributed, or they were free to download. Additionally, the code to calculate the FFT was from code provided by Bruce Land; this code was originally written by Tom Roberts and Malcolm Slaney. The ADC setup was also adapted from Bruce Land's code. Some sections of the report are adapted from our own write-ups from Labs 3 and 4 of this course. The code for the GUI was also adapted from code provided in this course.

We made sure to follow the IEEE Code of Ethics while working on this project. There are no safety issues involved with our project, even with anyone who is not knowledgeable about this project. We only began work on this project after our idea was reviewed by the instructors and teaching assistants of the course.

### Appendix A

The group approves this report for inclusion on the course website. The group approves the video for inclusion on the course YouTube channel.

### Code Appendix

Link to our GitHub

The main program file is located at 'FinalProject/src/main.c'

#### Main code

##### main.c
```c
/*
* main.c
* gtz4, klj92
*
* This program is used for testing things that will eventually go into the
* final main.c program
*/

////////////////////////////////////
// clock AND protoThreads configure
#include "config_1_3_2.h"
// threading library
#include "pt_cornell_1_3_2_python.h"
#include

////////////////////////////////////
// graphics libraries
// SPI channel 1 connections to TFT
#include "tft_master.h"
#include "tft_gfx.h"

// midi stuff
#include "midi_lookup.h"
#include "saveme.h"
#include "saveme_vocal.h"
#include "youbelongwithme.h"
#include "youbelongwithme_vocal.h"
#include "loststars.h"
#include "loststars_vocal.h"

//== Timer 2 interrupt handler (for DAC) ======================================
// direct digital synthesis of sine wave
#define two32 4294967296.0 // 2^32
#define Fs 40000
#define WAIT {}
// DAC ISR
// A-channel, 1x, active
#define DAC_config_chan_A 0b0011000000000000
// B-channel, 1x, active
#define DAC_config_chan_B 0b1011000000000000
//
volatile unsigned int DAC_data ;// output value
volatile SpiChannel spiChn = SPI_CHANNEL2 ;	// the SPI channel to use
volatile int spiClkDiv = 2 ; // 20 MHz max speed for DAC!!
// the DDS units:
volatile unsigned int phase_accum_main1, phase_accum_main2, phase_accum_main3;
volatile unsigned _Accum amplitude1 = 0, amplitude2 = 0, amplitude3 = 0;
volatile unsigned int fcalc = two32/Fs;
// DDS sine table
#define sine_table_size 256
volatile _Accum sin_table[sine_table_size] ;
// the dds state controlled by python interface
volatile int dds_state = 1;
// the voltage specifed from python
volatile float V_data = 0;
// sine, sq, tri
volatile char wave_type = 0 ;

volatile unsigned int time;
volatile unsigned int note;
volatile int quit = 0;

// midi stuff
volatile int row = 0;
volatile int midi_count = 0;
volatile int duration = 0;
volatile int scaler = 0;

// ISR timing variables
volatile _Accum clicks_per_sec;
volatile int new_tempo = 0; // determine if tempo is being changed
volatile _Accum samples_per_click = 36;
volatile int sample_count = 0; // max value = samples_per_sec

// extra DAC stuff
volatile _Accum data1, data2, data3;
volatile int duration1, duration2, duration3;
volatile _Accum scaler1, scaler2, scaler3;
volatile int phase_inc1, phase_inc2, phase_inc3;

//====== Timer 5 interrupt (for ADC) ====================================
volatile _Accum channel4;	// conversion result as read from result buffer

// Array sizes
#define nSamp 512
#define nPixels 256

// FFT
#define N_WAVE          512    /* size of FFT 512 */
#define LOG2_N_WAVE     9     /* log2(N_WAVE) */

volatile int ADC_count = 0;
volatile _Accum omega[512];
_Accum fr[512];
_Accum fi[512];
_Accum amplitudes[512];
_Accum max_amp;
int max_freq = 0;

// volatiles for the stuff used in the ISR
volatile unsigned _Accum DAC_value; // Vref output

_Accum v_in[nSamp] ;

_Accum Sinewave[N_WAVE]; // a table of sines for the FFT
_Accum window[N_WAVE]; // a table of window values for the FFT
int pixels[nPixels] ;
int column = 50;
int freq_scale = 8; // 2*2000Hz/512 rounded, this scaling covers most singing ranges
unsigned int (* song_p)[3];
unsigned int (* vocal_p)[2];
unsigned int size;
unsigned _Accum time_sig;
unsigned int vocal_tempo;
unsigned int vocal_size;
unsigned int header;
unsigned int vocal_start;

volatile int sampling = 0;
volatile int start = 0;
_Accum prescaler = 1;
_Accum decrement = 1;

int mel_col = 0;
int mel_row = 0;
_Accum mel_dur = 0;
int mel_note = 0;

int pan = 75000;
int tilt = 66000;
char direction = 1;
char wrong = 0;
volatile int robot_count = 0;
char breath_count = 0;

int wrong_count = 0;
int total_count = 0;

char pause = 0;

//volatile int rampup1 = 0, rampup2 = 0, rampup3 = 0;
//volatile int rampdown1 = 0, rampdown2 = 0, rampdown3 = 0;
//volatile int midi1 = 0, midi2 = 0, midi3 = 0;

// timer 4 interrupt, DAC
void __ISR(_TIMER_4_VECTOR, ipl3) Timer4Handler(void)
{
// maximum 3 notes at once?

// you MUST clear the ISR flag
mT4ClearIntFlag();
if (start == 1){
if (row < size){
// check midi time
time = *(*(song_p + row) + 0);

if (midi_count == vocal_start){
sampling = 1;
column = 0;
}

if(sample_count >= (int)samples_per_click){ // go to next click
if (row > 0){ // not at very beginning of file
if(*(*(song_p + row - 1) + 0) != time){ // if previous row does not have same time
// just continue incrementing
midi_count++;
sample_count = 0;
} // otherwise stay on same click
} // otherwise at beginning of file
}
else sample_count++;

if (time == midi_count){
note = *(*(song_p + row) + 1);
duration = *(*(song_p + row) + 2);
if (note > 127){
// recalculate tempo
clicks_per_sec = (_Accum)(1000000.0/(float)note)*time_sig;
// bpm (quarter notes/min) = 60E6(us/min)/tempo(us/quarter note)
// bpm(quarter notes/min)/60(s/min) = (quarter notes/s)
// (quarter notes/s)*(time sig (clicks/quarter note)) = clicks/s
samples_per_click = (_Accum)((480/header)*2000)/clicks_per_sec;
// (2000 samples/s)/(clicks/s) = samples/click
}
else{ // not tempo
// check to see which has the lowest amplitude
if (amplitude1 == 0) { // use DDS unit 1
phase_inc1 = (int)midi_lookup[note-21]*fcalc;
duration1 = duration+midi_count;
scaler1 = 1/(_Accum)duration;
amplitude1 = 1;
}
else if (amplitude2 == 0) { // use DDS unit 2
phase_inc2 = (int)midi_lookup[note-21]*fcalc;
duration2 = duration+midi_count;
scaler2 = 1/(_Accum)duration;
amplitude2 = 1;
}
else { // use DDS unit 3
phase_inc3 = (int)midi_lookup[note-21]*fcalc;
duration3 = duration+midi_count;
scaler3 = 1/(_Accum)duration;
amplitude3 = 1;
}
}
row++;
}
}

phase_accum_main1 += phase_inc1;
phase_accum_main2 += phase_inc2;
phase_accum_main3 += phase_inc3;

// scale the dds units
if (amplitude1 > 0) amplitude1 = (_Accum)(duration1-midi_count)*scaler1;
if (amplitude2 > 0) amplitude2 = (_Accum)(duration2-midi_count)*scaler2;
if (amplitude3 > 0) amplitude3 = (_Accum)(duration3-midi_count)*scaler3;

// amplitude and DAC data calculations
data1 = sin_table[phase_accum_main1>>24]*amplitude1;
data2 = sin_table[phase_accum_main2>>24]*amplitude2;
data3 = sin_table[phase_accum_main3>>24]*amplitude3;

DAC_data = (int)(data1 + data2 + data3);

if (row >= size){
if(sample_count >= (int)samples_per_click){ // go to next click
midi_count++;
sample_count = 0;
}
else sample_count++;
if (amplitude1 == 0 && amplitude2 == 0 && amplitude3 == 0) quit = 1;
}

// === DAC Channel A =============
// wait for possible port expander transactions to complete
// CS low to start transaction
mPORTBClearBits(BIT_4); // start transaction
// write to spi2
WriteSPI2( DAC_config_chan_A | ((DAC_data + 2048) & 0xfff));
while (SPI2STATbits.SPIBUSY) WAIT; // wait for end of transaction
// CS high
mPORTBSetBits(BIT_4) ; // end transaction
//
}
}

// timer 5 interrupt, ADC
void __ISR(_TIMER_5_VECTOR, ipl4) Timer5Handler(void)
{
// clear the interrupt flag
mT5ClearIntFlag();
// read the ADC
// read the first buffer position
channel4 = (_Accum)ReadADC10(0);   // read the result of channel 4 conversion from the idle buffer
AcquireADC10(); // not needed if ADC_AUTO_SAMPLING_ON below

if (ADC_count < 512){
// scale the sample and put it in array
omega[ADC_count] = window[ADC_count]*channel4;
ADC_count++;
}
}

// === timer 23, robot interrupt =====================
void __ISR(_TIMER_3_VECTOR, ipl5) Timer3Handler(void)
{
mT3ClearIntFlag();
SetDCOC3PWM(pan);
SetDCOC4PWM(tilt);

robot_count++;
}

//=== song done =========================
static PT_THREAD (protothread_quit(struct pt *pt))
{
PT_BEGIN(pt);

while (1){
PT_YIELD_UNTIL(pt, quit == 1);
INTDisableInterrupts();
// clear display
tft_fillScreen(ILI9340_BLACK);
// reset variables
row = 0;
midi_count = 0;
sample_count = 0;
column = 0;
mel_dur = 0;
mel_row = 0;
sampling = 0;
ADC_count = 0;
amplitude1 = 0;
amplitude2 = 0;
amplitude3 = 0;
breath_count = 0;
total_count = 0;
wrong_count = 0;
wrong = 0;
pan = 75000;
tilt = 66000;
pause = 0;

}

PT_END(pt);
}

//=== FFT ==============================================================
void FFTfix(_Accum fr[], _Accum fi[], int m)
//Adapted from code by:
//Tom Roberts 11/8/89 and Malcolm Slaney 12/15/94 malcolm@interval.com
//fr[n],fi[n] are real,imaginary arrays, INPUT AND RESULT.
//size of data = 2**m
// This routine does forward transform only
{
    int mr,nn,i,j,L,k,istep, n;
    _Accum qr,qi,tr,ti,wr,wi;

    mr = 0;
    n = 1<<m; // number of points
    nn = n - 1;

    // decimation in time - re-order data
    for(m=1; m<=nn; ++m)
    {
        L = n;
        do L >>= 1; while(mr+L > nn);
        mr = (mr & (L-1)) + L;
        if(mr <= m) continue;
        tr = fr[m];
        fr[m] = fr[mr];
        fr[mr] = tr;
    }

    L = 1;
    k = LOG2_N_WAVE-1;
    while(L < n)
    {
        istep = L << 1;
        for(m=0; m<L; ++m)
        {
            // twiddle factors from the full-wave sine table
            j = m << k;
            wr =  Sinewave[j+N_WAVE/4]; // cosine
            wi = -Sinewave[j];          // sine
            for(i=m; i<n; i+=istep)
            {
                j = i + L;
                tr = (wr*fr[j]) - (wi*fi[j]);
                ti = (wr*fi[j]) + (wi*fr[j]);
                qr = fr[i] >> 1; // scale by 1/2 each stage to prevent overflow
                qi = fi[i] >> 1;
                fr[j] = qr - tr;
                fi[j] = qi - ti;
                fr[i] = qr + tr;
                fi[i] = qi + ti;
            }
        }
        --k;
        L = istep;
    }
}

// === display thread ======================================================
static PT_THREAD (protothread_display(struct pt *pt))
{
PT_BEGIN(pt);

static int draw_length = 5;
static _Accum mel_scale;

while (1){
// wait until sample array is full
PT_YIELD_UNTIL(pt, (ADC_count >= 512 && quit == 0 && sampling == 1));
// disable interrupt
DisableIntT5;
// copy scaled samples into fr
static int jj;
for (jj = 0; jj < 512; jj++){
fr[jj] = omega[jj];
}
// signal ISR to refill array
ADC_count = 0;
// enable interrupt
EnableIntT5;

// populate fi with zeros
static int ii;
for (ii = 0; ii < 512; ii++){
fi[ii] = 0;
}

// do FFT
FFTfix(fr, fi, LOG2_N_WAVE);
// get magnitude and log
// The magnitude of the FFT is approximated as:
//   |amplitude|=max(|Re|,|Im|)+0.4*min(|Re|,|Im|).
// This approximation is accurate within about 4% rms.
// https://en.wikipedia.org/wiki/Alpha_max_plus_beta_min_algorithm
// alpha max beta min algorithm
static int kk;
for (kk = 16; kk < nPixels; kk++) {
// get the approx magnitude
// reuse fr to hold magnitude
fr[kk] = max(abs(fr[kk]), abs(fi[kk])) +
(min(abs(fr[kk]), abs(fi[kk]))*(_Accum)0.4);

// find frequency with max amplitude
if (fr[kk] > max_amp){
max_amp = fr[kk];
max_freq = kk;
}
} // end for

// Reset column location to beginning once it reaches edge of screen
if (column > 320){
column = 0;
}

mel_scale = (_Accum)prescaler*(_Accum)draw_length/(clicks_per_sec*20.48);

// draw melody line
if (mel_dur <= 0){
mel_dur = (_Accum)(*(*(vocal_p + mel_row)+1))*mel_scale;
mel_note = *(*(vocal_p + mel_row));
if (mel_note < 127 && mel_note > 0){
mel_note = (int)(midi_lookup[mel_note - 21]);
}
mel_row++;
}
// draw gaps
tft_fillRect(column, 0, 3*draw_length, 240, ILI9340_BLACK);
if (mel_note > 0){
tft_fillRect(column, -((int)(mel_note*0.128))+230, draw_length, 5, ILI9340_RED);
}
mel_dur = mel_dur - decrement;

// taking breaths or singing
if (max_amp == 0) {
max_freq = 0;
breath_count++;
}
else {
// Display vocal line on TFT
tft_fillRect(column, -max_freq+230, draw_length, 5, ILI9340_WHITE);
breath_count = 0;
}

total_count++;

// if the user has not sung for too long or is out of tune, add to the wrong count
if (abs(mel_note - max_freq) > 200 || breath_count > 10){
wrong_count++;
}

// wrong, make robot turn away
if (wrong_count > 30){
wrong = 1;
robot_count = 0;
}

// reset wrong variables
if (total_count > 60){
wrong_count = 0;
total_count = 0;
}

// shift over column
column = column + draw_length;

// reset max amplitude to zero
max_amp = 0;

// NEVER exit while
} // END WHILE(1)
PT_END(pt);
}

// === robot thread ===========================================================
static PT_THREAD (protothread_robot(struct pt *pt))
{
PT_BEGIN(pt);

while(1){
PT_YIELD_UNTIL(pt, sampling == 1);
// if wrong, make robot turn away
if (wrong == 1 || breath_count > 10){
pan = 30000;
tilt = 90000;
if (robot_count > 75){
wrong = 0;
}
}
else {
// robot bops head
pan = 75000;
if (tilt < 75000 && direction){
tilt++;
} else {
tilt--;
direction = 0;
if (tilt <= 60000){
direction = 1;
}
}
}
}

PT_END(pt);
}

// === outputs from python handler =============================================
// signals from the python handler thread to other threads
// These will be used with the prototreads PT_YIELD_UNTIL(pt, condition);
// to act as semaphores to the processing threads
char new_string = 0;
char new_button = 0;
char new_list = 0 ;
// identifiers and values of controls
// current button
char button_id, button_value ;
// current listbox
int list_id, list_value ;
// current string
char receive_string[64];

// === string input thread =====================================================
// process text from python
static PT_THREAD (protothread_python_string(struct pt *pt))
{
PT_BEGIN(pt);
static int dds_freq;
//
while(1){
// wait for a new string from Python
PT_YIELD_UNTIL(pt, new_string==1);
new_string = 0;
// parse frequency command
if (receive_string[0] == 'f'){

}
//
else if (receive_string[0] == 'v'){

}
//
else if (receive_string[0] == 'h'){

}
//
else {

}
} // END WHILE(1)
PT_END(pt);
} // thread python_string

// === Buttons thread ==========================================================
// process buttons from Python for clear LCD and blink the on-board LED
static PT_THREAD (protothread_buttons(struct pt *pt))
{
PT_BEGIN(pt);
// set up LED port A0 to blink
mPORTAClearBits(BIT_0 );	//Clear bits to ensure light is off.
mPORTASetPinsDigitalOut(BIT_0);    //Set port as output
while(1){
PT_YIELD_UNTIL(pt, new_button==1);
// clear flag
new_button = 0;
// Button one -- start
if (button_id==1 && button_value==1){
start = 1;
quit = 0;
INTEnableInterrupts();
}
// Button 2 -- pause/resume
if (button_id==2 && button_value==1){
if (pause == 0){
pause = 1;
INTDisableInterrupts();
} else {
pause = 0;
INTEnableInterrupts();
}
}
// Button 3 -- quit
if (button_id==3 && button_value==1){
quit = 1;
start = 0;
}
} // END WHILE(1)
PT_END(pt);
} // thread buttons

// ===  listbox thread =========================================================
// process listbox from Python to set DDS waveform
static PT_THREAD (protothread_listbox(struct pt *pt))
{
PT_BEGIN(pt);
while(1){
PT_YIELD_UNTIL(pt, new_list==1);
// clear flag
new_list = 0;
if (list_id == 1){ // save me
if (list_value == 0){
song_p = saveme;
vocal_p = saveme_vocal;
time_sig = saveme_time_sig;
size = saveme_size;
vocal_tempo = saveme_vocal_tempo;
vocal_size = saveme_vocal_size;
header = saveme_header;
vocal_start = saveme_start_vocal;
prescaler = 0.1;
decrement = 0.1;
}
else if (list_value == 1){ // you belong with me
song_p = ybwm;
vocal_p = ybwm_vocal;
time_sig = ybwm_time_sig;
size = ybwm_size;
vocal_tempo = ybwm_vocal_tempo;
vocal_size = ybwm_vocal_size;
header = ybwm_header;
vocal_start = ybwm_start_vocal;
prescaler = 2;
decrement = 0.5;
}
else if (list_value == 2){ // lost stars
song_p = loststars;
vocal_p = loststars_vocal;
time_sig = loststars_time_sig;
size = loststars_size;
vocal_tempo = loststars_vocal_tempo;
vocal_size = loststars_vocal_size;
header = loststars_header;
vocal_start = loststars_start_vocal;
prescaler = 0.1;
decrement = 0.1;
}
}
} // END WHILE(1)
PT_END(pt);
} // thread listbox

// === Python serial thread ====================================================
// you should not need to change this thread UNLESS you add new control types
static PT_THREAD (protothread_serial(struct pt *pt))
{
PT_BEGIN(pt);
static char junk;
//
//
while(1){
// There is no YIELD in this loop because there are
// YIELDS in the spawned threads that determine the
// execution rate while WAITING for machine input
// =============================================
// NOTE!! -- to use serial spawned functions
// you MUST edit config_1_3_2 to
// (1) uncomment the line -- #define use_uart_serial
// (2) SET the baud rate to match the PC terminal
// =============================================

// now wait for machine input from python
// Terminate on the usual
PT_terminate_char = '\r' ;
PT_terminate_count = 0 ;
PT_terminate_time = 0 ;
// note that there will be NO visual feedback using the following function
PT_SPAWN(pt, &pt_input, PT_GetMachineBuffer(&pt_input) );

// Parse the string from Python
// There can be button and string events

// pushbutton
if (PT_term_buffer[0]=='b'){
// signal the button thread
new_button = 1;
// subtracting '0' converts ascii to binary for 1 character
button_id = (PT_term_buffer[1] - '0')*10 + (PT_term_buffer[2] - '0');
button_value = PT_term_buffer[3] - '0';
}

// listbox
if (PT_term_buffer[0]=='l'){
new_list = 1;
list_id = PT_term_buffer[2] - '0' ;
list_value = PT_term_buffer[3] - '0';
//printf("%d %d", list_id, list_value);
}

// string from python input line
if (PT_term_buffer[0]=='$'){
// signal parsing thread
new_string = 1;
// output to thread which parses the string
// while stripping off the '$'
strcpy(receive_string, PT_term_buffer+1);
}
} // END WHILE(1)
PT_END(pt);
} // thread serial

// === Main  ======================================================

void main(void) {

// === set up DAC on big board ========
// timer interrupt //////////////////////////
// Set up timer4: on, interrupts, internal clock, prescaler 1:1
#define TIMEOUT3 (40000000/Fs) // clock rate / sample rate = 40 MHz / 40 kHz
OpenTimer4(T4_ON | T4_SOURCE_INT | T4_PS_1_1, TIMEOUT3);

// set up the timer interrupt with a priority of 3
ConfigIntTimer4(T4_INT_ON | T4_INT_PRIOR_3);
mT4ClearIntFlag(); // and clear the interrupt flag

// control CS for DAC
mPORTBSetPinsDigitalOut(BIT_4);
mPORTBSetBits(BIT_4);
// SCK2 is pin 26
// SDO2 (MOSI) is in PPS output group 2, could be connected to RB5 which is pin 14
PPSOutput(2, RPB5, SDO2);
// 16 bit transfer CKP=1 CKE=1
// possibles SPI_OPEN_CKP_HIGH;   SPI_OPEN_SMP_END;  SPI_OPEN_CKE_REV
// For any given peripheral, you will need to match these
SpiChnOpen(SPI_CHANNEL2, SPI_OPEN_ON | SPI_OPEN_MODE16 | SPI_OPEN_MSTEN | SPI_OPEN_CKE_REV , 2);
// === end DAC setup =========

// === set up PWM ==========
#define robot_timeout (40000000/50)
OpenTimer23(T2_ON | T2_SOURCE_INT | T2_PS_1_1, robot_timeout);
ConfigIntTimer23(T3_INT_ON | T3_INT_PRIOR_5);
mT3ClearIntFlag();

OpenOC3(OC_ON | OC_TIMER_MODE32 | OC_TIMER2_SRC | OC_PWM_FAULT_PIN_DISABLE, 60000, 60000);
OpenOC4(OC_ON | OC_TIMER_MODE32 | OC_TIMER2_SRC | OC_PWM_FAULT_PIN_DISABLE, 60000, 60000);

PPSOutput(4, RPA3, OC3);
PPSOutput(3, RPA2, OC4);
// === end PWM setup ========

// === set up ADC on big board =====
// timer 5 setup for adc trigger  ==============================================
// Set up timer5: on, internal clock, prescaler 1, compare value
// This timer generates the time base for each ADC sample.
// works at 4 kHz
#define sample_rate 4000 // 4 kHz ADC sample rate
// 40 MHz PB clock rate
#define timer_match 40000000/sample_rate
OpenTimer5(T5_ON | T5_SOURCE_INT | T5_PS_1_1, timer_match);

// set up the timer interrupt with a priority of 2
ConfigIntTimer5(T5_INT_ON | T5_INT_PRIOR_2);
mT5ClearIntFlag(); // and clear the interrupt flag

// configure and enable the ADC
CloseADC10(); // ensure the ADC is off before setting the configuration

// define setup parameters for OpenADC10
// Turn module on | output in integer | trigger mode auto | enable autosample
// ADC_CLK_AUTO -- Internal counter ends sampling and starts conversion (Auto convert)
// ADC_AUTO_SAMPLING_ON -- Sampling begins immediately after last conversion completes; SAMP bit is automatically set
// ADC_AUTO_SAMPLING_OFF -- Sampling begins with AcquireADC10();
#define PARAM1  ADC_FORMAT_INTG16 | ADC_CLK_AUTO | ADC_AUTO_SAMPLING_OFF //

// define setup parameters for OpenADC10
// ADC ref external  | disable offset test | disable scan mode | do 1 sample | use single buf | alternate mode off
#define PARAM2  ADC_VREF_AVDD_AVSS | ADC_OFFSET_CAL_DISABLE | ADC_SCAN_OFF | ADC_SAMPLES_PER_INT_1 | ADC_ALT_BUF_OFF | ADC_ALT_INPUT_OFF
//
// Define setup parameters for OpenADC10
// for a 40 MHz PB clock rate
// use peripheral bus clock | set sample time | set ADC clock divider
// ADC_CONV_CLK_Tcy should work at 40 MHz.
// ADC_SAMPLE_TIME_6 seems to work with a source resistance < 1kohm
#define PARAM3 ADC_CONV_CLK_PB | ADC_SAMPLE_TIME_6 | ADC_CONV_CLK_Tcy //ADC_SAMPLE_TIME_5| ADC_CONV_CLK_Tcy2

// define setup parameters for OpenADC10
// set AN11 and  as analog inputs
#define PARAM4	ENABLE_AN11_ANA // pin 24

// define setup parameters for OpenADC10
// do not assign channels to scan
#define PARAM5	SKIP_SCAN_ALL

// use ground as neg ref for A | use AN11 for input A
// configure to sample AN11
SetChanADC10(ADC_CH0_NEG_SAMPLEA_NVREF | ADC_CH0_POS_SAMPLEA_AN11);
OpenADC10(PARAM1, PARAM2, PARAM3, PARAM4, PARAM5); // configure ADC using the parameters defined above

EnableADC10(); // Enable the ADC

// === build the sine lookup table =======
// scaled to +/-512 so three simultaneous notes still fit in the 12-bit DAC range
int ii;
for (ii = 0; ii < sine_table_size; ii++){
sin_table[ii] = (_Accum)(512*sin((float)ii*6.283/(float)sine_table_size));
}

// populate Sinewave and Hann window arrays
for (ii = 0; ii < N_WAVE; ii++) {
Sinewave[ii] = (_Accum)(sin(6.283 * ((float) ii) / N_WAVE)*0.5);
window[ii] = (_Accum)(0.5 * (1.0 - cos(6.283 * ((float) ii) / (N_WAVE - 1))));
}

// === setup system wide interrupts  ========
INTEnableSystemMultiVectoredInt();

// === TFT setup ============================
// init the display in main since more than one thread uses it.
// NOTE that this init assumes SPI channel 1 connections
tft_init_hw();
tft_begin();
tft_fillScreen(ILI9340_BLACK);
//240x320 vertical display
tft_setRotation(1); // Use tft_setRotation(1) for 320x240

// === config threads ========================
PT_setup();

// === identify the threads to the scheduler =====
// add the thread function pointers to be scheduled
// --- Two parameters: function_name and rate. ---
// rate=0 fastest, rate=1 half, rate=2 quarter, rate=3 eighth, rate=4 sixteenth,
// rate=5 or greater DISABLE thread!

pt_add(protothread_display, 0);
pt_add(protothread_quit, 0);
pt_add(protothread_buttons, 0);
pt_add(protothread_listbox, 0);
pt_add(protothread_python_string, 0);
pt_add(protothread_serial, 0);
pt_add(protothread_robot, 0);

// === initialize the scheduler ====================
PT_INIT(&pt_sched) ;
// >>> CHOOSE the scheduler method: <<<
// (1)
// SCHED_ROUND_ROBIN just cycles thru all defined threads
//pt_sched_method = SCHED_ROUND_ROBIN ;

// NOTE the controller must run in SCHED_ROUND_ROBIN mode
// ALSO note that the scheduler is modified to copy a char
// from uart1 to uart2 for the controller

pt_sched_method = SCHED_ROUND_ROBIN ;

// === scheduler thread =======================
// scheduler never exits
PT_SCHEDULE(protothread_sched(&pt_sched));
// ============================================

} // main
// === end  ======================================================
```

### Parts List

| Part | Cost |
| --- | --- |
| PIC32MX250F128B | $5 |
| TFT LCD | $10 |
| Big Board | $10 |
| Servo Motors | $3 |
| **Total** | **$28** |

### Work Distribution

Kristin wrote the Python scripts and made the Python GUI. For the report, she wrote the introduction, high level design, overall design, GUI design, Python script design, and the appendices. Both group members contributed equally to the source code for the project.

Grace wrote the interrupts, threads, hardware, results, and conclusion sections of the report. She also contributed to the source code for the project.