Design


Project Idea

Our first idea for this project was to take an old keyboard and rewire it so that the microcontroller generated sound through our own set of speakers.  Our TA then suggested that we decode the output of a MIDI keyboard and produce the appropriate sound.  Our project is designed specifically for the Studio 610 plus keyboard by FATAR.  This was the only MIDI device we had access to, so we could not make our MIDI decoding scheme any more general.  Our synthesizer is not guaranteed to work unless it is hooked up to this specific keyboard.

Project Structure

The overall setup of our project goes like this:

Keyboard press ----> MIDI output ----> optoisolator ----> UART ----> code to pick the correct note ----> DAC ----> low-pass filter ----> TV speaker.

When a key is pressed on the keyboard, several bytes of important data are sent out of the MIDI port.  Most MIDI messages consist of three bytes.  The first byte is a status byte and tells us what action is going on; note on and note off are examples of status bytes.  The second byte received when a key is pressed is the value of the key that was pressed.  Having this information allows us to look up the correct note frequency corresponding to that key press.  The description of how we make sound is found in the next section.  The third byte is the velocity byte.  We need this to detect when a key is released: when a key is released on this keyboard, the velocity byte goes to zero, and when we see a zero in the velocity byte corresponding to a note we are currently playing, we turn off that note.
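As a rough sketch of this decoding logic (not the actual project code), the idea looks something like the C below.  The names midi_byte, note_on, and note_off are our own for illustration, and 0x90 is the note-on status for MIDI channel 1, which may differ from the channel this keyboard actually uses:

#include <stdint.h>

#define MIDI_NOTE_ON 0x90            /* note-on status byte, channel 1 (assumed) */

static uint8_t current_note;
static uint8_t playing;

static void note_on(uint8_t key)     /* stand-in: the real version starts the tone */
{
    current_note = key;
    playing = 1;
}

static void note_off(uint8_t key)    /* stand-in: the real version turns Timer1 off */
{
    if (key == current_note)
        playing = 0;
}

/* Call this with each byte received from the UART. */
void midi_byte(uint8_t b)
{
    static uint8_t status;
    static uint8_t have_key;
    static uint8_t key;

    if (b & 0x80) {                  /* high bit set: this is a status byte      */
        status = b;
        have_key = 0;
    } else if (!have_key) {          /* first data byte: which key was pressed   */
        key = b;
        have_key = 1;
    } else {                         /* second data byte: velocity               */
        have_key = 0;
        if (status == MIDI_NOTE_ON) {
            if (b == 0)              /* velocity of zero means the key was released */
                note_off(key);
            else
                note_on(key);
        }
    }
}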

After the proper frequency is selected by decoding the MIDI, we output our 8-bit sine wave to the 8-bit digital-to-analog converter (DAC0808).  The DAC0808 turns eight digital signals into one analog current.  We then use a current-to-voltage op-amp to produce an analog voltage that can be plugged into a speaker.  We also used a low-pass filter to clean up some of the higher unwanted harmonics in our signal.  For a speaker, we chose to use the small TV's speaker, as this already had a built-in audio input.


Sound generation

OUR UNDERSTANDING OF HOW SOUND IS PRODUCED
The task of generating sound wasn't extremely difficult to program, but conceptually it took a while to understand.  We spent two full lab periods just setting up the correct hardware and getting our synthesizer to output a sound, but it took at least a week or two before we felt comfortable with how the sound was actually being produced.  The underlying concept is the use of the Timer1 compare match interrupt to control the frequency at which the sound bytes are output.

WHAT IS REALLY HAPPENING
The whole process starts when we decide we want to play a note.  Once we have a note, we look up the corresponding frequency in a note table, which has 61 different frequencies stored from C2 through C5, and then the top octave repeats C4 through C5 (the reason for this is addressed in another section).  With the frequency, we can set the OCR1A register to a value that depends on the frequency and the clock speed.  For our project, we set this value to OCR1A = 1000000L / frequency.  Once we had OCR1A set and the Timer1 compare match interrupt turned on, we just had to turn on Timer1 so that the code would interrupt at a set rate, giving us a place to output bytes of sound.  In the interrupt, we walk through a table of values for a given wave (the most generic was the sine wave).  Each time the interrupt is called, we get a new value from the wave's corresponding table and output it to our DAC.  After that, the hardware takes over and determines what voltages to output to the speaker.  The waves themselves have 16 unique values, giving us decent resolution.  If at any point we want to stop playing a note, we just turn off Timer1, and if we want to generate a different note, we modify OCR1A in the same manner as above.
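For illustration, here is a minimal avr-gcc style sketch of that scheme.  The 16-entry sine table, the use of PORTC for the DAC, and the assumption of a 1 MHz timer clock are choices made for this example, not necessarily what our code does, and the exact pitch produced depends on how the note-table frequencies are scaled:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

/* One period of a sine wave, 16 samples, scaled to 8 bits for the DAC0808. */
static const uint8_t sine[16] = {
    128, 176, 218, 245, 255, 245, 218, 176,
    128,  79,  37,  10,   0,  10,  37,  79
};

static volatile uint8_t phase;

/* Start a note: the compare value controls how often the ISR runs. */
void start_note(uint16_t frequency)
{
    DDRC   = 0xFF;                         /* DAC0808 inputs on PORTC (assumed)      */
    OCR1A  = (uint16_t)(1000000L / frequency);
    TCNT1  = 0;
    phase  = 0;
    TIMSK |= (1 << OCIE1A);                /* enable Timer1 compare-A interrupt      */
    TCCR1B = (1 << WGM12) | (1 << CS10);   /* CTC mode, 1 MHz timer clock (assumed)  */
    sei();
}

/* Stop a note: turning Timer1 off stops the samples, silencing the output. */
void stop_note(void)
{
    TCCR1B = 0;
}

/* Each compare match, put the next sample of the wave on the DAC. */
ISR(TIMER1_COMPA_vect)
{
    PORTC = sine[phase];
    phase = (phase + 1) & 0x0F;            /* wrap around the 16-entry table */
}

In this sketch the ISR steps through the sine table one entry per interrupt, so one full period of the output wave takes sixteen compare matches.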

Hardware/Software Tradeoffs

We use both hardware and software to implement our synthesizer.  Our software is the real heart of our project and does all of the real computing.  Our hardware is mostly used to convert the output of the keyboard into a readable signal for the MCU, and also to convert the output of the MCU into a playable signal for a speaker.

