Introduction

High Level Design

Program/Hardware Design

Results

Conclusions

Appendix A: Code

Appendix B: Schematics

Appendix C: Parts Listing 

Appendix D: Specific Tasks

References

Pictures

 

Xuemin Hang: xh24@cornell.edu

Marcel Xu: mx23@cornell.edu

 

 

 

HIGH LEVEL DESIGN

Explanation of the Project

Selection of Musical Instruments

The different instrument sounds can be selected via the PC interface (HyperTerm). Each instrument can be selected at any time by typing its letter (as listed in the “Instruments Menu” below) in HyperTerm. The operation of the PC interface and the music keyboard interface is fully concurrent, i.e., the keyboard continues to work correctly even while an instrument is being selected at the PC interface.

 

INSTRUMENTS MENU

WOODWIND:
            Piccolo                 ‘p’
            Flute                   ‘f’
            Clarinet                ‘c’
            Oboe                    ‘o’

BRASS:
            Horn                    ‘h’ for long note, ‘i’ for fading off
            Trombone                ‘t’ for long note, ‘u’ for fading off
            Trumpet                 ‘r’ for long note, ‘s’ for fading off

STRING:
            Plucked String          ‘z’ (sounds like a zither)

PERCUSSION:
            Steel Drum              ‘y’ (or snare drum)

 

 

These HyperTerm menu options are also listed on our physical project setup.

          

Educational TV Games

The first game, called “Music Test 1”, displays a single random note at a fixed time interval on the music stave. The corresponding note has to be played on our music keyboard in order for the player to score a point. The game lasts for one full minute and a countdown timer displays the remaining time in seconds. Three speed levels are available to suit the player’s reaction time and sight-reading ability.      

The second game, called “Music Test 2”, displays a two-note chord made up of random notes at a fixed time interval on the music stave. Both notes of the chord have to be played on our music keyboard for the player to score a point; playing only one note of the chord earns no points. As in the first game, it lasts for one full minute, a countdown timer displays the remaining time in seconds, and three speed levels are available to suit the player’s reaction time and sight-reading ability.
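The scoring idea behind both games is simple; the following sketch (in C, with all names hypothetical rather than taken from our actual code) illustrates it for “Music Test 1”: a new random target note is drawn at each display interval, and a point is scored only if the matching key is pressed while time remains.

#include <stdint.h>
#include <stdlib.h>

#define NUM_KEYS 22                 /* 22 playable keys on our keyboard          */

static uint8_t  target_note;        /* note currently drawn on the stave         */
static uint16_t score;              /* player's score for this one-minute game   */
static uint8_t  seconds_left = 60;  /* driven by the countdown timer             */

/* Called once per display interval: pick the next random note to draw. */
void new_target(void)
{
    target_note = (uint8_t)(rand() % NUM_KEYS);
}

/* Called whenever a keyboard key is pressed during the game. */
void on_key_press(uint8_t key)
{
    if (seconds_left > 0 && key == target_note)
        score++;
}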

The TV options are selected via two pushbuttons, XX and YY. In music-display mode, button XX selects between displaying sharps or flats; in each game mode, it selects between difficulty levels 1, 2, and 3. Button YY selects between the display and game modes on the TV. The detailed menu options are also written next to the pushbuttons on our actual project setup.

 

Logical Structure

The high-level structure of our design is illustrated in the following diagram:

[Block diagram: the music keyboard feeds both MCUs; MCU1 drives the DAC for the TV audio, MCU2 generates the TV video, and the PC connects to MCU1 over the serial link.]

The music keyboard signals are connected as inputs to both MCUs, whose port pins are configured as inputs with pull-ups enabled. When a button on the music keyboard is pushed, the corresponding port pins on both MCUs are pulled low (to ground). The corresponding music note is digitally synthesized by MCU1 and output to a DAC, which sends an analog signal to the TV’s audio input. At the same time, the corresponding note is drawn on the stave and the video signal is output by MCU2 to the TV.
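A minimal sketch of this input configuration, assuming an ATmega32-class AVR; the port names and pin assignments here are illustrative, not our actual mapping:

#include <avr/io.h>
#include <stdint.h>

/* Keyboard lines are inputs with the internal pull-ups enabled (DDR bit = 0,
 * PORT bit = 1), so an idle key reads 1 and a pressed key, which shorts the
 * line to ground, reads 0. */
void keyboard_init(void)
{
    DDRA = 0x00;  PORTA = 0xFF;
    DDRB = 0x00;  PORTB = 0xFF;
}

/* Returns nonzero if the key wired to the given bit of PINA is pressed. */
uint8_t key_pressed(uint8_t bit)
{
    return !(PINA & (1 << bit));
}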

MCU1 also takes in inputs from HyperTerm on the PC to control the instrument sound that is played. The USART provides the serial communication link between the PC and MCU1.
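A sketch of this serial link, assuming an ATmega32-class USART clocked at 16 MHz and running at 9600 baud, 8N1; the function names are ours for illustration only:

#include <avr/io.h>

#define F_CPU      16000000UL
#define BAUD       9600UL
#define UBRR_VALUE (F_CPU / (16UL * BAUD) - 1)     /* asynchronous normal mode */

/* Initialize the USART for the PC link: enable receiver and transmitter,
 * 8 data bits, no parity, 1 stop bit. */
void usart_init(void)
{
    UBRRH = (uint8_t)(UBRR_VALUE >> 8);
    UBRRL = (uint8_t)UBRR_VALUE;
    UCSRB = (1 << RXEN) | (1 << TXEN);
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);
}

/* Poll for a single-character command typed in HyperTerm, e.g. 'p' for the
 * piccolo or 'z' for the plucked string.  Returns 0 if nothing is pending. */
char usart_get_command(void)
{
    if (UCSRA & (1 << RXC))
        return (char)UDR;
    return 0;
}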

 

Hardware/Software Tradeoffs

The music keyboard is made up of individual pushbuttons; a full two-octave keyboard requires 24 of them. Connecting 24 pushbutton signal lines to the MCU would use up 3 of the 4 I/O ports on each MCU, leaving only 1 port available for output to the TV. Since MCU1 (for music synthesis) needs all 8 bits of its output port to deliver an 8-bit digital value to the DAC, there would be no I/O port pins left for additional functions such as selecting between instrument sounds. This hardware constraint of a limited number of port pins led us to sacrifice the two black keys at the bottom end of the two octaves. The resulting keyboard is illustrated below:

[Figure: keyboard layout showing the two octaves with the two lowest black keys omitted.]

The two “freed” port pins are then used for USART communication between MCU1 and the PC, and for additional TV control inputs to MCU2 from the pushbuttons XX and YY. The USART link allows the user to select between instrument sounds by typing commands in HyperTerm on the PC. Buttons XX and YY control the display options (sharps or flats) on the stave, the three speed levels for the games, and the choice between displaying the notes played and playing the educational games. The sacrifice of the two black keys is justified by the numerous additional options that these two “freed” port pins provide.

Another tradeoff in our project is that implementing the music synthesis and the TV display concurrently requires two MCUs, which also means soldering our own prototype board, since we cannot use more than one STK500. The main reason for using two MCUs is that the TV generation routine is timing-critical and the interrupt service routines for the video content lines have to be entered from sleep mode. With the two game functions being executed during the non-content lines (once per frame), there is insufficient time (and there are too few port pins!) left to also perform the digital music synthesis needed for chords and the different timbre options on the same MCU. Using two MCUs was therefore unavoidable. One advantage, however, is that programming the music and the TV display on different MCUs allowed the audio and video features of the project to be tested separately before being integrated.
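A minimal sketch of the sleep-before-interrupt technique mentioned above, assuming avr-libc and a Timer1 compare interrupt paced at the line rate (the timer setup and ISR body are omitted). Waking from idle sleep gives the line ISR a constant entry latency, which keeps the sync timing jitter-free:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

/* Fires once per video line: generates the sync pulse and, on content
 * lines, the pixel data for that line. */
ISR(TIMER1_COMPA_vect)
{
    /* ... sync and pixel generation ... */
}

int main(void)
{
    /* ... timer, port and video-buffer setup ... */
    set_sleep_mode(SLEEP_MODE_IDLE);
    sei();
    for (;;)
    {
        sleep_mode();   /* CPU is asleep when the line interrupt arrives,   */
                        /* so the ISR always starts with the same latency   */
        /* game and display updates run here, during the non-content lines */
    }
}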

 

Background Physics, Math and Music Synthesis

Musical Instruments and Harmonics

Musical instruments usually produce sound from vibrating air columns (cylindrical or conical) or vibrating strings. The lowest resonant frequency of a vibrating object is called its fundamental frequency. Most instruments vibrate at all harmonics of the fundamental (a harmonic is defined as an integer multiple of the fundamental frequency).

The nth harmonic = n * the fundamental frequency; for example, the 3rd harmonic of a 110 Hz fundamental is 330 Hz.

 

Pitch

The pitch of a musical note depends on its frequency. An octave is a musical interval defined by the frequency ratio 2:1, regardless of the starting frequency. The standard note frequencies (in Hz), spanning the range used on our keyboard, are listed in the following table:

 

Index   Note      Octave=1   Octave=2   Octave=3
  0     A          110        220        440
  1     A#/Bb      116.541    233.082    466.164
  2     B          123.471    246.942    493.883
  3     C          130.813    261.626    523.251
  4     C#/Db      138.591    277.183    554.365
  5     D          146.832    293.665    587.33
  6     D#/Eb      155.563    311.127    622.254
  7     E          164.814    329.628    659.255
  8     F          174.614    349.228    698.456
  9     F#/Gb      184.997    369.994    739.989
 10     G          195.998    391.995    783.991
 11     G#/Ab      207.652    415.305    830.609
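These values follow 12-tone equal temperament: each semitone step multiplies the frequency by 2^(1/12), and each octave doubles it. A short sketch (standard C, for illustration only) that reproduces the table starting from A = 110 Hz:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const char *names[12] = { "A", "A#/Bb", "B", "C", "C#/Db", "D",
                              "D#/Eb", "E", "F", "F#/Gb", "G", "G#/Ab" };

    for (int n = 0; n < 12; n++) {
        /* Octave 1 frequency: n semitones above A = 110 Hz. */
        double f1 = 110.0 * pow(2.0, n / 12.0);
        /* Each higher octave doubles the frequency. */
        printf("%2d  %-6s  %8.3f  %8.3f  %8.3f\n",
               n, names[n], f1, 2.0 * f1, 4.0 * f1);
    }
    return 0;
}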

 

Wavetable Synthesis

Wavetable synthesis is quickly replacing FM synthesis in better-quality sound cards today. Wavetable synthesis creates realistic, high-quality sound by using actual recordings of real musical instruments. The technique is similar to the digital sine wave generation that we did in ECE 476 Lab 2, but instead of a table of sine values, a wave lookup table containing digitized samples of a single period of a particular waveshape is used. In addition, as the musical note evolves, the waveshape is changed dynamically, generating a quasi-periodic function in time.

For our project, in order to produce sounds that resemble real musical instruments, we use a modified version of wavetable synthesis. Instead of storing samples of recorded instrument sounds in a wavetable, we create our own instrument database by studying the characteristic waveforms of various instruments and experimenting with the theoretical harmonic equations in Matlab. We also modulate the wave envelope to create different instrument sound effects.
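As an illustration of this harmonic-sum approach (the table size and harmonic amplitudes below are assumptions for the example, not our actual instrument data), a single-period wavetable can be built in C as follows:

#include <math.h>
#include <stdint.h>

#define TABLE_SIZE 256
#define TWO_PI     6.28318530717958647692

int8_t wavetable[TABLE_SIZE];       /* one period of the instrument waveform */

/* Fill the wavetable with a weighted sum of the first n_harmonics harmonics,
 * then scale the result to the signed 8-bit range used by the DAC output. */
void build_wavetable(const double *amp, int n_harmonics)
{
    double raw[TABLE_SIZE];
    double peak = 0.0;

    for (int i = 0; i < TABLE_SIZE; i++) {
        double sum = 0.0;
        for (int h = 1; h <= n_harmonics; h++)
            sum += amp[h - 1] * sin(TWO_PI * h * i / TABLE_SIZE);
        raw[i] = sum;
        if (fabs(sum) > peak)
            peak = fabs(sum);
    }
    for (int i = 0; i < TABLE_SIZE; i++)
        wavetable[i] = (int8_t)(127.0 * raw[i] / peak);
}

For instance, a clarinet-like tone can be approximated by giving most of the weight to the odd harmonics.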

The phase increments needed for the frequency synthesis are calculated in the same way as in lab 2:

        phase increment = frequency * 2^24 * 40 / 16 MHz
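A sketch of the corresponding direct digital synthesis step, assuming a 24-bit phase accumulator whose top 8 bits index a 256-entry wavetable and a sample rate of 16 MHz / 40 (the variable and function names are illustrative, not from our code):

#include <stdint.h>

#define SAMPLE_RATE 400000UL        /* 16 MHz / 40, per the formula above        */

extern int8_t wavetable[256];       /* filled by the harmonic-sum sketch above   */

static uint32_t phase_acc;          /* 24-bit phase accumulator (held in 32 bits) */
static uint32_t phase_inc;          /* per-sample increment for the current note  */

/* phase increment = frequency * 2^24 / (16 MHz / 40) */
void set_note(uint32_t freq_hz)
{
    phase_inc = (uint32_t)(((uint64_t)freq_hz << 24) / SAMPLE_RATE);
}

/* Called once per output sample: advance the phase, wrap at 2^24, and use
 * the top 8 phase bits to look up the next DAC value (offset to unsigned). */
uint8_t next_sample(void)
{
    phase_acc = (phase_acc + phase_inc) & 0x00FFFFFFUL;
    return (uint8_t)(wavetable[phase_acc >> 16] + 128);
}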

 

Standards

RS232

RS232 is a physical interface standard specified by the Electronic Industries Association (EIA) for serial transmission of data between two devices over a cable, normally carrying between ±5 V and ±12 V on both data and control signal lines. The standard allows a single device to be connected (point-to-point) at baud rates up to 9600 bps over distances up to 15 meters; more recent implementations may allow higher baud rates and greater distances. The RS232 standard also defines the pins and plug in terms of size, shape and number of pins. The prefix "RS" means “recommended standard”; the standards are now generally labeled "EIA" standards to identify the standards organization. Our project uses the RS232 standard for the USART communication link between the PC and MCU1.


NTSC video

NTSC stands for National Television System Committee, which devised the NTSC television broadcast system in 1953. The NTSC standard has a fixed vertical resolution of 525 horizontal lines stacked on top of each other, with varying horizontal resolution depending on the electronics and formats involved. 59.94 fields are displayed per second, where a field is the set of either the even-numbered or the odd-numbered lines. The odd and even fields are displayed alternately, interlacing the full frame, so one full frame, made of two interlaced fields, is displayed about every 1/30 of a second. The USA is an NTSC country, and the NTSC standard applies to our video display.

 

Existing Patents

With regard to intellectual property, YAMAHA owns many electronic music and related patents dating back a quarter of a century. Among its international patents, Yamaha owns US Patent Nos. 4,539,883, 4,584,921, 4,967,635 and 4,974,485, all dating back to the 1970s and early 1980s, covering what Yamaha describes as "core wavetable technologies believed to be in widespread use today."