By Pamela Chuang (plc26), Anita Gajjala (ag277), Kim Truong (ktt6)

Objective

The goal of this project is to design a useful, fully functional real-world product that efficiently translates finger movements into the letters of American Sign Language.

Background


American Sign Language (ASL) is a visual language based on hand gestures. It has been developed by the deaf community over the past two centuries and is the third most used language in the United States today.

Summary

Our product, the Sign Language Coach (SLC), has two modes of operation: Teach and Learn. Our motivation is two-fold: aside from helping deaf people communicate more easily, the SLC also teaches people ASL. The SLC uses a glove to recognize hand positions and outputs the corresponding ASL letter to an LCD. The glove detects the position of each finger by monitoring the bending of a flex sensor. Below is a summary of what we did and why:


1) Build a flex sensor circuit for each finger. Sew the flex sensors and accelerometer onto the glove to more accurately detect the bending and movement of the fingers.


2) Send the sensor circuit outputs to the MCU's A/D converter to read the finger positions.


3) Implement Teach mode. In Teach mode, the user “teaches” the MCU ASL using hand gestures. To prevent data corruption, the A/D converter output and the associated user-specified letter are saved to EEPROM, which can only be reset by reprogramming the chip.


4) Implement Learn mode. In Learn mode, the MCU randomly chooses a letter it has been taught and teaches it to the user. The user “learns” by matching his hand position to the one the MCU associated with that letter. Using the LCD, he can adjust his finger positions appropriately. The finger positions are matched to the appropriate ASL letter using an efficient matching algorithm.

Rationale and Sources of Our Project Idea

There is currently an abundance of software on the market that helps teach people sign language. This software is effective; however, it is very hard to know whether you are doing a sign correctly and the same way every time. Current products on the market are not very interactive: in order to check yourself, you have to compare your hand against a chart of hand positions on the computer screen. We wanted to make a project that would help people practice and learn sign language without having to look at a screen every time to check a sign for precision.

For our project, we decided to move away from such software and use a glove to implement an interactive sign language teaching program. Our project is called the “Sign Language Coach.” We believe that, with the use of a glove, signs can be made with more accuracy and better consistency. A glove also enables students to practice sign language without having to be next to a computer, making our project portable for anyone who wants to practice.

The concept of a sign language glove began with a high school student who won the Intel competition in 2001. His idea spurred so much interest that sign language gloves are still being researched and developed for the purpose of translation. The sign language glove seems to be a very useful tool to aid in communication with the deaf; a professor at George Washington University has received a government grant to research the capabilities of such a glove. We believed that such a promising translator could also be an effective teacher for those who want to learn sign language.

Logical Structure

Our glove is similar to the other gloves that have been researched and developed. We have created a glove that learns different signs and saves them into the EEPROM of the microcontroller. Our implementation only deals with the 26 letters of the English alphabet that can be directly translated into American Sign Language (ASL). What sets our project apart from other gloves is that, after these letters are programmed into the microcontroller, letters are chosen at random for the student to practice and learn. The LCD display serves as a reference for how much more or less each finger needs to bend to correctly sign a letter. The student must then adjust their hand position to match the prompted letter within a specified range in order to move on to the next letter.

In order to use our product, the user must connect the Atmel Mega32 microcontroller to the computer and use HyperTerminal to program in the different hand positions of the alphabet. A black flip switch should be turned on to signify TRAIN mode; a yellow LED lights up to confirm that the student is in the right mode. To input a position, a letter must be pressed on the keyboard followed by the ENTER key. Following that, the position of the letter must be held for approximately 10 seconds. The user is not expected to know ASL, and can use a table of sign language letters for reference (thereby only having to use the computer once) or call in an American Sign Language expert to help perfect the letters of the alphabet.

After all the letters are programmed in, the black switch can be flipped; the yellow LED turns off, putting the microcontroller into PRACTICE mode. At this point, the microcontroller can be disconnected from the computer, and the unit can be taken anywhere. The user can then start practicing positions by using the LCD display as a reference, adjusting his or her fingers to try to match the letter that appears on the screen. Once the hand position matches the letter on the screen, “MATCH!” appears on the LCD, followed by the next letter.

Hardware/Software Tradeoffs

There were many tradeoffs between hardware and software. One major drawback of a software-only solution is its lack of portability: a computer must be nearby to run it. For this reason we avoided using HyperTerminal in PRACTICE mode. On the other hand, software can greatly expand the capabilities of a sign language teacher. The software programs currently on the market can easily show animations of different words and play sound to help teach the student. By relying mostly on hardware, implementing words and sound in our design proved much more difficult. Software also has far more storage capacity available for extra features; we were unable to implement sound because of the limited memory on the Atmel Mega32 microcontroller.

Relevant Standards

Very few standards relate to our project. The one indirect standard is the RS-232 serial standard, which we use to program the device with the different letters through HyperTerminal. The rest of our project is based on the flex sensors and accelerometer and their outputs to the analog-to-digital converter, which currently have no standards directly related to them.

Relevant Patents

Jameco holds a patent on the flex sensors, five of which we used in our project; however, this does not conflict with the interests of our project. We are simply using their product to implement our glove.

Program Details

Tasks

This is an outline of the important tasks implemented in our code.

  • task1()

     Runs every 10ms

     Determines mode (Train or Practice)

      • Train
        • User specifies a letter to be programmed
        • Read in the flex sensor circuit outputs
        • Wait 3 seconds
        • Collect six samples; discard the first and average the remaining five
        • Store the averaged samples into EEPROM for each letter
      • Practice
        • Randomly generate a letter and prompt it on the LCD
        • Read in the flex sensor circuit outputs for all five fingers
        • Read in the accelerometer output for movement
        • Try to match the hand position to the programmed letters
          • If found
            • Display the percentage differences on the LCD
            • Display the match
          • If not found
            • Display the percentages only
  • task2()

     Runs every 1000ms or 1s

     Handles transmission between the MCU and HyperTerminal

  • task3()

     Runs every 10ms

     Used for debouncing mode switch

  • task4()

     Runs every 1000ms or 1s

     Used to delay the system by 1 s

Important Functions

This is an outline of the important functions implemented in our code.

void gets_int(void)

  • Receive
  • Sets up ISR, which then does all the work of getting a command
void puts_int(void)

  • Transmit
  • Sets up ISR, which then sends one character
  • ISR does all the work
void calc_percent(int my_letter, int pos[5])
  • Calculates the finger % differences between the signed letter and the prompted letter
  • % for each finger = ((actual - current)/actual) * 100, where “actual” represents the values stored in EEPROM
void plusminus()
  • Displays +/- to LCD to indicate which way to flex each finger
  • Negative (-) % means bend fingers to get correct positions
  • Positive (+) % means straighten fingers
int match(int pos[])
  • Matches finger positions to appropriate letter in master alpha matrix
  • Returns int of letter matched (a=0, z=25) or NOTFOUND (-1) if not found

void set_values(letter l, int arr[])

  • Stores programmed finger positions into master alpha matrix
  • Also records order of letters saved to EEPROM in index_set

Major Issues


Below are the main issues we encountered while programming the SLC, and how we resolved them.

TEACH mode. Instead of hard-coding the five finger positions for each ASL letter, we implemented a “Teach” mode in which the MCU stores the user's finger positions and associates them with the user-specified ASL letter. The issue with this mode is that the user's fingers are not perfectly still during the storing process. To resolve this, we allow the user 3 seconds of preparation time. In addition, we take six consecutive samples of the hand position, discard the first, and average the remaining five values for each finger to more accurately capture the sign.


printf("Set up your hand position...\n\r");
printf("You have 3 seconds to prepare your hand...\n\r");
task4(); //delay 1 second
task4(); //delay 1 second
task4(); //delay 1 second

printf("Begin calibrating...\n\r");
//parse ADC0-ADC4
for (j = 0; j < 6; j++) begin
     for (i = 0; i < 5; i++) begin
          ADMUX = 0x60 + i;       //select ADC channel i
          ADCSR.6 = 1;            //start a conversion
          while (ADCSR.6 == 1);   //wait for the conversion to complete
          Ain = ADCH;             //read the 8-bit sample
          pos[j][i] = Ain;        //store sample j for finger i
     end
     task4();                     //wait 1 second between samples
end
printf("End Calibrating...\n\r");

Re-TEACH. If the user needs to modify the hand position of a certain letter, he can overwrite it by re-recording the hand position and the associated letter. However, once he finishes Teach mode, he cannot re-teach the ASL to the MCU because the data is stored in EEPROM. The data can only be erased by reprogramming the chip, which prevents unintentional data corruption.

Matching Algorithm. We need an efficient algorithm that parses the five finger positions and matches them to the appropriate ASL letter. The finger positions are stored in a 26 x 5 two-dimensional array, because there are 5 finger positions for each of the 26 letters of the alphabet. A naive scan compares against every element of the array, a total of 130 comparisons per hand position; in Big-O notation this takes O(n) time, where n is the number of elements in the 2-D array. Running continuously, this is inefficient. A hash table would fetch data in constant time, but we run into multiple collisions. Our solution, described under “Issues Resolved to Optimize Code,” filters the candidate letters one finger at a time.

Hardware Details


Flex Sensor Circuit. We built a prototype board for the glove-MCU interface. The circuit consists of five modules, one for each finger. Each module (see figure below) feeds a flex sensor into an LM7111 operational amplifier to achieve the desired gain and resolution, which we set by choosing appropriate resistor values. The op amp output follows a voltage divider, Vout = Vin(R2/(R1+R2)), where we chose R1 to be 1 kohm and R2 to be 33 kohms with a 5 V input. As the flex sensor resistance varies with bending, the output voltage ranges from about 2.8 V to 4 V.


LCD. In order to make a stand-alone product, we used an LCD to interface with the user. The LCD displays the teaching status in Teach mode and the finger positions of a given letter in Learn mode.

Things We Tried

1. Sound. We tried to use a speaker to pronounce the sound of each letter. However, the sound quality was poor and would require a filtering circuit. Furthermore, using the TV as our speaker carries a significant memory cost.

2. Matching user hand position. In addition to matching the user's hand position to the specified letter, we also tried to match it against all the letters of the ASL alphabet, so that even when the user did not match the specified letter, any other letter being formed would be recognized. In doing so, the program slowed because the algorithm was not optimized.

3. LCD. We initially used a small LCD, which did not have enough space for our interface. As a result, we upgraded to a larger 4x16 LCD.

4. Movement. We need to detect the letters ‘J' and ‘Z', which require movement in addition to a hand position. As a result, we added the accelerometer to detect the movement of the glove.

 

Speed of Execution

The two modes of operation (train and practice) vary in execution speed.  In train mode, the system takes roughly ten seconds to properly calibrate each sign.  In learn/practice mode, the execution speed depends on the user's ability to correctly sign a letter. 

Train mode

In train mode, the system waits three seconds for the user to adjust their hand to properly program a letter. Then, the system collects six samples at a rate of 1 sample per second in order to properly calibrate the sign. Programming each letter takes roughly 10 seconds in total.

 

Practice Mode

The execution speed in practice mode depends on the user's ability.  Each time a user properly signs the prompted letter, a new letter is displayed on the LCD after two seconds.  Since our code is fairly optimized for speed, the time it takes the system to prompt a new sign mostly relies on how fast the user correctly signs the current prompted letter. 

Issues Resolved to Optimize Code

Several issues were resolved along the way in order to optimize the performance of the Sign Language Coach.  These issues were mainly related to the search algorithms we implemented in the code.

One piece of code we optimized was the search algorithm used to determine the letter that matches the current hand position.  Instead of checking to see that the current finger positions matched each of the programmed positions for all letters, we devised an improved scheme.  First, the current thumb position is checked against all of the stored thumb positions.  Only the letters with matching thumb positions are stored in an array called alpha_index.  Next, the index finger positions of the letters in alpha_index are parsed and checked against the current index positions.  Only the thumb and index finger positions that match are left in alpha_index.  This is done for all the fingers until the matched letter is found.  Using this methodology improved the processing speed of the search algorithm. 

In train or practice mode, the user is prompted with a letter to sign.  This letter is randomly chosen from the array of letters programmed to the EEPROM.  The initial design of the unit stored each programmed letter to its mapped integer location in an array (letter A to 0, letter Z to 25).  The random letter, my_index, was always randomly generated to be between 0 and 25, inclusive.  If all 26 letters of the alphabet are not programmed, it is possible that my_index will correspond to an empty entry in the EEPROM letters array.  If this is the case, then a new my_index needs to be generated.  This creates a problem when only few letters are programmed to the EEPROM.  With few letters in the array, there is a low chance that the first generated my_index will correspond to a valid letter in EEPROM.  Regenerating my_index requires a lot of processing time and slows the execution of train mode.  The code for my_index and writing to EEPROM therefore needed to be modified.  Now, the programmed letters are stored in an ordered list.  My_index is randomly generated out of the number of letters written to EEPROM instead of 26.  This ensures that my_index will only need to be generated once, therefore reducing unnecessary processing time.

We also removed the “looks like” functionality from our code to enhance speed.  This functionality displays which letter the current hand position most resembles.  Implementing this feature requires continuous searching and updating of the LCD.  With only a few letters programmed to the EEPROM, the “looks like” feature is feasible, since only a small matrix of finger positions and letters needs to be searched.  However, when the entire alphabet is programmed, the feature noticeably slows down the execution of practice mode.  For now, the “looks like” function is removed. 

Accuracy

The issue of accuracy came into calibrating hand positions, calculating finger percentage differences, and determining hand movement.  When calibrating hand positions, we needed to account for the time required to set up a sign.  The user is given three seconds to adjust their hand, and then six samples of finger positions are collected in six seconds.  The first of these six samples is discarded in case it takes a bit longer for the user to adjust his or her hand.  The next five samples are averaged in order to eliminate any finger variation during program time.  The average finger values are what are stored in the EEPROM for each letter.  This way, all letters need to only be programmed once.  They can be reprogrammed if the person training has made a mistake.

Since it is difficult to sign a letter the exact same way from time to time, we specified a range of acceptable values for a match.  Each finger position can vary anywhere from 10 (almost fully bent) to 65 (almost straight).  We specified our RANGE value to be 5, meaning that if the current finger positions for a letter are all within +/- 5 of the programmed positions, then a match will be recorded. 

The percentages comparing the current hand position against the prompted letter are also calculated in task1(), which runs every 10 ms.  Since the percentages are constantly recalculated, the values updated on the LCD are responsive enough to guide the user in adjusting his or her hand.  Any percentage calculated to be greater than 100% is displayed as a double dash.

In order to distinguish letters with movement (j, z) from similar letters without movement (i, d), we check whether there is any hand movement while signing.  This limits our device to compatibility with American Sign Language only.  We did not check for the specific curvy movement required to sign “j” or the zig-zag movement required to sign “z”.  So, if the user were to sign the correct hand position for “j” but make the zig-zag movement for “z”, the sign would still register as a match for the letter “j”.  Though we realize that this approach is not very accurate, we were unable to perfect it due to time constraints.  

Safety

The circuitry on the board and the CPU have no possibility of shocking the user. The only safety concern is the glove, because it is the only component that directly interfaces with the user. The glove has a flex sensor attached to each of the five fingers. Because the connections are on the outside, there is never a direct electrical path that contacts the skin. In a professional application, the glove would be made of a special material that insulates and provides electrical resistance from the sensors. However, as the project's budget is constrained to $50, we could only afford a regular glove. Still, the glove is sufficient to isolate the skin from the exterior electronic components, thereby preventing a direct electrical path to the human body.

Interference

There seems to be no interference between our unit and other designs.  The hardware does not include any circuitry that would generate interference, and the CPU generates little noise.  All communication is done through wires, so there is little information that can be lost or corrupted.

Usability

The sign language glove is meant for those willing to learn how to sign letters.  The unit allows the user to program his or her own hand positions for each letter, given a visual sign language chart.  The unit is simple enough to be used by anyone.  In program mode, the LCD clearly directs the user how to program a letter by prompting the user with “set hand position” and displaying “done” when programming is complete.  In train or practice mode, the first line on the LCD displays a random letter for the user to try signing.  The next line displays *T *I *M *R *P, where * is either “+” or “-”, representing which way the user needs to flex each finger to match the prompted letter.  A “-” indicates that the user should bend the finger, and a “+” indicates the user should straighten the finger to form a match.  The last line displays the word “MATCH!” if the user's hand position matches the prompted letter.

 

 

Though the unit is simple enough to be used by anyone, there are some limitations to who can use the glove.  The glove purchased for the project is a women's sized glove.  We wanted a snug fit to prevent the user's hand from moving inside the glove and also to keep the sensors directly aligned with the finger bones.  Choosing a tighter-fitting glove restricts our users to those with small hands!

Expectations

Our original expectations for the Sign Language Coach unit outlined in our proposal are listed below:

A.      Basic Functionalities

§          Accurately detect finger movements.

§          Recognize sign language of the most frequently used letters of the alphabet such as {a,e,i,o,u,r,s,t,l,n}.

§          Designate a hand position that indicates the transition from the end of the current letter to the beginning of the next letter.

B.     Enhancements (Time-Permitting)

§          Extend sign language to the entire alphabet.  This will require the addition of accelerometers.

§          Recognize the transition from the current letter to the next letter without an indicative hand position.

C.      Complex Features (Time-Permitting)

§          Concatenate sign language letters to speak words and sentences.

This project met our expectations.  It accomplishes more than we initially planned, including recognizing all the letters of the alphabet.  Still, many add-ons are possible, and we could do a lot to improve our product.  Some improvements include implementing a free mode in addition to the existing train and practice modes, in which the system would recognize any letter being signed.  We also limited our project to the alphabet; implementing a word vocabulary would have been a great addition.  We could also have improved the unit by adding sound, though a more powerful microcontroller with more memory would have been required for the system to successfully “speak” the signed letters.  Our small 16 by 4 LCD also imposed limitations; a larger LCD would have been useful for a more user-friendly display.  Another improvement could have been made to the physical connections of the flex sensors to the microcontroller: the long wires are cumbersome, and cable wire would have looked cleaner.  However, given our time constraints and the limitations of the Atmel Mega32, we believe our project to be a success.

Intellectual Property Considerations

Our glove looks very much like other gloves that have been created.  These gloves serve many different purposes, such as creating MIDI music and controlling music generating programs.  Such gloves use flex sensors that are attached to the glove in the same way as our product.  We used sewing in order to secure the flex sensors to the fingers.  There does not seem to be much room to patent our product, because there are many more sophisticated products that are rumored to be patented very soon.  These gloves have been created by professors for their long-term research.  Time and money constraints for us, however, have prevented us from further developing our product.

 

Ethical Considerations

Our project follows the IEEE Code of Ethics; the relevant points are addressed below. 

1. To accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;

First and foremost, our project is very safe.  There are no connections made directly to the person that uses the glove.  The device operates at a very low voltage and is very unlikely to hurt someone.  The nature of the device is one such that it is unlikely that the public or the environment could be endangered in any way.

2. To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist;

We also realize that the sign language glove has been created in many different ways and do not intend to pursue any patents as there have already been many applications for patents for this product. 

3. To be honest and realistic in stating claims or estimates based on available data;

We are honest about the limited time we had for this project and know that our glove cannot detect motion and hand position as well as many of the gloves being researched.  This is acceptable, because we declare that we are trying to extend the technology that already exists and create a prototype for other applications of that technology.

5. To improve the understanding of technology, its appropriate application, and potential consequences;

Our project has been created so that people will be able to explore different ways to learn sign language and hopefully increase the development of the sign language glove for education.  This project helps to increase the understanding of the sign language glove and its capabilities. 

6. To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others;

We could not have done or have thought of this project without the help of Professor Land and the TAs.  We get criticism from Professor Land and from the TAs everyday.  We seek it and embrace it.

Appendix A: Budget

 

Item                                          Price

Mega 32                                       $8.00
Last year's board                             $2.00
Glove                                         $2.00
Accelerometer (sampled)                       $0.00
LCD                                           $0.00
MAX233ACPP RS232 driver + RS232 connector     $8.00
Op amps                                       $6.55
Breadboard                                    $12.00
9 V battery                                   $4.00
Flex sensors (sampled)                        $0.00

Total                                         $42.55

 

Thank you to Freescale for its generous donation of an accelerometer. 

Also, thank you to Spectra Symbol for generously sampling five flex sensors.

 

 

Appendix B: Work Breakdown Structure

 

Theory

            Pam, Kim, Anita

Hardware

            Building/soldering protoboard - Pam

            Designing and building flex sensor circuit - Pam, Kim, Anita

            Designing of glove - Anita, Kim

            Connections between flex sensor circuit, LCD, MCU - Pam, Kim, Anita

Software

            General Algorithm Pam, Kim, Anita

            Coding Pam, Kim, Anita

Website

Introduction - Kim

High Level Design - Pam

Program/Hardware Design - Kim

Results - Anita

Conclusions - Pam

Appendices - Anita, Pam

 

 

Appendix C: Microcontroller

 

Appendix D: Flow Chart for Practice Mode

 

Appendix F: Flex Sensor Circuit

 

Appendix G: LCD Connections

 

References

Atmel Mega32 datasheet

Bend sensor images - SI, Inc.

The MIDI Glove Team

Electronic Component: The Flex Sensor