"A touchpad system which detects user-inputted characters and displays them to an LCD screen."

Project Soundbyte

Our project implements a touchpad input system which takes user input and converts it to a printed character. Currently, the device only recognizes the 26 letters of the alphabet, but our training system could be easily generalized to include any figure of completely arbitrary shape, including alphanumerics, punctuation, and other symbols. A stylus is used to draw the figure/character on the touchpad, and the result is shown on an LCD display. Pushbutton controls allow the user to format the text on the display.

We chose this project because touchscreens and touchpads are prevalent today in many new technologies, especially with the recent popularity of smartphones and tablet PCs. We wanted to explore the capabilities of such a system and were further intrigued by our research into different letter-recognition methods. Finally, we have had previous course experience in signal processing, computer vision, and artificial intelligence; we feel that this project was an excellent way to synthesize all of this knowledge.

Upon completion, we decided to extend our project by interfacing it with a project created by another group. Jun Ma and David Bjanes created a persistence-of-vision display; we use a wireless transmitter to send text which is then shown on the display. The same pushbutton controls may be used to format the text on both the LCD display as well as the POV display.

High Level Design

Rationale

The rationale for the design is primarily to demonstrate the ATmega644's processing ability - the classification process requires a large number of matrix multiplications as well as many accesses to Flash memory. Furthermore, in order to get an accurate result we had to run the ADCs at a rate of at least 1 kHz, which limited the processing time available between samples for computing the output.

We also attempted to make the interface as user-friendly as possible. This was achieved by creating a placeholder for the touchpad so it would not slide around when in use, adding a cursor to the LCD, and giving the user the ability to delete misclassified characters via a pushbutton. We also used a Nintendo DS stylus so that the user does not need to use a fingernail to press on the touchpad, which can be difficult and possibly dangerous.

The success of our design could lead to further development along the lines of signature verification or other handwriting analysis tools.

Logical Structure

Our program is fairly linear and follows a straightforward, logical structure. The steps of operation are as follows.

  1. User writes a character on the touchpad when the MCU indicates that it is ready.
  2. The MCU captures the data and cleans it up (see Software section for more details).
  3. The captured drawing is passed through the neural network. This requires many sequential accesses to Flash memory.
  4. The neural network returns the classified character and it is printed on the LCD display.
  5. Repeat from Step 1.

At any time during this process, the user can also interact with the three pushbuttons to change the contents of the LCD display.
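For concreteness, a minimal sketch of this control flow in C follows; every helper name here is our own placeholder, not an actual routine from TouchpadLetterRec.c.

    /* Sketch of the top-level flow; all helpers are hypothetical
       placeholders for routines in the real program. */
    extern void leds_show_ready(void);      /* green LED on */
    extern void leds_show_busy(void);       /* red LED on */
    extern void capture_drawing(void);      /* ISR-driven sampling, ends on pen-up timeout */
    extern void clean_and_normalize(void);  /* strip noise, scale to 15-by-12 */
    extern char nn_classify(void);          /* streams weights from SPI Flash */
    extern void lcd_put_char(char c);
    extern void poll_pushbuttons(void);     /* backspace / space / clear */

    int main(void) {
        for (;;) {
            leds_show_ready();
            capture_drawing();            /* steps 1-2 */
            leds_show_busy();
            clean_and_normalize();
            char c = nn_classify();       /* step 3 */
            lcd_put_char(c);              /* step 4 */
            poll_pushbuttons();           /* available throughout */
        }
    }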

Background Math

The crux of the software for this project was the A-Z letter classifier. Many possible methods were discussed and tried in MATLAB before we settled on the final design. The classifier needed to be:

  1. Fast. It is unacceptable if the user has to wait any more than a few seconds after inputting each letter.
  2. Reliable. It must be relatively sure of its classification decision when it makes one - we do not want small fluctuations in the input to cause incorrect classification.
  3. Robust. There should be no reasonable worst-case inputs which cause the classifier to perform poorly.

Cross-Correlation

We first discussed the idea of cross-correlating the input with an alphabet of prototypical examples; the winner would be the letter most highly correlated with the input. This had speed issues, since cross-correlating an input with an example is costly - O(n⁴) for n-by-n images - and would have to be done 26 times to get an answer. Additionally, we had seen little literature on this method, and a desire to implement something more interesting drove us away from this solution.

SVM

We put significant effort into evaluating the second algorithm we considered - a support vector machine (SVM). This method splits the cleaned and normalized drawings (normalized this time to 100-by-80) into 80 boxes in a 10-by-8 grid. Each box takes a value equal to the fraction of pixels filled within it, and each value gives the location of the drawing along one dimension of an 80-dimensional space (one dimension per box).

More values, computed as functions of the existing ones, can be added to raise the dimensionality of the problem. This reshapes the locations of the points in space and can make the classifier perform better. We experimented with this, but were unable to find any improvement from it.

Using these steps, a prototypical alphabet of 26 80-dimensional vectors was created by applying the method above to all training sets and averaging them. Classification of a new input then amounts to turning the drawing into a point in the 80-dimensional space and finding the nearest point representing a letter.
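A sketch of this feature extraction and nearest-prototype matching in C is below; the box geometry and function names are our own, since the actual analysis lived in convertToSVM.m and SVMClassify.m.

    #include <math.h>
    #include <stdint.h>

    /* Reduce a 100-by-80 binary drawing to an 80-dimensional feature
       vector: one value per 10-by-10 box, equal to the fraction of
       filled pixels in that box. */
    void svm_features(const uint8_t img[100][80], float feat[80]) {
        for (int bx = 0; bx < 10; bx++)
            for (int by = 0; by < 8; by++) {
                int filled = 0;
                for (int x = 0; x < 10; x++)
                    for (int y = 0; y < 10; y++)
                        filled += img[bx * 10 + x][by * 10 + y];
                feat[bx * 8 + by] = filled / 100.0f;
            }
    }

    /* Classify by finding the nearest of the 26 averaged prototypes. */
    char svm_classify(const float feat[80], const float proto[26][80]) {
        int best = 0;
        float bestd = INFINITY;
        for (int c = 0; c < 26; c++) {
            float d = 0.0f;
            for (int i = 0; i < 80; i++) {
                float diff = feat[i] - proto[c][i];
                d += diff * diff;
            }
            if (d < bestd) { bestd = d; best = c; }
        }
        return (char)('A' + best);
    }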

This method is extremely fast, requiring only 26 calculations of 80-dimensional distance. It was even relatively accurate - approximately 91% of all letters from the test set were correctly classified under a set of cross-validation tests. However, we were very uncomfortable with the reliability of the output. Closely examining the results, we found that many letters which were correctly classified were only just barely determined to be the winners. A graphical depiction of the SVM's performance is shown below.

To win, a letter must be the darkest blue box in its row. It is clear that in many rows it is a pretty close call. We might have gone with the SVM in the final design had we not seen the much more impressive results from the next classification method we considered - the neural network.

Neural Network

A neural network is the classification tool we chose for our final design. Almost all analysis was done in MATLAB until the final step. Neural networks are composed of "perceptron" nodes. A perceptron node specifies a weight vector w containing n values, where n is the number of inputs to the perceptron. Additionally, it has a "squashing function" g(x) to restrict the range of the output. The perceptron's output when given an input vector x is y = g(w · x), the squashing function applied to the dot product of the weight vector with the input.

A single perceptron can be trained with examples with known classifications to allow it to output "yes" or "no" for a particular input, where "yes" corresponds to some output range and "no" corresponds to its complement.

Twenty-six perceptrons together can each be a "yes"/"no" answerer for each letter. However, instead of interpreting the output as a binary value, the actual output value can be examined and the perceptron with the highest output value can be the winner - that is to say, the letter corresponding to that node is what gets selected.

Twenty-six perceptrons alone constitute what is called a single-layer perceptron, which is able to solve classification problems that are linearly separable in the input space. With a 15-by-12 array of inputs (180 in total), inputs are points in a 180-dimensional space. For a particular letter, if it is possible to draw a hyperplane in this space which separates all points corresponding to examples of that letter from all points corresponding to examples of other letters, then the problem of classifying that letter is linearly separable. Unfortunately, for most interesting problems this is not the case.

The solution is what is called a "multi-layer perceptron" - a single layer perceptron whose outputs feed into another single layer perceptron. The layer which accepts inputs and feeds into the output layer is often called the "hidden layer", and the number of nodes which appear in this layer is a design parameter that can be selected. These types of networks are extremely powerful; they have the ability to fit any function of their inputs, not just linear ones. They can find all sorts of correlations in the input features by seeing lots of examples. They must be trained with an algorithm called backpropagation - a great description of this can be found in the References section.
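To make the structure concrete, here is a sketch of the forward pass for such a network; the weight layout, bias handling, and choice of sigmoid squashing function are illustrative assumptions rather than the exact scheme used in NNClassifier.c.

    #include <math.h>

    #define N_IN     180   /* 15-by-12 pixel inputs */
    #define N_HIDDEN 108   /* hidden nodes (see Character Classification) */
    #define N_OUT    26    /* one output node per letter */

    /* Sigmoid squashing function. */
    static float squash(float x) { return 1.0f / (1.0f + expf(-x)); }

    /* Forward pass of a two-layer perceptron.  Each weight row carries
       a trailing bias term.  Returns 0 for 'A' through 25 for 'Z'. */
    int nn_classify(const float in[N_IN],
                    const float w1[N_HIDDEN][N_IN + 1],
                    const float w2[N_OUT][N_HIDDEN + 1]) {
        float hidden[N_HIDDEN];
        for (int h = 0; h < N_HIDDEN; h++) {
            float sum = w1[h][N_IN];               /* bias */
            for (int i = 0; i < N_IN; i++)
                sum += w1[h][i] * in[i];
            hidden[h] = squash(sum);
        }
        int best = 0;
        float bestv = -1.0f;
        for (int o = 0; o < N_OUT; o++) {
            float sum = w2[o][N_HIDDEN];           /* bias */
            for (int h = 0; h < N_HIDDEN; h++)
                sum += w2[o][h] * hidden[h];
            float out = squash(sum);
            if (out > bestv) { bestv = out; best = o; }
        }
        return best;
    }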

Neural networks like these are not magical tools which can solve any classification problem, however. The iterated training process allows the network to settle into a state where it minimizes the number of incorrect classifications it makes. With a small training set, though, the network begins to learn features specific to that training set. This is called "overfitting" - the network fails to generalize in this case and doesn't correctly classify anything but those inputs which are extremely close to the training set members. A tradeoff exists between how confidently the network is able to classify and how well it is able to generalize. The correct balance between the two can be found with proper selection of the number of hidden nodes, as well as the duration of training.

Hardware/Software Tradeoffs

The implemented neural network performs best when the input is of size 15-by-12. This size could have been enforced at the touchpad level, but we opted for a 100-by-80 sample grid instead so that we could accept inputs of any size. This requires a little more calculation when cleaning and normalizing the input drawing, but makes our system much more robust to variability in user interaction.

Standards

We use the Serial Peripheral Interface (SPI) standard when accessing Flash memory.

Patents, Copyrights, Trademarks

All applicable patents regarding touchscreens have expired, as their operation is fairly simple. Some newer patents involve multitouch and more advanced features, but we do not make use of them. The neural network implementation strategy is in the public domain and as such is not copyrighted.

Hardware

Full circuit schematic can be found in Appendix B.

Microcontroller

ATmega644 8-bit MCU.

We use the Atmel ATmega644 microcontroller. The chip is mounted on a custom printed circuit board designed by Bruce Land for ECE 4760, giving us access to four GPIO ports. Furthermore, we built a UART-to-USB board to enable serial communications. This connection is used to gather neural network training data from the microcontroller, and also to transfer trained neural network weights from MATLAB to the MCU.

Touchpad

4-wire analog touchscreen.

A Nintendo DS touchscreen serves as a touchpad. This is a resistive touchscreen consisting of a thin plastic film on top of a glass base. Both surfaces are covered in a resistive coating, and applying pressure with a stylus creates a connection between the two at the point of contact - one can think of this device as a crude potentiometer. There are four pins labeled Y1, X2, Y2, and X1; they are connected to rails at the edges of the touchpad. By applying the voltages in the following table (Vcc high, 0V low), we can read out the x- and y-coordinates of a stylus press. Measurement is performed using the mega644's onboard analog-to-digital converters on pins A0 and A1. A Nintendo DS stylus is used as the input device, though any sufficiently pointed object (including a fingernail) will suffice.

                       Y1            X2            Y2        X1
x-axis measurement     measure       low           unused    high
y-axis measurement     low           measure       high      unused
mega644 pin mapping    A0            A1            A2        A3
                       (ADC in/out)  (ADC in/out)  (output)  (output)
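
In code, each measurement amounts to reconfiguring PORTA according to this table, waiting for the lines to settle, and starting an ADC conversion. The sketch below is illustrative only: the register settings, delays, and clock value are assumptions, it assumes the ADC has already been enabled with a suitable prescaler, and for simplicity it ignores the pushbuttons and switch that share PORTA in the final design.

    #define F_CPU 16000000UL   /* assumed clock; the real value may differ */
    #include <avr/io.h>
    #include <util/delay.h>

    static uint8_t adc_read(uint8_t ch) {
        ADMUX = (1 << REFS0) | (1 << ADLAR) | ch;   /* AVcc ref, 8-bit result */
        ADCSRA |= (1 << ADSC);                      /* start a conversion */
        while (ADCSRA & (1 << ADSC)) ;              /* wait for completion */
        return ADCH;
    }

    uint8_t touch_read_x(void) {
        DDRA  = (1 << PA3) | (1 << PA1);    /* drive X1, X2; Y1, Y2 inputs */
        PORTA = (1 << PA3);                 /* X1 = Vcc, X2 = 0V */
        _delay_us(50);                      /* let the lines settle */
        return adc_read(0);                 /* measure on Y1 (ADC0) */
    }

    uint8_t touch_read_y(void) {
        DDRA  = (1 << PA0) | (1 << PA2);    /* drive Y1, Y2; X1, X2 inputs */
        PORTA = (1 << PA2);                 /* Y2 = Vcc, Y1 = 0V */
        _delay_us(50);
        return adc_read(1);                 /* measure on X2 (ADC1) */
    }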

User Interface

Pushbutton control.

Aside from the touchpad, the user interface consists of an LCD screen connected to PORTC to display inputted characters as well as three pushbuttons to aid in text formatting. The three buttons implement the following operations and are connected to pins A4 through A6.

  1. Backspace
  2. Space
  3. Clear

Finally, two LEDs are used to indicate to the user when the device is ready for input. The green LED is lit in this case. Once the user writes a character, the red LED lights to indicate that the system is processing the input. Any character (or partial character) received at this time will cause both the current and next characters to register incorrectly. Once the classified character is printed to the LCD, the green LED lights again to indicate that the system is ready for another character.

SPI Flash Memory

AT45DB321D SPI Flash memory.

The weights for our neural network take up 88 kB of space - this is more than is available in the mega644 SRAM, PROGMEM, and EEPROM combined. Thus our project requires the use of external memory. We chose the Atmel AT45DB321D Flash memory for various reasons:

  • Ease of use: we use SPI to communicate with the memory.
  • Fast access time: the chip supports up to 66 MHz speeds, far above the mega644's clock speed.
  • Availability of documentation: Atmel's datasheet is extensive and shows exactly how to interface with the memory. Furthermore, FaceAccess, a project from the Spring 2011 iteration of ECE 4760, also used the same chip and examination of their code proved useful in learning how to properly write to and read from the memory.

Voltage regulation for SPI Flash.

The AT45DB321D is a serial-interface sequential-access memory with 32 Mbits of total storage, split into 8192 pages of 528 bytes each. SPI is a 4-wire interface used for data transmission: a chip-select line, SCLK (clock), MOSI (master out, slave in), and MISO (master in, slave out). The MCU is set up as the master, while the Flash memory is the slave. We use pins B4 through B7 for SPI, and we additionally connect B3 to the chip's RESET pin; this resets the chip's internal state machine and is useful to guarantee consistent startup operation.

The memory requires a Vcc between 2.7V and 3.6V. We chose 3.5V so that logic high from the chip will also be read as logic high at a mega644 input pin. This voltage is generated using the circuit shown.

Software

Full source code can be found in Appendix A.

Touchpad Acquisition

The first step in building the system was to create a program which sampled the touchpad with an ADC and collected the samples into an overall drawing. The touchpad requires two different pin configurations depending on whether the x-position or the y-position is being read. Because the goal is to read both x- and y-values almost simultaneously, the microcontroller has to switch this configuration extremely quickly. A small delay must be inserted after each switch, however, or else the pins won't have settled to stable values and the ADC will read out incorrect data.

A timer interrupt is configured to ask the ADC for a sample every half millisecond. As each sample is collected, the program decides which pixel it maps to in a 100-by-80-pixel drawing. The edges of the touchpad seemed unreliable, so we chose to ignore samples in those regions.

If the drawing were a 100-by-80 matrix of uint8_t values, it would not fit inside the 4 kB of SRAM the microcontroller has. Instead, we developed a special format in which each byte represents a column of 8 pixels. This way, a 100-by-80 drawing occupies only 8,000 bits, just under 1 kB. The functions involved in storing a drawing this way can be found in drawing.c and normdrawing.c.
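
A minimal sketch of this packed format is below; the actual bit ordering in drawing.c may differ.

    #include <stdint.h>

    #define DRAW_W 100
    #define DRAW_H  80

    /* Each byte holds a vertical strip of 8 pixels, so the whole
       100-by-80 bitmap fits in 1000 bytes instead of 8000. */
    static uint8_t bitmap[DRAW_W][DRAW_H / 8];

    static void set_pixel(uint8_t x, uint8_t y) {
        bitmap[x][y >> 3] |= (uint8_t)(1 << (y & 7));
    }

    static uint8_t get_pixel(uint8_t x, uint8_t y) {
        return (bitmap[x][y >> 3] >> (y & 7)) & 1;
    }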

We also wrote the program such that when the touchpad hasn't sensed a press for some time, it decides that the drawing is complete and proceeds to process it.

Using the above, we defined a serial protocol between the MCU and MATLAB in which the microcontroller prints an 'N', followed by each row of the captured drawing, followed by an 'E'. All of this is combined in AlphabetAcquire.c. When run simultaneously with getpic.m in MATLAB, MATLAB collects each input drawing and saves it to a cell matrix. Note that the code in getpic.m is specifically designed for collecting alphabets, but can easily be changed to collect sets of anything.
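
The framing is simple enough to sketch; uart_putchar and the '0'/'1' row encoding below stand in for whatever uart.c and AlphabetAcquire.c actually use.

    #include <stdint.h>

    extern void    uart_putchar(char c);             /* stand-in for uart.c */
    extern uint8_t get_pixel(uint8_t x, uint8_t y);  /* packed-bitmap accessor */

    /* Frame a 100-by-80 drawing for getpic.m: 'N', the rows, then 'E'. */
    void send_drawing(void) {
        uart_putchar('N');
        for (uint8_t y = 0; y < 80; y++) {
            for (uint8_t x = 0; x < 100; x++)
                uart_putchar(get_pixel(x, y) ? '1' : '0');
            uart_putchar('\r');
        }
        uart_putchar('E');
    }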

SPI Flash Memory Interface

Regardless of the classification method used for this project, accurate results require a great deal of previously gathered data. For example, one might imagine an algorithm which compares the input against a set of small bitmaps of prototypical examples of each letter. A more complicated mathematical method involving letters as points in a higher-dimensional feature space could also be used, and that would require the data necessary to transform into the feature space and compare distances to the points that each letter represents. It's surprising how quickly one finds oneself hitting the limits of the ATmega644's 4K of SRAM and 64K of Flash program memory - to rectify this, we purchased a 32 Mbit external Flash memory from Atmel.

The data that our classification algorithm requires consists of two large matrices of "weights" for a neural network, each entry a 4-byte float. For our design, this amounts to approximately 88 kB of data. The values in the matrices are predetermined by an algorithm in MATLAB and stored on Flash so that the TouchpadLetterRec.c program can read directly from Flash while performing computations.

This was a four-step process, and was arguably the most challenging part of the whole project.

  1. Give MATLAB the alphabet example data collected earlier. As the code on the MCU does, clean up the examples of stray/erroneous pixels and normalize the letter to fill a 15-by-12-pixel box. Use this "scaled" version of the alphabet data to train a neural network classifier.
  2. Run a program on the MCU which accepts incoming values over serial and writes a page to Flash memory every time it has collected a full 528 bytes.
  3. Simultaneously execute a MATLAB program which sends over serial the weights of the neural network created in (1).
  4. Finally, test that all values were written correctly to Flash with a program on the MCU which reads them sequentially and outputs them over serial to a PC running PuTTY. Note that this step is merely for debugging purposes.

Step 1 is explained in further detail below.

Step 2 required several important software design decisions. A scheme had to be defined for organizing weight values in Flash memory. Because the Flash memory supports sequential reading of values beginning at an arbitrary address, it makes the most sense to place the weights in memory in the order in which they will be consumed by the classification algorithm. The starting addresses of the data stored in Flash were also decided upon and hard-coded into the program at this time.
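
A sketch of such a sequential read on the mega644 follows. The opcode (0x03, Continuous Array Read) and the 24-bit address layout (13 page bits followed by a 10-bit byte offset, matching 528-byte pages) follow our reading of the AT45DB321D datasheet; the chip-select pin and the organization of spiflash.c may differ.

    #include <avr/io.h>
    #include <stdint.h>

    /* Exchange one byte over SPI with the mega644 as master. */
    static uint8_t spi_xfer(uint8_t b) {
        SPDR = b;
        while (!(SPSR & (1 << SPIF))) ;
        return SPDR;
    }

    /* Stream `count` floats from Flash, starting at an arbitrary address. */
    void flash_read_floats(uint16_t page, uint16_t offset,
                           float *dst, uint16_t count) {
        uint32_t addr = ((uint32_t)page << 10) | offset;
        PORTB &= ~(1 << PB4);              /* assert chip select */
        spi_xfer(0x03);                    /* Continuous Array Read */
        spi_xfer((uint8_t)(addr >> 16));
        spi_xfer((uint8_t)(addr >> 8));
        spi_xfer((uint8_t)addr);
        uint8_t *p = (uint8_t *)dst;
        for (uint32_t i = 0; i < 4UL * count; i++)
            p[i] = spi_xfer(0x00);         /* clock data out of the chip */
        PORTB |= (1 << PB4);               /* release chip select */
    }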

Step 3 is straightforward, but some unexpected issues were encountered. MATLAB is able to send values over serial much faster than the MCU can process them, so delays must be forced in MATLAB for the MCU to function properly. Additionally, writing a page to memory is a more time-consuming operation and therefore requires an extra-long pause; MATLAB must know when it has sent a page's worth of values (528 B) and take this longer pause. After some testing, we found that 50 ms between values and 1 s between pages worked fine - these values could probably have been reduced for speed, but the data upload only had to happen once.

Step 4 is worth mentioning because it was essential to debugging. We went through many iterations of development (both hardware and software) before we finally saw the float values we wanted.

Note that there is a great deal of memory still available on the Flash chip. Though the final product is nice when packaged as-is, this extra memory leaves it quite open to extension and improvement.

Drawing Cleaning and Normalization

It is unrealistic to expect a user to draw a letter with the same size and location on the touchpad every time. However, for a classification algorithm which uses pixels (1/0) as inputs, consistency is absolutely essential: it must be able to say, for example, that when a "T" is drawn, something close to this specific set of pixels is usually filled. To solve this problem, approximate bounds must be determined for a tight box around the letter. The obvious algorithm finds the absolute rightmost, leftmost, topmost, and bottommost filled pixels in the drawing, and this would be fine if drawings were reliably neat. The touchpad is not perfect, however, and sometimes registers pixels which are not actually touched, probably due to small imperfections in its pressure contacts or a residual signal caused by switching rapidly between measuring the pad's x and y values.

This not only presented problems for finding a bounding box, but also for classification; surely an algorithm to classify a letter would perform worse if it were spotted with pixels that did not belong to the letter. The raw input from an example "T" is shown below. On the right, a bad bounding box is shown.

If one defines pixels as "connected" when they are horizontally, vertically, or diagonally adjacent, then a drawing can be described as several sets of connected pixels. "Cleaning" a drawing then amounts to removing all pixels which are members of connected sets of size ≤ some number k. We chose to remove connected sets of size k=4 or fewer. A recursive algorithm was developed in NNClassifier.c to achieve this in O(n²k) time and O(k) memory. A cleaned version of the above "T" and the resultant bounding box is shown below.
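
The sketch below captures the idea, though the actual routine in NNClassifier.c is likely organized differently; get_pixel and clear_pixel are assumed accessors for the packed bitmap described earlier.

    #include <stdint.h>

    #define DRAW_W 100
    #define DRAW_H  80
    #define K_MAX    4   /* components of this size or smaller are noise */

    extern uint8_t get_pixel(uint8_t x, uint8_t y);
    extern void    clear_pixel(uint8_t x, uint8_t y);

    /* Collect the 8-connected component containing (x,y), visiting at
       most `limit` pixels; returns how many were recorded in px/py. */
    static uint8_t collect(int x, int y, uint8_t *px, uint8_t *py,
                           uint8_t n, uint8_t limit) {
        if (x < 0 || y < 0 || x >= DRAW_W || y >= DRAW_H || !get_pixel(x, y))
            return n;
        for (uint8_t i = 0; i < n; i++)        /* already recorded? */
            if (px[i] == x && py[i] == y)
                return n;
        if (n >= limit)                        /* big enough to keep */
            return limit;
        px[n] = (uint8_t)x; py[n] = (uint8_t)y; n++;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                n = collect(x + dx, y + dy, px, py, n, limit);
        return n;
    }

    /* Erase every connected set of K_MAX or fewer pixels. */
    void clean_drawing(void) {
        uint8_t px[K_MAX + 1], py[K_MAX + 1];
        for (uint8_t y = 0; y < DRAW_H; y++)
            for (uint8_t x = 0; x < DRAW_W; x++)
                if (get_pixel(x, y)) {
                    uint8_t n = collect(x, y, px, py, 0, K_MAX + 1);
                    if (n <= K_MAX)            /* small blob: erase it */
                        while (n--) clear_pixel(px[n], py[n]);
                }
    }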

The last step necessary is to scale every letter to the same pixel size. It is much easier to write a classifier that is guaranteed an input of, for example, 15-by-12 pixels rather than a variable size. Since we chose to implement a neural network with each of the pixels as inputs, this step was essential. The scaling algorithm is relatively straightforward: break the drawing into 180 boxes (15-by-12) and fill a pixel in the scaled output if 10% or more of the pixels in the corresponding box are filled. The box bounds are typically not at integer values, so we round the divisions. The final normalized "T" is shown to the right.
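
A sketch of that scaling step, assuming the tight bounding box (x0,y0)-(x1,y1) has already been found:

    #include <stdint.h>

    extern uint8_t get_pixel(uint8_t x, uint8_t y);  /* cleaned drawing */

    /* Shrink the bounding box to 15 rows by 12 columns: an output pixel
       is set when at least 10% of the source pixels in its box are set. */
    void normalize(uint8_t x0, uint8_t y0, uint8_t x1, uint8_t y1,
                   uint8_t out[15][12]) {
        float bw = (x1 - x0 + 1) / 12.0f;    /* box width in pixels  */
        float bh = (y1 - y0 + 1) / 15.0f;    /* box height in pixels */
        for (int r = 0; r < 15; r++)
            for (int c = 0; c < 12; c++) {
                int xs = x0 + (int)(c * bw + 0.5f);        /* rounded bounds */
                int xe = x0 + (int)((c + 1) * bw + 0.5f);
                int ys = y0 + (int)(r * bh + 0.5f);
                int ye = y0 + (int)((r + 1) * bh + 0.5f);
                int total = 0, filled = 0;
                for (int y = ys; y < ye; y++)
                    for (int x = xs; x < xe; x++) {
                        total++;
                        filled += get_pixel((uint8_t)x, (uint8_t)y);
                    }
                out[r][c] = (uint8_t)(total > 0 && 10 * filled >= total);
            }
    }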

Character Classification

In the design of our neural network, we chose to have 108 hidden nodes. In general, we found that more was better (up to a point), but the classification algorithm in C requires 207 Flash memory accesses and floating-point multiplications per hidden node - 181 input-side weights (one per pixel, plus a bias) plus 26 output-side weights. Fearing that the classification algorithm would otherwise perform too slowly, we chose a value which resulted in satisfying performance.

The correct duration of training can only be determined empirically: train the network a little, see how it performs in a cross-validation test, train it a bit more, check again, and so on. A MATLAB script was written to do this and return a plot of performance vs. training iterations. The result is shown below.

At first, the network's performance improves drastically as it quickly learns to classify inputs better. Soon, however, a minimum in the error is reached, after which the network begins to demonstrate the overfitting phenomenon. From this graph, it was clear that training on about 10,000 examples was the way to go.

An example of the net's performance on someone with very neat handwriting is shown below on the left. On the right below is the net's worst-case performance - output for the alphabet produced by the person with the messiest handwriting among the people polled for help.

On the left below is the net's performance on someone with more average handwriting. Additionally, on the right below is a plot of the percentage of failures across all test alphabets on each letter. Note that this includes many examples of both neat and messy handwriting, so these results are as expected.

If training of the neural network had to happen on the microcontroller, it would be impossibly slow. The major advantage of neural networks in general, though, is that, once trained, only relatively simple calculations are needed to calculate node outputs and classify input examples. Fortunately for us, this meant that all of the computationally hard parts could happen on a PC in MATLAB, and the final result would just be a set of weight vectors defining the net. These vectors could then be loaded onto our external flash memory to allow the final product to perform all necessary calculations in C entirely on the MCU.

Results

Speed of Execution

Our device performs remarkably quickly. The touchpad is sampled twice per millisecond, giving us high accuracy when recording the user input. Access to Flash memory occurs at 8 MHz, alleviating many of our concerns about the large size of our neural network. As a result, the time elapsed between when the user lifts the stylus from the screen and when the classified character appears on the LCD is no more than a second.

The pushbuttons are both responsive and accurate; we were able to check for button presses every 20 ms and all buttons are appropriately debounced to avoid the detection of multiple keypresses.

Accuracy

The analysis in MATLAB suggests that the accuracy of letter classification will be approximately 92% on average: on the alphabet test sets gathered, 2.15 letters were missed per person, on average. However, analysis in MATLAB does not guarantee the same results on the MCU.

A more qualitative approach was taken to judging the accuracy of the system. Once the final product was complete, people were asked to try it out by writing short sentences. Indeed, letters were missed about 10% of the time for the average writer. However, when letters were missed, the mistake was a reasonable one: 'N' for 'H', or 'G' for 'Q', for example. Even so, we never felt that the user experience was compromised. In a way, an error signals to the user that they need to slow down and write a little more neatly.

The size of the letter written did not affect accuracy, except in the limit of very small letters. This makes sense, since the resolution of our screen is 100-by-80 and drawing small letters will make them impossible to resolve. Similarly, the location of the written letter did not affect accuracy.

The one downfall of the final product is that it isn't very good at recognizing different forms of letters: the letter 'I' must have the top and bottom lines, and the letter 'J' must have a hat, for example. We predicted that this could be an issue and attempted to prevent it by collecting training data for the neural network from various sources, with letters written in various forms. While we gathered data from 16 different people, the variance in their handwriting just wasn't large enough to train the network to handle the full spectrum of letter-drawing styles. One nice result, however, was that many of the training sets were our own examples, so the network learned our handwriting very well, with accuracies around 99%. In a personalized system, then, accuracy would be very near perfect.

Safety

The device is extremely safe to use. Total current drawn through the device is at a level safe to humans, and furthermore all exposed leads are hidden because the solder board is screwed into a plywood backing.

In terms of safety of execution, we implemented an LED system to alert the user when his/her input will not be valid. When the green LED is on, the user is free to write on the touchpad. When the red LED is on, the MCU is busy and touchpad input will not be properly registered.

Interference

Our device does not emit a wireless signal and thus does not interfere with other devices. It is connected to a wall outlet (or 9V battery) via the power supply but we do not expect our device to generate noise on the power line.

The extension to the device involving the POV does emit wireless signals; information about this can be found in the project website for Persistence of Vision Display.

Usability

The device is easy to use, requiring only the ability to write on the touchpad with the stylus. It is also extremely easy to interface with other devices, as can be seen in our extension. Furthermore, our device could be used to perform tasks like signature verification and user recognition were we to expand our classification suite.

Conclusions

Analysis

Our design far exceeded our initial expectations. We were able to achieve near-perfect character recognition, and users who tested our board reported that it was accurate, quick, and easy to use. Though not as fast as a traditional keyboard, our touchpad would serve as an excellent and novel alternative input system.

With more time, we would implement more advanced classification algorithms to detect similarity between single figures. Such a feature could be used to perform tasks like signature verification. Furthermore, we would expand the classification space to include all alphanumerics as well as punctuation marks and symbols, increasing the user's freedom with what he/she can write to the LCD.

We conformed to the specifications of the SPI standard by ensuring that the correct wire interface was used and by monitoring the bitwise transmission and reception of data to check for accuracy.

Intellectual Property Considerations

The files lcd_lib.c and uart.c, as well as their header files, were reused from outside sources. Furthermore, our memory access code was heavily aided by code written by the FaceAccess group from 2011. All parties are appropriately credited in the References section.

We are not reverse-engineering a specific design and as such do not need to worry about patent and trademark issues.

We did not have to sign a non-disclosure agreement to obtain a sample part.

We do not currently plan to patent or publish our project but may consider doing so if we can further improve its speed (currently limited by hardware).

Ethical Considerations

Throughout the entirety of this design project we were consistent with the IEEE Code of Ethics. We ensured that our device was safe to use by clipping down and hiding exposed leads. Furthermore we took great pains to follow proper and safe lab procedure while building the device. Our project does not discriminate and can be used by nearly everyone. Our device currently cannot be used by blind people as it uses only visual indicators, but future development might see the addition of a speaker to alert the user that the device is ready and text-to-speech to read out the inputted character. Unfortunately, time constraints did not permit us to add this functionality but we acknowledge that there may be a few individuals who are unable to use the touchpad.

We made sure to give due credit to all references and when asked for help, gave it to the best of our ability. All documentation is as accurate as possible and throughout the entire design and testing process we strove to constantly work towards the goal of improving the understanding of technology. The lab environment was one in which helpful criticism and aid was freely exchanged and we both accepted and gave out many suggestions over the course of the 5-week project.

Legal Considerations

Our device does not emit wireless transmissions of any sort and is self-contained with the exception of power. Thus, we do not foresee any legal issues surrounding the operation of our device.

Extensions

Wireless transmitter used to communicate with POV display.

As an extension to our project, we interfaced with Jun Ma's and David Bjanes's ECE 4760 project: the Persistence of Vision Display (POV). In their project, they program an MCU to send a message to the floating display over a wireless UART link. The display alone is a magnificent piece of work. However, the need to reprogram the chip to send a new message left room for extension on their end, and this is where we came in.

Jun and David provided us with the code to interface with the wireless communicator. Additionally, they defined a packet protocol for us: each packet specifies a row and column for a single pixel on their 90-by-14 pixel display, as well as a color for it (or no color, i.e. off). We wrote a library, letters.c, for converting letters into pixels and sending the appropriate set of packets to control the POV display, and integrated it into our code. Just as we implemented space, backspace, and clear on our LCD, we did the same for their POV display.
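
As an illustration, drawing one character on the display might look like the sketch below; the packet routine and the 5-by-7 glyph format are our own stand-ins for what letters.c and transmit.c actually implement.

    #include <stdint.h>

    /* One packet = one pixel update: row, column, and color (0 = off)
       on the 90-by-14 display.  Hypothetical wrapper around transmit.c. */
    extern void pov_send_packet(uint8_t row, uint8_t col, uint8_t color);

    /* Draw a 5-wide, 7-tall glyph with its left edge at `col`. */
    void pov_draw_letter(const uint8_t glyph[7], uint8_t col, uint8_t color) {
        for (uint8_t r = 0; r < 7; r++)
            for (uint8_t c = 0; c < 5; c++)
                pov_send_packet(r, col + c,
                                ((glyph[r] >> (4 - c)) & 1) ? color : 0);
    }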

There is a noticeable speed decrease in our system when time is spent sending packets to the wireless communicator, though it is not unreasonable. Still, in order to maintain the standalone nature of our final product while keeping the ability to interface with the POV display, we added a hardware switch to the design. Its output goes to a GPIO, pin A7, which in software controls whether or not time is spent sending packets to the POV.

The result: Persistence of Vision Display with Real-Time Touchpad Input.

Appendices

A. Source Code

Download all files: touchpad_code.zip.

Source Files

  • TouchpadLetterRec.c (8 KB) - Main file for final product. Accepts letter drawings on touchpad, classifies them, and prints them to LCD.
  • FlashMemLoad.c (4 KB) - Loads the weights for a neural network from MATLAB over serial UART and stores them in Flash memory. Pairs with flash_send_weights.m.
  • FlashMemTest.c (2 KB) - Allows for manual inspection of the correct operation of FlashMemLoad.c.
  • AlphabetAcquire.c (8 KB) - Captures a user-drawn input figure and sends over serial to MATLAB. Pairs with getpic.m.
  • NNClassifier.c (11 KB) - Contains methods for neural network classification using the weight values stored in flash memory. Additionally, contains drawing cleanup methods.
  • spiflash.c (5 KB) - All functions related to communication with the Flash memory over SPI.
  • drawing.c (3 KB) - Functions related to the manipulation of a large bitmap.
  • normdrawing.c (2 KB) - Functions related to the manipulation of a small bitmap.
  • lcd_functions.c (3 KB) - Higher-level functions used to change the LCD display in commonly needed ways.
  • uart.c (5 KB) - All methods necessary for communication via serial UART, written by Joerg Wunsch.
  • lcd_lib.c (9 KB) - Basic LCD functions, written by Scienceprog.com.
  • letters.c (6 KB) - Functions related to POV display formatting.
  • transmit.c (2 KB) - Functions used to transmit data over wireless to POV display.

Header Files

MATLAB Scripts

  • getpic.m (2 KB) - Receives a set of input drawings over serial UART and saves them as a single alphabet. Runs in tandem with AlphabetAcquire.c.
  • loadalphs.m (2 KB) - Loads alphabets saved in MAT files which were acquired using getpic.m.
  • show.m (1 KB) - Shows a single input drawing as a plot.
  • showall.m (1 KB) - Shows a set of drawn alphabets as a plot.
  • thr.m (1 KB) - Cleans an input drawing of small disconnected segments.
  • thralph.m (1 KB) - Applies thr.m to an entire alphabet.
  • normalize.m (1 KB) - Automatically resizes the contents of an input drawing so that its contents fill the bounds of the drawing.
  • thrnormalall.m (1 KB) - Applies thralph.m and normalize.m to all loaded alphabets.
  • thrnormalalph.m (1 KB) - Cleans and normalizes a single alphabet.
  • change_size.m (1 KB) - Resizes a drawing with up/down sampling.
  • change_size_all.m (1 KB) - Resizes all drawings in an alphabet with change_size.m.
  • convertToSVM.m (1 KB) - Creates an n-dimensional support vector from an input drawing.
  • SVMExamples.m (1 KB) - Uses alphabet training data to create prototypical SVM examples for each letter.
  • SVMClassify.m (2 KB) - Creates an SVM from training data and classifies each letter in an alphabet test set.
  • SVMPerf.m (1 KB) - Runs a cross-validation test on all alphabets and outputs average performance stats.
  • get_distance.m (1 KB) - Kernel function for SVM.
  • NeuralNet.m (9 KB) - Object-oriented definition of a neural net and functions to operate on it.
  • MakeNNet.m (2 KB) - Creates a neural net with the provided training data.
  • showNetPerformance.m (2 KB) - Runs the neural network on an alphabet and graphically depicts its classification decisions.
  • overfitting.m (3 KB) - Performs an analysis of overfitting on a neural network to determine the optimal training duration.
  • NNetFullEval.m (2 KB) - Runs a full evaluation of a particular set of neural network design parameters on the provided training data through a set of cross-validated tests. Outputs performance statistics.
  • flash_send_weights.m (2 KB) - Sends neural net weight values to MCU via serial. Runs in tandem with FlashMemLoad.c.
  • THE_NET.mat (169 KB) - The neural network that was designed and uploaded to the external flash for use in the final product.

B. Schematic

Download schematic file: touch_schematic.sch (19 KB).

C. Parts List

Item                           Source            Unit Price  Quantity  Total Price
ATmega644 (8-bit MCU)          ECE 4760 Lab      $6.00       1         $6.00
ATmega644 custom PC board      ECE 4760 Lab      $4.00       1         $4.00
9V DC power supply             ECE 4760 Lab      $5.00       1         $5.00
Lumex LCM-1602-D/A (16x2 LCD)  ECE 4760 Lab      $8.00       1         $8.00
AT45DB321D (serial Flash)      Digikey           $4.30       1         $4.30
Nintendo DS touchscreen        Sparkfun          $9.95       1         $9.95
Touchscreen breakout board     Sparkfun          $3.95       1         $3.95
6" solder board                ECE 4760 Lab      $2.50       1         $2.50
2" solder board                ECE 4760 Lab      $2.00       2         $4.00
Pushbutton                     ECE 4760 Lab      free        3         $0.00
Nintendo DS stylus             previously owned  free        1         $0.00
Header pin                     ECE 4760 Lab      $0.05       58        $2.90
Resistor                       ECE 4760 Lab      free        8         $0.00
NPN transistor                 ECE 4760 Lab      free        1         $0.00
Potentiometer                  ECE 4760 Lab      free        2         $0.00
LED                            ECE 4760 Lab      free        2         $0.00
Metal screw                    ECE 4760 Lab      free        4         $0.00
PVC tubing                     previously owned  free        1         $0.00
Plastic casing                 previously owned  free        2         $0.00
Wire                           ECE 4760 Lab      free        a lot     $0.00
Plywood board                  previously owned  free        1         $0.00
Total                                                                  $48.60

D. Tasking

Specific tasks for this project were divided as follows. All tasks not mentioned were performed together, including the touchscreen interface, the thresholding and cleaning algorithms, and the wireless extension.

Stephen

  • Hardware layout and soldering
  • User interface
  • Website design

Justin

  • SPI communication and Flash memory access
  • Neural network design and analysis

Acknowledgements

Special thanks to:

  • Bruce Land for his insight and willingness to help solve even the most bizarre hardware issues.
  • All of the ECE 4760 TA staff for opening the lab for hours on end.
  • Michael Wu and Garen Der-Khachadorian for their help in debugging SPI Flash issues.
  • The 2011 FaceAccess group, whose code aided us greatly in developing our own memory access suite.
  • Ranjay Krishna and Seonwoo Lee for their immense generosity with Ranjay's USB-UART board.
  • Jun Ma and David Bjanes for creating an amazing project that we were lucky enough to link ours with.

More thanks go out to the following people for donating their alphabets to train our neural network:

  • Adam Mendrela
  • Andrew Kerns
  • Bruce Land
  • David Bjanes
  • Eashwar Rangarajan
  • Garen Der-Khachadorian
  • Jun Ma
  • Matt Slemon
  • Michael Wu
  • Pavel Vasilev
  • Ryan Fanelli
  • Saummya Kaushal
  • Seonwoo Lee
  • Siping Wang