"An Ordinary Robot, An
Extraordinary Mind"
By:
Sanjay Aggarwal and Sahil
Kapur
Our project consisted of an elementary eight-neuron neural network that used Hebbian learning to train a robot to respond intelligently to input light stimuli.
First, we chose a task that would clearly demonstrate Hebbian learning. One of the most familiar examples of this kind of conditioned learning is Pavlov's experiment with his dog. When food was offered to the dog, the dog salivated; at first, the sound of a bell elicited no such response. Pavlov then rang the bell whenever he offered food, and after a few repetitions the dog began to salivate at the sound of the bell alone, even when no food was present. Here, food was the unconditioned stimulus and the bell was the conditioned stimulus.

Our experiment works the same way. We first show that shining a light in front of or behind the robot elicits no response, while pressing a pushbutton causes the robot to move forward or backward. We then press the button while shining the light on the robot, and the neural network programmed into the robot associates the light input with the pushbutton input. Soon the robot moves forward or backward, depending on whether the light is behind or in front of it, even in the absence of any pushbutton input. Other neurons in the network play an inhibitory role and prevent the robot from getting too close to the light; they too display learning. Initially, the robot comes very close to a light source before it reverses direction, but over time it becomes more responsive to the light and stops well short of either light source.
In order to reach our end goal, we first programmed a three neuron neural network in C and thoroughly tested it using LEDs and hyperterm. We then extended this to a four neuron network and finally to an eight neuron network, with thorough testing at each level of complexity. Following this, we added the hardware interface. This involved integration of stepper motor control code into our neural network such that the stepper motors would step when the ‘motor’ neurons fired. We then built the chassis and added all the motors, LEDs, pushbuttons and MCU board to the design.
A neuron consists of a cell body with dendrites and an axon that terminates onto a muscle fiber or onto the synapse of another neuron. It receives signals in the form of charges (sodium, potassium, and calcium ions) from other neurons whose axon terminals share synapses with its dendrites. These charges are integrated spatially (across the neurons that synapse onto it) and temporally (charges received over time) and change the membrane potential of the neuron. The membrane potential increases mainly with the influx of sodium and calcium ions, while the efflux of potassium ions drives it back down. An increase in membrane potential is also referred to as a depolarization. Once the membrane potential rises beyond a certain threshold potential, specific to the particular neuron, an action potential is fired. Action potentials are characterized by a strong depolarization, followed by a hyperpolarization (a decrease in membrane potential). The action potential is then transmitted down the axon to the axon terminals, where it is passed on to the next neuron through a synapse.
Action potentials are 'all or none' events, and their generation can be characterized by an 'integrate and fire' approach. That is, a cell integrates the charge signals it receives from other neurons, and only after its threshold is reached does it fire an action potential, thereby transmitting a signal to its postsynaptic neurons. After firing, the neuron goes through a refractory period, during which it cannot fire another action potential even if the signals it receives push its membrane potential beyond its threshold value. At the end of the refractory period the membrane potential is back at its resting state. Not all of the charge entering a neuron from other cells contributes to the rise in membrane potential; some charge constantly leaks out of the neuron, and this is called the leakage current.
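To make the integrate-and-fire idea concrete, the following is a minimal C sketch of one such neuron update; the names here (update_neuron, V_THRESH, LEAK, and so on) are illustrative placeholders rather than the project's actual code, which appears in full in the appendix.

/* Minimal integrate-and-fire sketch for a single neuron (illustrative names only) */
#define V_REST        0.0f
#define V_THRESH      20.0f
#define LEAK          0.999f   /* fraction of accumulated charge kept each cycle (leakage) */
#define REFRAC_CYCLES 2        /* refractory period, measured in update cycles             */

int update_neuron(float *v, int *refrac_left, float input)
{
    if (*refrac_left > 0) {          /* refractory: hold at rest, cannot fire   */
        (*refrac_left)--;
        *v = V_REST;
        return 0;
    }
    *v = LEAK * (*v + input);        /* temporal integration with leakage       */
    if (*v < V_REST) *v = V_REST;    /* never drop below the resting potential  */
    if (*v >= V_THRESH) {            /* threshold crossed: all-or-none spike    */
        *v = V_REST;
        *refrac_left = REFRAC_CYCLES;
        return 1;                    /* spike                                   */
    }
    return 0;                        /* no spike                                */
}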
Neurons connected in a network can be used to display Hebbian learning. Donald O. Hebb, a Canadian neuropsychologist, proposed the following postulate:
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
(http://neuron-ai.tuke.sk/NCS/VOL1/P3_html/node14.html)
The principles underlying his postulate came to be known as Hebbian learning. If two neurons in a network (a presynaptic and a postsynaptic neuron) fire repeatedly in the correct order, the connection between them is strengthened. The order in which the neurons fire is very important, because it can also cause the weight between them to decrease: if the presynaptic neuron fires shortly before the postsynaptic neuron, their connection is strengthened, whereas if the postsynaptic neuron fires shortly before the presynaptic neuron, their connection is weakened. If the time interval between the firings of the two neurons is very large, they cannot be correlated. The following diagram shows how the timing of action potentials (also referred to as spikes) affects the weight between two neurons.
Figure 1: Biological Model for Change in Weights
At first, we had a hard time choosing a final project idea.
Integrate and Fire Equations:
Temporal and spatial integration of voltage: τ is our decay constant; Vrest is the resting membrane potential, which we set to 0; Vexc is the voltage input due to excitatory connections; and Vinh is the voltage input due to inhibitory connections.
The weights were updated using a rule in which i corresponds to the presynaptic neuron, j corresponds to the postsynaptic neuron, Wij is the weight of the connection, and Tij is the weight of the connection multiplied by the learning/unlearning rate.
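In discrete form, consistent with these definitions and with the update code in the appendix, the two rules can be sketched as

\[
\tau \frac{dV_j}{dt} = -(V_j - V_{rest}) + \sum V_{exc} - \sum V_{inh}
\;\;\Longrightarrow\;\;
V_j \leftarrow \mathrm{leakage}\cdot\Bigl(V_j + \sum V_{exc} - \sum V_{inh}\Bigr),
\]
\[
W_{ij} \leftarrow W_{ij} \pm T_{ij}, \qquad T_{ij} = W_{ij}\times(\text{learning or unlearning rate}),
\]

with the sign of the weight change determined by the relative spike timing described in the weight-changing section below.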
We did the design process in stages. The first stage was simply to get the neural network working. We started with a simple three-neuron network and verified that it worked, then added a fourth, inhibitory neuron to make sure we understood how to code both excitatory and inhibitory effects. We then added four more neurons to give bi-directionality to the network. We used the STK500 board, HyperTerminal, and the LEDs to verify each of these steps. Next, we separately integrated the stepper motors, both in hardware and in software. Once motor control was working, we combined it with the neural network so that the motors only moved when an output neuron spiked. Finally, we built the robot and soldered boards for the final product. During the demo, we first demonstrate learning through HyperTerminal, showing how the weights change. We then shine light both behind and in front of the robot to show that it can move in both directions and that both excitatory and inhibitory effects are present. Real-time learning also continues as the robot moves through its light course.
The following is an overview block diagram of the overall project.
Figure 2: Overall Flow of Project
The neural network has four inputs (forward and backward light sensors and pushbuttons) and two outputs (moving forward or moving backward). The following diagram shows the network, where solid lines indicate excitatory connections and dashed lines indicate inhibitory connections.
Figure 3: Schematic of our Eight Neuron Neural Network
An explanation of each of the eight neurons is as follows:
Forward Pushbutton: The input to this neuron is the push button press. It has an excitatory connection to the forward motor neuron with a very large weight since it represents the ‘unconditioned stimulus’ to the network.
Forward Light: This takes as input the voltage from the front light sensor. It has an excitatory connection to the forward motor neuron and an inhibitory connection to the backward motor neuron. Since we decided our neurons cannot subtract input directly, we coded the front sensor to excite the forward motor neuron and inhibit the backward motor neuron.
Forward Shock Light: This also takes the voltage from the front light sensor as its input. The difference from the forward light neuron is that its threshold is much higher. If the input voltage from the front sensor is low, the robot is fairly far from the front light source and in no danger of being burned; but if the robot is very close to the front light source, this neuron spikes very frequently. This warns the robot that it is too close and should not proceed as quickly in the forward direction. When this neuron fires, it inhibits the forward motor output neuron and, in essence, competes with the forward light neuron. Firing also tells the robot that it should move in the opposite direction to be safer, so this neuron has an excitatory connection to the backward motor output.
The backward pushbutton neuron, backward light neuron, and the backward shock light neurons all behave very similarly to the descriptions above but in the opposite directions. Finally, the forward motor output and the backward motor output neurons basically fire depending on which input neurons are firing and the strengths of those connections.
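For reference, this is how the forward motor neuron (Neuron 2) integrates its inputs in the full code listing (see the appendix); the shock neurons report a spike as outputV = -1, so the signs below produce the intended mix of excitation and inhibition:

// Forward motor neuron: weighted sum of this cycle's presynaptic spikes
if (spiketime[2] == 0)                     // only integrate outside the refractory period
    V[2] = leakage*(V[2]
         + outputV[0]*weights[0][2]        // forward pushbutton: excitatory
         + outputV[1]*weights[1][2]        // forward light: excitatory
         + outputV[3]*weights[3][2]        // forward shock (spikes as -1): inhibitory
         - outputV[5]*weights[5][2]        // backward light: inhibitory
         - outputV[7]*weights[7][2]);      // backward shock (spikes as -1): net excitatory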
More information on the process of coding the actual neurons is in the programming details section.
After designing the neural network, we had to design the robot and the motor controls. The robot is driven by two unipolar stepper motors, which step backward or forward depending on which motor neuron fires. We also decided to build a stand-alone Mega32 prototype board to sit on the robot. Furthermore, a very important part of the project was to demonstrate the robot's learning process, so we use HyperTerminal to output the continuously changing weights of the network in real time. During the initial learning phases, HyperTerminal is essential for showing that the neural network is indeed working properly. Finally, we built soldered boards for all of our parts; these are lighter and neater for the final product and take up less space.
There really are no standards we needed to worry about in this project. Our goal was just to build a biologically correct network that simulated learning, so the main “standard” we used was the process of Hebbian learning.
There are no patents or copyrights associated with this project.
We used the stepper motors in lab given by
To drive the motors, we used a ULN2803, which we sampled from TI. This device, also known as a Darlington Array, provides enough current and power to control the motor. The ULN2803 has an advantage over the ULN2003 in that the 2803 has 8 inputs and outputs versus 7 for the 2003. Thus, we only needed to use one ULN2803 to control both stepper motors. We used PORTC from the MCU as the output port to control the motor. Using the Darlington Array IC is very easy since no extra wiring is needed. The output from PORTC goes into the input of the ULN2803. The outputs of the Darlingtons are wired directly into the stepper motors. The following is a schematic of how to use the ULN2803 to drive the motors.
Figure 4: Schematic of ULN2803 and Motor Control
Once the motor connections were made, we had to add the motor code. Before integrating the motors into the network, we used test code to debug our wiring and determine the best way to wire the motors. The following was the test code we used:
#include <Mega32.h>
#include <Stdio.h>
#include <Stdlib.h>
#include <delay.h>

//#define prescale0 3
#define begin {
#define end }
#define t0 20                       // motor task period in ms

void initialize(void);

unsigned int time0;                 // task timer
unsigned char step;                 // current step in the 4-phase sequence

//**********************************************************
//timer 0 compare ISR
interrupt [TIM0_COMP] void timer0_compare(void)
begin
    //Decrement the task timer if it is not already zero
    if (time0 > 0) --time0;
end

//**********************************************************
//Set it all up
void initialize(void)
begin
    //set up timer 0
    TIMSK = 2;                      //turn on timer 0 compare match ISR
    OCR0 = 250;                     //set the compare register to 250 time ticks
    TCCR0 = 0b00001011;             //prescaler to 64 and turn on clear-on-match

    //set up the ports
    DDRB = 0x00;                    //PORT B is an input port for the pushbuttons
    PORTB = 0xff;                   //enable pull-ups
    DDRC = 0xff;                    //PORT C is an output port for motor control
    PORTC = 0x00;

    //init the task timer
    time0 = t0;
    step = 0;

    //crank up the ISRs
    #asm
        sei
    #endasm
end

void main(void)
begin
    initialize();
    while (1)
    begin
        if (time0 == 0)
        begin
            time0 = t0;
            if (~PINB == 0x01)      //move motors forward
            begin
                if (step > 0) step--;
                else step = 3;
            end
            else if (~PINB == 0x02) //move motors backwards
            begin
                if (step < 3) step++;
                else step = 0;
            end
            //drive one coil of each motor per step (low nibble: motor 1, high nibble: motor 2)
            if (step == 0) PORTC = 1 + 128;
            if (step == 1) PORTC = 2 + 64;
            if (step == 2) PORTC = 4 + 32;
            if (step == 3) PORTC = 8 + 16;
        end
    end
end
The above test code was adapted from a previous vertical plotter final project (http://instruct1.cit.cornell.edu/courses/ee476/FinalProjects/s2001/vp2/). When step is increasing, the motor moves in one direction, and when step is decreasing, it moves in the opposite direction. Thus, it is easy to make the motors move forward or backward depending on the sequence of the pulses.
The LEDs used as light sensors were the green LEDs found in the ECE 476 digital lab. These LEDs are cheap, directional, sensitive, and easy to use, making them great light sensors. At first we didn't know that LEDs can also be used as sensors, but after talking with
Figure 5: Hardware for LED input in ADC
Before adding the low-pass circuit, there was some noise affecting the ADC, which caused problems for the neural network. After adding the RC circuit, most of the noise was eliminated and the ADC input worked well. We usually got readings on the order of 0.1 V with a regular light at a reasonable distance, but shining a light directly into the LED produced readings as high as 1.5 V.
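For completeness, the cutoff frequency of a first-order RC low-pass filter is given by the standard relation below; the example values are hypothetical, since the actual component values are the ones shown in Figure 5.

\[
f_c = \frac{1}{2\pi R C}, \qquad \text{e.g. } R = 10\ \mathrm{k\Omega},\ C = 1\ \mu\mathrm{F} \;\Rightarrow\; f_c \approx 16\ \mathrm{Hz},
\]

which passes the slowly varying light level while attenuating higher-frequency noise.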
Overall, the hardware for this project was relatively easy to design. Most of the problems occurred in having good connections and understanding the mechanics of our design.
Another aspect was the mechanical design of the project. The base of the robot is an 8.5 x 5 inch piece of wood. We used brackets to secure the motors to the front of the robot. Each of the two motors was attached to a wheel, and we used an omniwheel as the back wheel because it has very little friction and is thus easy to move.
Even though the design was fairly simple, it caused some problems, the two major ones involving torque and friction. At first, the motors did not seem to provide enough torque, because the robot was not able to move. To reduce the friction the motors had to overcome, we used the omniwheel as the back wheel, but the robot still had difficulty moving. We tried many types of front wheels: rubber ones had too much friction, while smooth plastic ones slipped too much. The wheels we used at first were also fairly large, so the motors delivered less force at the rim. After many trials, we finally found a good compromise between friction and torque: smaller wheels with grooves that provided enough grip on a surface, but not too much. The smaller wheels also let the motors deliver more force, which helped considerably. We also had to pay close attention to the weight of the robot so as not to make it too heavy, since we did not want to use more powerful and expensive motors.
One important consequence of our mechanical design is that the front LED could approach a light source closer than the back LED could (because of the design and position of the soldered board on the wood base). Thus, we had to do separate tests of the network to determine how the thresholds should be changed for the back and the front because of this asymmetry.
We programmed the network in various stages and tested each part before integration. Each neuron in the network had the following variables (or array elements) associated with it, summarized in the declarations below:
outputV: element of an integer array indicating whether a neuron has spiked.
weights: row of the weight matrix (a matrix of float variables) specific to the particular neuron.
V: float variable holding the membrane potential.
spiketime: element of a char array, set to refrac when the neuron spikes and counting down the time remaining in its refractory period.
vthresh_(motor/shock/light/push): integer variable denoting the threshold for a specific kind of neuron.
leakage: constant factor less than 1.
Lrate/ULrate/ULrate_inhib: float variables indicating how fast a particular neuron in the network learns or unlearns.
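These map directly onto the declarations in the code listing (see the appendix):

int   outputV[8];        //1 (or -1 for the shock neurons) on a spike, 0 otherwise
float weights[8][8];     //weights[i][j] is the connection from neuron i to neuron j
float V[8];              //membrane potentials
char  spiketime[8];      //countdown of each neuron's refractory period (in ms)
int   vthresh_motor, vthresh_shock, vthresh_light, vthresh_push, vthresh_shockback;
float leakage;           //constant factor slightly below 1
float Lrate, ULrate, ULrate_inhib;   //learning and unlearning rates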
The entire network was programmed in four stages, described below.
Stage 1:
Structure of Network: This network consisted of three neurons: a pushbutton neuron (Neuron 0), a light neuron (Neuron 1), and a motor neuron (Neuron 2). Each neuron was coded separately because their membrane potentials were updated differently and thus could not be effectively generalized in a loop. In the case of the pushbutton neuron, a button press added a factor of its threshold to its membrane potential. We kept this factor at 0.8 so that the pushbutton neuron would not fire on every press; the pushbutton had to be pressed twice for it to fire. The light input neuron added the input from pin A.0 of the ADC on the MCU. The motor neuron updated its membrane potential by adding the weights of its presynaptic neurons that had fired in the present cycle to its current membrane potential. After a neuron's membrane potential was updated, the code checked whether it exceeded that neuron's threshold voltage. If it did, the outputV variable for that neuron was set to 1, indicating a spike, and spiketime was set to refrac, indicating that the neuron had entered a refractory period of length refrac (in ms). In the case of the motor neuron, the motor timer mtimer was also set to 0 so that the motor control code could begin execution.
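Excerpted (lightly condensed, with the begin/end macros shown as braces) from the full listing in the appendix, the pushbutton neuron follows the pattern just described:

// Neuron 0, forward pushbutton: each press adds 0.8*vthresh_push,
// so the button must be seen on two update cycles before the neuron fires
if ((~PINB) == 0x01)                    // forward pushbutton pressed
    if (spiketime[0] == 0)              // not refractory: integrate the press
        V[0] = leakage*(V[0] + (.8*vthresh_push));
    else
        V[0] = vrest;
if (V[0] >= vthresh_push) {             // threshold reached
    outputV[0] = 1;                     // record the spike
    spiketime[0] = refrac;              // enter the 80 ms refractory period
    V[0] = vrest;
}
else
    outputV[0] = 0;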
Changing of Weights: This section was the most complicated part of the code, because one had to be very careful about timing while updating neurons. The weights between neurons were updated to implement Hebbian learning. The model used to update the weights is an approximation of the biological model shown below.
Figure 6: Approximation to the biological model of the weight change graph.
Weights between two neurons needed to be updated from the viewpoint of both the presynaptic neuron and the postsynaptic neuron. There were three cases that needed to be considered, which are diagrammed and explained below:
Case 1:
Suppose that the spike of the presynaptic neuron occurs at the start of every other cycle; this is what 'timing reference' means in the diagram. The timing in the program is set up so that the refractory period is twice the length of the cycle. The small vertical slashes between the two spikes of the presynaptic neuron indicate a new cycle. So if the postsynaptic neuron fires as seen in the figure below, it has fired after the presynaptic neuron, and the weights must be increased. In the code, this case is handled by checking whether the spiketime of the postsynaptic neuron is greater than that of the presynaptic neuron. Since spiketime counts down, a greater value for the postsynaptic spiketime means that it fired after the presynaptic neuron.
Figure 7: Order of spike occurrences in pre- and postsynaptic neurons as denoted in Case 1.
Case 2:
If the postsynaptic neuron fires as seen in the figure below, it has fired before the presynaptic neuron for that cycle, so the weight between the two neurons must be reduced. This is done by checking that the postsynaptic spiketime is nonzero (so that it did not finish spiking a very long time ago) and less than the presynaptic spiketime.
Figure 8: Order of spike occurrences in pre- and postsynaptic neurons as denoted in Case 2.
Case 3:
In this case we set the timing reference on the postsynaptic neuron; that is, the spiking of the postsynaptic neuron occurs at the start of every other cycle. Here the only time we need to adjust the weights is if the presynaptic neuron has fired before the postsynaptic neuron. We can check for this by seeing whether spiketime for the presynaptic neuron is not 0. If the check is true, we increase the weight between the presynaptic and postsynaptic neurons. In a given cycle we can never have the presynaptic neuron fire after the postsynaptic neuron, due to the order in which the code executes within a cycle.
Figure 9: Order of spike occurrences in pre- and postsynaptic neurons as denoted in Case 3.
Each increase or decrease in weight is multiplied by a learning or unlearning rate factor, respectively. These factors control the speed with which the network learns. Also note that, to prevent very fast saturation of the weights, the unlearning rate is greater than the learning rate.
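Condensed into one place (the appendix code unrolls this for each connection), the rule for a presynaptic neuron i and a postsynaptic neuron j reads:

if (outputV[i] == 1) {                         // presynaptic neuron fired this cycle
    if (spiketime[j] >= spiketime[i])          // Case 1: postsynaptic fired at or after it -> learn
        weights[i][j] += weights[i][j] * Lrate;
    else if (spiketime[j] != 0)                // Case 2: postsynaptic fired shortly before it -> unlearn
        weights[i][j] -= weights[i][j] * ULrate;
}
if (outputV[j] == 1) {                         // postsynaptic neuron fired this cycle
    if (spiketime[i] != 0 && outputV[i] == 0)  // Case 3: presynaptic fired on an earlier cycle -> learn
        weights[i][j] += weights[i][j] * Lrate;
}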
Stage 2:
Structure of Network: In this stage we added an inhibitory neuron (Neuron 3) to the network. This neuron also receives its input from the same ADC pin as Neuron 1, but its threshold is set much higher and its outputV variable takes the value -1 when it spikes. The role of this neuron is to stop the motor from driving the robot too close to the light. It achieves this by reducing the membrane potential of the motor neuron (Neuron 2) to which it is connected: since its output spike is assigned a negative value, the motor neuron's membrane potential is updated by subtracting the weight between this neuron and the motor neuron.
Changing of Weights: The weight adjustments for this additional neuron are similar to those described in the previous stage. The learning rate for this neuron is set higher than the unlearning rate, because we do not want the robot to quickly forget its proximity to the light source and crash into it. We also want the robot to learn quickly when it is supposed to stop while nearing a light source, so the starting weight between this neuron and the motor neuron is higher than the starting weight between the regular light neuron (Neuron 1) and the motor neuron. The threshold voltage for this neuron is also set higher, because we want it to inhibit only when the robot is too close to the light source.
Stage 3:
Structure of Network: We then added another set of four neurons to control backward motion of the robot in response to a second LED that sensed light coming from the rear. This four-neuron network is coded very similarly to the network described in stages 1 and 2; the light neurons in this case receive their input from pin A.1 of the ADC.
Changing of Weights: The weights in this network were also adjusted as in stages 1 and 2. The connection between the backward inhibitory neuron and the motor neuron in this network has a higher weight than the corresponding connection in the forward-motion network, due to the asymmetric geometry of the robot.
Stage 4:
Structure of Network: This was a very complex stage because it involved integrating the four forward neurons with the four backward neurons. During the integration the following connections were made between the two four-neuron networks:
Backward Light Neuron – Forward Motor Neuron: This connection inhibits the robot from moving forward when the backward light neuron is spiking.
Forward Light Neuron – Backward Motor Neuron: This is also an inhibitory connection; it prevents the robot from moving backward when the forward light neuron is spiking.
Forward Shock Light Neuron – Backward Motor Neuron: This excites the robot to move backward if it is too close to the forward light.
Backward Shock Light Neuron – Forward Motor Neuron: This excites the robot to move forward if it is too close to the backward light.
Changing of Weights: Weights are changed as discussed in stages 1 and 2.
Timing Section:
Neural network time: All the neurons in the network and their weights are updated every 40 ms. The refractory period was set to 80 ms; it had to be at least twice as long as the network update period. The reason for this can be seen from the model used to change the weights: as the diagram below shows, if the refractory period is shorter than the neural network time, the areas for learning and unlearning begin to overlap.
Figure 10: Timing of Neural Network
ADC time: This is how often the ADC is polled to get voltage values from the LEDs. It has to equal the network update period, because the voltages from the LEDs are the inputs to the network.
Motor Time: To move a stepper motor through one step, we need to send it four consecutive pulses of equal width at equal time intervals. To overcome inertia and show a suitable amount of movement for a single output spike, the number of motor steps (motorsteps) needed to be adjusted; we found that we got the best results when we executed three motor steps per spike of the output motor neurons (Neurons 2 or 6). The amount of time between pulses to the motor also needed to be adjusted for optimum motor movement. We did not want to make this time too long, because then the motor timing would significantly affect the timing of other parts of the neural network; however, we had to make the time between pulses long enough for the motor to move through an entire step without 'shivering'.
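Putting those numbers together gives the figure quoted in the results section:

\[
3\ \text{motor steps} \times 4\ \frac{\text{pulses}}{\text{step}} \times 20\ \frac{\text{ms}}{\text{pulse}} = 240\ \text{ms of motor movement per output spike}.
\]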
User Interface (HyperTerminal): To check our code, we used the LEDs and HyperTerminal to debug. HyperTerminal was a great debugging tool because we could watch how the weights changed and see whether the program was functioning correctly. During the demonstration we use HyperTerminal while training the robot to associate the light input with the pushbutton input, and to show learning and unlearning during this training process. The weights displayed on the screen are multiplied by 1000 so that we do not need to display decimal points.
In the end, all three portions of the project accomplished the goals we set at the beginning. The mechanical design produced a robot that moves appropriately in response to the spiking of the motor neurons. The LED circuitry cut off most of the high-frequency noise and provided clean signals for the MCU to determine the light intensity at both the front and the back of the robot. The motor wiring worked well: we were able to get enough torque and pulse the motors correctly. The crux of our project, the neural network, also worked as we hoped. Since the network inherently has some randomness, we were never entirely sure how it would react to our inputs; the results varied from trial to trial. This was appropriate, because a Hebbian learning network has to settle into its own level of performance, which cannot be judged against fixed targets. It also made debugging harder, since it was difficult to know whether there was a problem or where to start looking. Most debugging consisted of ensuring that the correct neurons fired, using the LEDs. We also used HyperTerminal extensively to examine the changing weights as we varied the amount of light entering the LEDs. Based on this debugging, we were able to determine reasonable values for the thresholds of the different neurons and appropriate connections between them that showed correct learning.
Outside factors such as the type of light source and the amount of ambient light also added more randomness to the network. We tried to eliminate the effect of ambient light as much as we could, but the system would react a little differently when tested in daylight versus at night. Furthermore, brighter or dimmer light sources would also add variability. To deal with this, we would have to measure the voltage output of the LED whenever we used a different light source to determine if the variability was significant enough for us to change some parameters in the code. Usually though, this was not the case and our network could handle many different situations and light sources within reasonable limits.
By splitting the neural network into stages, we were able to add complexity as the project went along. We perfected smaller networks first, and only after they worked did we build and test the entire eight-neuron network. Our goal at the beginning was to get anywhere from four to eight neurons working together, and by the end we had accomplished this; we met the specifications we set out to meet.
Speed overall was not a major problem in our program. By outputting a timer value to HyperTerminal, we found that it took on average 3-4 ms to get through the entire neural network each cycle. This relatively long time is a consequence of some of the variables, such as the weights, being floats: float multiplications take many cycles, and we had to do many of them. This was not a limiting factor, though, because each motor pulse needed to be about 20 ms for the motors to behave nicely; as explained above, we therefore executed the neural network code every 40 ms. This spacing was also needed because one motor movement took about 240 ms, and we did not want to interleave many neuron firings, since we wanted each individual firing to be visible in the robot's motion. Because we ran through our code every 40 ms, the ADC, the motors, and the neural network showed no timing problems.
One speed issue we did have to consider was outputting the weights to HyperTerminal. It takes about 1 ms to transmit one character. Since the motor code had to execute every 20 ms, the maximum time available for the neural network and HyperTerminal code together was 19 ms. With the neural network taking about 4 ms, that left roughly 15 ms for output to HyperTerminal. There were times when we forgot just how long HyperTerminal took and output too much, which caused the motor to pulse incorrectly and the robot to stop moving. After realizing that we had to be careful about how much we output, we limited ourselves to at most 10 characters (two weight values) to be safe. After doing this, all parts of the code worked correctly and within the appropriate time limits.
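As a quick check of this budget:

\[
19\ \text{ms budget} - 4\ \text{ms (neural network)} = 15\ \text{ms} \approx 15\ \text{characters at } 1\ \text{ms per character},
\]

so limiting the output to 10 characters (two weight values) leaves a comfortable margin.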
Complete accuracy is hard to define for the neural network, because it behaves a little differently in every trial. Overall, though, we had a clear sense of what should happen over time. At first, the robot should move only when a pushbutton is pushed. After the robot is trained, it should move with light alone. If a light source is far away, the robot will slowly move toward it, and as it gets closer the excitatory neurons fire more, causing the robot to move faster. As the robot gets even closer, the inhibitory connections kick in more strongly and the robot should start slowing down again. When the light is very close, the robot should stop moving and actually start slowly moving in the other direction. Eventually it reaches a point where it just moves back and forth (jittering) but does not approach the light any further.
The above scenario occurred in many trials, but there were always trials where one or two weights remained much weaker than others even after learning. In those trials it took longer for inhibition to take hold, and the robot would keep trying to move closer to the light even when it was already very close. In biological terms, this is somewhat like how some animals survive and some do not: those that adapt and learn more quickly have a better chance of surviving than those that do not.
In other trials, inhibition would be very strong and the robot would stop moving when it got close to the light right away. Thus it learned very quickly an appropriate safe distance from the light. By looking at the weights, we could see how all the excitatory and inhibitory weights evolved throughout a trial.
The speed of learning itself also varied greatly from trial to trial. While animals can learn, they can also unlearn, and in fact the unlearning rate is greater than the learning rate for stability. During the initial learning phase, when we trained the robot to respond to light while holding the pushbutton, we noticed great variation in learning times. In some trials, the network would learn very quickly, keep learning, and never unlearn; the excitatory weights increased relatively quickly. In other trials, the network kept learning and unlearning, and the weights increased only slowly, so the robot still learned but took much longer. In still other trials, the robot would mostly unlearn and learn very slowly or never learn at all; the excitatory weights were barely trained and the robot would not respond to light. We were able to track all these weight changes during the learning phase via HyperTerminal. This mirrors real biological situations, in which some animals learn much more quickly than others and some may never learn at all. We could see mathematically, through our weights, how learning occurred in our robot, and many of these same situations can be seen in the real world.
Accuracy was therefore hard to pin down, but in every trial we could see evidence of learning, and in almost all trials the robot eventually behaved as it should. It would usually move slowly when the light was far away and faster as it approached the light source. If inhibition was initially strong enough, it would slow down again as it got too close to the light. If inhibition was not strong enough, the robot would still move quickly near the light, but then the inhibitory neurons would fire frequently, causing the inhibitory weights to increase; the robot would then slow down or stop, just later than if inhibition had been strong to begin with. After this training, if the robot was put back in the middle of the course, it would slow down sooner as it approached the light, showing that it remembered the previous run. Thus, not only could we watch the robot learn in real time, it also retained what it learned through the strengths of certain weights over others. While the individual weights might differ from trial to trial, the overall behavior followed what we hoped for. Our neural network therefore modeled the situation we set out to model, and we met the specifications we set for ourselves.
This project did not raise many safety concerns. We had to be careful not to touch the motor wires, because the motors run off 12 V. The main human contact is pressing the pushbuttons, and no part of the project is attached to the user, so safety issues were minimal. We also had no interference problems, because the project has no wireless components.
Our final product is easy to use, but the theory behind it is somewhat complicated. In order to understand what the project is supposed to do, we have to explain how we set up the neural network and what it is supposed to do. We could also show how it should model the real-world situation of a moth flying near a light. Once this explanation is done, any user could easily experiment with the project to see exactly how the neural network evolves over time and how learning actually occurs.
In conclusion, our final project met all the specifications we initially set. Our goal was to design a neural network that would model a real-world phenomenon. We decided to use light and to model a moth's behavior with a robot whose movement is controlled by the outputs of a neural network. At first we only wanted to build a four-neuron network that would move the robot in one direction, along with a robot and motor control to show the output and to take advantage of the microcontroller's portability. After we got the four neurons working, we realized we could extend the network to eight neurons and make the robot bi-directional, with two output neurons. We therefore met all of our goals by building an eight-neuron neural network with several Hebbian learning pathways that models the learning behind a moth's attraction to light, and by building a robot with motor control that moves as a moth would in response to light.
While we are happy with our results, there are some things that could be improved. First, we could have built a better mechanical set-up whose movement is a little smoother than our present robot's. Also, our robot currently only moves bi-directionally, forward or backward. With more time, we might have built a much larger neural network (about 16 neurons) to allow movement in all four directions. This would also require more sensors, output neurons, and links, making the network considerably more complicated. It would be a good next step, but we felt that for one month eight neurons was complicated enough, and we were especially happy to get eight working because at first we thought even four would be difficult. We also could have put a separate battery on the robot to power the motors instead of using the bench power supply, so that the robot would be completely autonomous and would not need wires to an outside supply.
We did not really have any legal or intellectual property considerations to worry about in this project. While there were no standards we had to follow, we did try to model the biology as closely as we could; instead of standards, we used the biology behind neurons and Hebbian learning as the guide for how to build the project.
We made sure we followed the IEEE Code of Ethics throughout this project.
1. To accept responsibility in making engineering decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment
We made sure we were within the power limits of the motors and ensured that the chips would not get too hot. We taped down any sharp ends and nails so a user could not get injured using the robot, and we also taped down loose wires and the boards to ensure safe handling.
3. To be honest and realistic in stating claims or estimates based on available data
We were honest in this report in explaining our project and how we designed it. We were realistic in saying that the robot behaved as we hoped most of the time, but that there were a few times when it would not learn.
5. To improve the understanding of technology, its appropriate application, and potential consequences
One of the goals of our project was to build the neural network so that users can learn more about how biological systems actually work. We also wanted to show that biological neural networks can be very applicable to non-biological scenarios and can be important in any system that needs to learn.
6. To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations
Building the neural network really helped us understand how our brains and neurons actually work. We gained much knowledge of how individual neurons work and of their features, and we also gained insight into how Hebbian learning works.
7. To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others
Throughout the project, we were always open to suggestions on how we could improve our project and to pointing out things in it that did not seem right.
The following is a commented version of our entire code:
/*
ECE 476 Final Project - Neural Network
Sahil Kapur (skk23)
Sanjay Aggarwal (ska7)
*/

/*
Numbering system for neurons:
N 0 - forward pushbutton (B.0)
N 1 - forward light
N 2 - forward output motor
N 3 - forward shock input
N 4 - backward pushbutton (B.1)
N 5 - backward light
N 6 - backward output motor
N 7 - backward shock input
*/

/*
PORTA: ADC
PORTB: Pushbuttons
PORTC: Motor Control
PORTD: Hyperterm
*/
#include <Mega32.h>
#include <stdio.h>
#define begin {
#define end }
#include <delay.h>      // delay_ms

#define t2 40           //neural network timing
#define t3 40           //adc polling
#define refrac 80       //refractory period
#define t4 20           //motor timer
#define motorsteps 3    //number of times code goes through motor code for each output spike

char Ain1, Ain2;                          //ADC inputs
int adctime, timer, mtimer, mtimer_back;  //timer variables
float weights[8][8];                      //weight matrix
float Lrate, ULrate, ULrate_inhib;        //learning, unlearning rates
int outputV[8];                           //1 if spike, 0 if not spike
char spiketime[8];                        //timer connected to refractory period
float V[8];                               //membrane potential
float leakage;                            //leakage current (based on RC constant)
int vthresh_motor, vthresh_shock, vthresh_light, vthresh_push, vthresh_shockback;  //thresholds for each type of neuron
int vrest;                                //resting potential
int i, j;                                 //counters for for loops
char led;                                 //led output
char step;                                //for motor stepping
char for_count, back_count;               //step counters for motor code

void initialize(void);
//**********************************************************
//timer 0 compare ISR
interrupt [TIM0_COMP] void timer0_compare(void)
begin
    //Decrement the timers if they are not already zero
    if (spiketime[0]>0) --spiketime[0];
    if (spiketime[1]>0) --spiketime[1];
    if (spiketime[2]>0) --spiketime[2];
    if (spiketime[3]>0) --spiketime[3];
    if (spiketime[4]>0) --spiketime[4];
    if (spiketime[5]>0) --spiketime[5];
    if (spiketime[6]>0) --spiketime[6];
    if (spiketime[7]>0) --spiketime[7];
    if (adctime>0) --adctime;
    if (timer>0) --timer;
    if (mtimer>0) --mtimer;
    if (mtimer_back>0) --mtimer_back;
end
//**********************************************************
void main(void)
begin //begin main function
    initialize();
    while(1)
    begin //begin while(1) loop

    //************************************* ADC PORTION **************************************
    //poll light inputs through ADC
    if (adctime==0)
    begin //begin adctime if
        adctime = t3;
        ADCSR.6 = 1;            //front light input into ADC0
        while (ADCSR.6);
        Ain1 = ADCH;            //front light input
        ADMUX = 0b01100001;     //set A.1 for next input
        ADCSR.6 = 1;            //second light input into ADC1
        while (ADCSR.6);
        Ain2 = ADCH;            //back light input
        ADMUX = 0b01100000;     //set A.0 for next ADC input
        ADCSR.6 = 1;
    end //end adctime if
    //************************************ end of ADC part ***********************************
    //************************************* motor control here *******************************
    //motor moving forwards
    if (mtimer == 0 && for_count > 0)   //occurs every 20 ms, 80 ms total motor step, 3 times through per output spike
    begin //begin mtimer if statement
        mtimer = t4;                    //reset mtimer
        if (step == 3) PORTC = 8+16;    //2 lines are asserted on PORTC, 1 for each motor
        if (step == 2) PORTC = 4+32;
        if (step == 1) PORTC = 2+64;
        if (step == 0)
        begin //begin step==0 if statement
            PORTC = 1+128;
            for_count--;                //decrement forward motor counter
            step = 4;
        end //end step==0 if statement
        step--;                         //decrement step
    end //end mtimer if statement

    //motor moving backwards
    if (mtimer_back == 0 && back_count > 0)
    begin //begin mtimer_back if statement
        mtimer_back = t4;
        if (step == 0) PORTC = 1+128;
        if (step == 1) PORTC = 2+64;
        if (step == 2) PORTC = 4+32;
        if (step == 3)
        begin //begin step==3 if statement
            PORTC = 8+16;
            back_count--;
            step = -1;
        end //end step==3 if statement
        step++;                         //increment step
    end //end mtimer_back if statement
    //*************************** end of motor stuff *****************************************
    //*************************** NEURAL NETWORK STARTS HERE *********************************
    if (timer == 0)
    begin //begin if timer==0 statement; when timer==0, go through neural network code
        timer = t2;
        led = 0xff;                     //used for debugging later on
        PORTB = led;
        PORTD.7 = ~PORTD.7;             //make sure board works

        //First code front inputs
        // NEURON 1: LIGHT INPUT **************************************************************
        if (spiketime[1] == 0)          //if not in refractory period, want to update membrane potential
            V[1] = leakage*(V[1] + (float)(Ain1));   //update membrane potential based on Ain1, front LED
        else                            //in refractory period, membrane potential = vrest
            V[1] = vrest;
        if (V[1] < vrest)               //check - can never go below vrest
            V[1] = vrest;
        if (V[1] >= vthresh_light)
        begin //begin if V[1]: greater than threshold, should spike
            outputV[1] = 1;             //neuron 1 spikes
            led = 0xdf;
            PORTB = led;
            spiketime[1] = refrac;      //set refractory period here
            V[1] = vrest;               //membrane potential set back to resting potential
        end //end if V[1]
        else
            outputV[1] = 0;             //neuron doesn't spike if mem pot less than threshold
        // NEURON 0: PUSH BUTTON **************************************************************
        if ((~PINB) == 0x01)            //user pressing first pushbutton, strong weight connection here
            if (spiketime[0] == 0)
                V[0] = leakage*(V[0] + (.8*vthresh_push));   //this .8 multiplier was set randomly
            else
                V[0] = vrest;
        if (V[0] < vrest)               //checker - can never go below vrest
            V[0] = vrest;
        if (V[0] >= vthresh_push)
        begin //begin if V[0]
            outputV[0] = 1;             //neuron 0 spikes
            spiketime[0] = refrac;      //set refractory period here
            V[0] = vrest;               //membrane potential set back to resting potential
        end //end if V[0]
        else
            outputV[0] = 0;             //neuron doesn't spike if mem pot less than threshold

        // NEURON 3: LIGHT INHIBIT ************************************************************
        if (spiketime[3] == 0)          //not in refractory period
            V[3] = leakage*(V[3] + ((float)(Ain1)));   //update membrane pot based on ADC0
        else
            V[3] = vrest;               //in refrac period, set to vrest
        if (V[3] < vrest)               //checker - can never go below vrest
            V[3] = vrest;
        if (V[3] >= vthresh_shock)
        begin //begin if V[3]
            outputV[3] = -1;
            led = 0xbf;
            PORTB = led;
            spiketime[3] = refrac;
            V[3] = vrest;
        end //end if V[3]
        else
            outputV[3] = 0;
        // NEURON 2: MOTOR NEURON, TO MOVE FORWARD ********************************************
        if (spiketime[2] == 0)          //output neuron dependent on weights of all incoming connections
            V[2] = leakage*(V[2] + ((outputV[0]*weights[0][2]) + (outputV[1]*weights[1][2]) + (outputV[3]*weights[3][2]) - (outputV[5]*weights[5][2]) - (outputV[7]*weights[7][2])));
        else
            V[2] = vrest;
        if (V[2] < vrest)               //checker - can never go below vrest
            V[2] = vrest;
        if (V[2] >= vthresh_motor)      //should spike
        begin //begin if V[2]
            outputV[2] = 1;             //neuron 2 spikes here, move forwards
            spiketime[2] = refrac;      //set refrac period
            //set motor parameters
            step = 3;
            mtimer = 0;
            for_count = motorsteps;
            //set potential back to vrest
            V[2] = vrest;
        end //end if V[2]
        else
            outputV[2] = 0;             //doesn't spike

        //*************************************************************************************
        //Now code the next 4 back neurons, very similar to the top four, except different
        //numbering and using Ain2 instead of Ain1. These next 4 do the same thing, just
        //backwards instead of forwards.
        // NEURON 5: LIGHT INPUT **************************************************************
        if (spiketime[5] == 0)
            V[5] = leakage*(V[5] + (float)(Ain2));   //use Ain2 here instead of Ain1
        else
            V[5] = vrest;
        if (V[5] < vrest)               //checker - can never go below vrest
            V[5] = vrest;
        if (V[5] >= vthresh_light)      //should spike here
        begin //begin if V[5]
            outputV[5] = 1;
            led = 0xdf;
            PORTB = led;
            spiketime[5] = refrac;
            V[5] = vrest;
        end //end if V[5]
        else
            outputV[5] = 0;

        // NEURON 4: PUSH BUTTON **************************************************************
        if ((~PINB) == 0x02)            //move backwards if press second pushbutton
            if (spiketime[4] == 0)
                V[4] = leakage*(V[4] + (.8*vthresh_push));   //once again, .8 provided a good multiplier
            else
                V[4] = vrest;
        if (V[4] < vrest)               //checker - can never go below vrest
            V[4] = vrest;
        if (V[4] >= vthresh_push)
        begin //begin if V[4]
            outputV[4] = 1;
            spiketime[4] = refrac;
            V[4] = vrest;
        end //end if V[4]
        else
            outputV[4] = 0;

        // NEURON 7: LIGHT INHIBIT ************************************************************
        if (spiketime[7] == 0)
            V[7] = leakage*(V[7] + ((float)(Ain2)));   //use Ain2 here
        else
            V[7] = vrest;
        if (V[7] < vrest)               //checker - can never go below vrest
            V[7] = vrest;
        if (V[7] >= vthresh_shockback)
        begin //begin if V[7]
            outputV[7] = -1;
            led = 0xbf;
            PORTB = led;
            spiketime[7] = refrac;
            V[7] = vrest;
        end //end if V[7]
        else
            outputV[7] = 0;
        // NEURON 6: MOTOR NEURON *************************************************************
        if (spiketime[6] == 0)          //determine if backwards motor should spike, dependent on excit/inhib weights
            V[6] = leakage*(V[6] + ((outputV[4]*weights[4][6]) + (outputV[5]*weights[5][6]) + (outputV[7]*weights[7][6]) - (outputV[1]*weights[1][6]) - (outputV[3]*weights[3][6])));
        else
            V[6] = vrest;
        if (V[6] < vrest)               //checker - can never go below vrest
            V[6] = vrest;
        if (V[6] >= vthresh_motor)
        begin //begin if V[6]: spikes
            outputV[6] = 1;             //spikes
            spiketime[6] = refrac;
            //set motor control parameters
            step = 0;
            mtimer_back = 0;
            back_count = motorsteps;
            //set membrane potential back to vrest
            V[6] = vrest;
        end //end if V[6]
        else
            outputV[6] = 0;             //doesn't spike

        //************************ finished updating each neuron *****************************

        //************************ PRINTING TO HYPERTERMINAL SECTION *************************
        //print just weights[1][2] (forward light excit) and weights[5][6] (backward light excit)
        printf("%i %i\r\n", (int)(weights[1][2]*1000), (int)(weights[5][6]*1000));   //make into ints, output fewer characters

        //******************************* CHANGING WEIGHTS SECTION ***************************
        //this section changes the weights depending on the spiketime and which neurons are
        //currently spiking

        //******************** changing weights if neurons 0-3 spike
        if (outputV[0] == 1)            //forward pushbutton spikes
        begin //begin if statement for 0
            j = 2;
            if (spiketime[j] >= spiketime[0])          //learning, output spikes after input
                weights[0][j] = weights[0][j] + weights[0][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning, output spikes before input
                weights[0][j] = weights[0][j] - weights[0][j] * ULrate;
        end //end if statement for 0

        if (outputV[1] == 1)            //forward light neuron spikes
        begin //begin if statement for 1
            j = 2;                      //weights[1][2], excitatory
            if (spiketime[j] >= spiketime[1])          //learning, output spikes after input
                weights[1][j] = weights[1][j] + weights[1][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning, output spikes before input
                weights[1][j] = weights[1][j] - weights[1][j] * ULrate;
            j = 6;                      //weights[1][6], inhibitory
            if (spiketime[j] >= spiketime[1])          //learning
                weights[1][j] = weights[1][j] + weights[1][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[1][j] = weights[1][j] - weights[1][j] * ULrate_inhib;   //inhib
        end //end if statement for 1

        if (outputV[3] == -1)           //forward shock light neuron spiked on this cycle
        begin //begin if statement for 3
            j = 2;                      //weights[3][2], inhib
            if (spiketime[j] >= spiketime[3])          //learning
                weights[3][j] = weights[3][j] + weights[3][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[3][j] = weights[3][j] - weights[3][j] * ULrate_inhib;
            j = 6;                      //weights[3][6], excit
            if (spiketime[j] >= spiketime[3])          //learning, output and input spike on same cycle
                weights[3][j] = weights[3][j] + weights[3][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning, output spikes the cycle before the input
                weights[3][j] = weights[3][j] - weights[3][j] * ULrate_inhib;   //use inhib unlearning rate here
        end //end if statement for 3

        if (outputV[2] == 1)            //forward motor output spikes: output spike after input, always learning
        begin //begin if statement for outputV[2]
            for (i=0; i<=1; i++)        //first two input neurons
            begin //begin for loop
                if (spiketime[i] != 0 && outputV[i] == 0)
                    weights[i][2] = weights[i][2] + weights[i][2] * Lrate;   //learning
            end //end for loop
            i = 3;                      //inhibitory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][2] = weights[i][2] + weights[i][2] * Lrate;       //learning
            i = 5;                      //inhibitory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][2] = weights[i][2] + weights[i][2] * Lrate;       //learning
            i = 7;                      //excitatory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][2] = weights[i][2] + weights[i][2] * Lrate;       //learning
        end //end if statement for outputV[2] spiking
        //********************** changing weights for neurons 4-7, backward neurons
        //this code is very similar to changing weights for neurons 0-3
        if (outputV[4] == 1)            //backward pushbutton is spiking
        begin //begin if statement for 4
            j = 6;
            if (spiketime[j] >= spiketime[4])          //learning
                weights[4][j] = weights[4][j] + weights[4][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[4][j] = weights[4][j] - weights[4][j] * ULrate;
        end //end if statement for 4

        if (outputV[5] == 1)            //backward regular light neuron spikes
        begin //begin if statement for 5
            j = 6;                      //weights[5][6], excitatory
            if (spiketime[j] >= spiketime[5])          //learning
                weights[5][j] = weights[5][j] + weights[5][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[5][j] = weights[5][j] - weights[5][j] * ULrate;         //excit
            j = 2;                      //weights[5][2], inhibitory
            if (spiketime[j] >= spiketime[5])          //learning
                weights[5][j] = weights[5][j] + weights[5][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[5][j] = weights[5][j] - weights[5][j] * ULrate_inhib;   //inhib
        end //end if statement for 5

        if (outputV[7] == -1)           //backward shock light neuron
        begin //begin if statement for 7
            j = 6;                      //weights[7][6], inhib
            if (spiketime[j] >= spiketime[7])          //learning
                weights[7][j] = weights[7][j] + weights[7][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning
                weights[7][j] = weights[7][j] - weights[7][j] * ULrate_inhib;
            j = 2;                      //weights[7][2], excit
            if (spiketime[j] >= spiketime[7])          //learning, output and input spike on same cycle
                weights[7][j] = weights[7][j] + weights[7][j] * Lrate;
            else if (spiketime[j] != 0)                //unlearning, output spikes the cycle before the input
                weights[7][j] = weights[7][j] - weights[7][j] * ULrate_inhib;
        end //end if statement for 7

        if (outputV[6] == 1)            //backward motor output spikes: output spike after input, always learning
        begin //begin if statement for outputV[6]
            for (i=4; i<=5; i++)        //both excitatory
            begin //begin for loop
                if (spiketime[i] != 0 && outputV[i] == 0)
                    weights[i][6] = weights[i][6] + weights[i][6] * Lrate;
            end //end for loop
            i = 3;                      //excitatory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][6] = weights[i][6] + weights[i][6] * Lrate;
            i = 1;                      //inhibitory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][6] = weights[i][6] + weights[i][6] * Lrate;
            i = 7;                      //inhibitory
            if (spiketime[i] != 0 && outputV[i] == 0)
                weights[i][6] = weights[i][6] + weights[i][6] * Lrate;
        end //end if statement for outputV[6]
        //*************************** END CHANGING WEIGHTS HERE ******************************

        //************** CAPPING WEIGHTS SECTION HERE ****************************************
        //Weights normalized to 1, so cap the maximum weight at 1
        //Minimum allowed weight is .001; can't let a connection go to 0
        //Go through all weight connections and check with if statements
        for (i=0; i<=1; i++)            //[0][2], [1][2]
        begin //begin for statement
            if (weights[i][2] > 1)      //max capping
                weights[i][2] = 1;
            if (weights[i][2] < .001)   //min capping
                weights[i][2] = .001;
        end //end for statement
        for (i=3; i<=5; i++)
        begin
            if (weights[i][2] > 1)
                weights[i][2] = 1;
            if (weights[i][2] < .001)
                weights[i][2] = .001;
        end
        if (weights[7][2] > 1)
            weights[7][2] = 1;
        if (weights[7][2] < .001)
            weights[7][2] = .001;
        for (i=0; i<=1; i++)
        begin
            if (weights[i][6] > 1)
                weights[i][6] = 1;
            if (weights[i][6] < .001)
                weights[i][6] = .001;
        end
        for (i=3; i<=5; i++)
        begin
            if (weights[i][6] > 1)
                weights[i][6] = 1;
            if (weights[i][6] < .001)
                weights[i][6] = .001;
        end
        if (weights[7][6] > 1)
            weights[7][6] = 1;
        if (weights[7][6] < .001)
            weights[7][6] = .001;

    end //end if timer statement
    //****************************** END NEURAL NETWORK CODE HERE ****************************
    end //end while(1) loop
end //end main
//***********************************************************************
void initialize(void)
begin
    //ADC init block
    //init the A to D converter
    //channel zero / left adj / internal reference on green protoboard
    //!!!CONNECT Aref jumper!!!!
    //ADMUX = 0b00100000;
    ADMUX = 0b01100000;
    //enable ADC, set prescaler to 1/128*16MHz=125,000,
    //clear interrupt enable, and start a conversion
    ADCSR = 0b11000111;

    //set up the ports
    DDRB = 0x00;                //PORT B is an input port for the pushbuttons
    DDRC = 0xff;                //PORT C is an output port for motor control
    PORTC = 0x00;
    led = 0xff;
    PORTB = led;
    DDRD.7 = 1;                 //green led, to check if protoboard is working
    PORTD.7 = 0;                //green led

    //set up a 1 ms timer on timer 0
    TIMSK = 2;                  //turn on timer 0 compare match ISR
    OCR0 = 250;                 //set the compare register to 250 time ticks
    TCCR0 = 0b00001011;         //prescaler to 64 and turn on clear-on-match

    //init the task timers
    adctime = t3;
    timer = t2;
    mtimer = -1;
    mtimer_back = -1;

    //init motor counters
    for_count = 0;
    back_count = 0;

    //initialize neuron inputs
    //set thresholds
    vthresh_motor = 4;          //motor thresholds
    vthresh_light = 20;         //regular light input thresholds
    vthresh_push = 2;           //pushbutton neuron thresholds
    vthresh_shock = 60;         //forward shock input threshold
    vthresh_shockback = 45;     //backward shock input threshold (senses burning light)
    vrest = 0;                  //resting potential set to 0
    leakage = 0.999;            //leakage constant set to .999

    for (i=0; i<4; i++)         //init rest of neuron parameters
    begin
        outputV[i] = 0;         //spike outputs set to 0
        spiketime[i] = 0;       //spiketimes set to 0
        V[i] = vrest;           //start at vrest
        for (j=0; j<4; j++)
        begin
            weights[i][j] = 0;  //weights start at 0
        end
    end

    //now set only the 10 weights that matter in the network
    weights[0][2] = 1;          //forward pushbutton on forward motor, excit
    weights[1][2] = .005;       //forward light on forward motor, excit
    weights[3][2] = .5;         //forward shock neuron on forward motor neuron, inhib
    weights[5][2] = 0.2;        //backward light on forward motor, inhib
    weights[7][2] = 0.6;        //backward shock on forward motor, excit
    weights[1][6] = .2;         //forward light on back motor neuron, inhib
    weights[3][6] = .6;         //forward shock on back motor, excit
    weights[4][6] = 1;          //back pushbutton on back motor, excit
    weights[5][6] = .005;       //back light on back motor neuron, excit
    weights[7][6] = .5;         //back shock neuron on back motor neuron, inhib

    Lrate = 0.1;                //learning rate
    ULrate = 0.11;              //unlearning rate slightly larger than learning rate for excit neurons
    //ULrate_inhib = 0.1;
    ULrate_inhib = 0.02;

    //serial setup for debugging using printf, etc. - use of hyperterm
    UCSRB = 0x18;
    UBRRL = 103;
    printf("Starting...\r\n");

    //crank up the ISRs
    #asm
        sei
    #endasm
end //end initialize()
//********************************************************************************
The following is an overview of the connections to the MCU. Specific circuits can be found in the hardware section.
Part                              | Cost ($)
Atmel Mega32                      | 8
Brown Breadboard                  | 2.50
Custom PC Board                   | 5
Power Supply                      | 5
Max233CPP                         | Free (Sampled from Maxim)
RS232 Connector                   | 2
2 Stepper Motors (#061-015)       | 2 ($1 Each)
ULN2803                           | Free (Sampled from TI)
Resistors and Capacitors          | Free (In Lab)
Wheels                            | Free (Borrowed, Found)
Wood Base                         | .30
LM340LAZ-5.0 regulator            | Free (Sampled)
Green LEDs                        | Free (In Lab)
Nails, Brackets, Washers, Bolts   | 4
Total                             | $28.80
Therefore, we stayed well below the $50 limit in the project guidelines.
Most parts of the project were done by both members of the group. The following are the specific tasks involved:
1. We would first like to thank
2. Data Sheets:
http://instruct1.cit.cornell.edu/courses/ee476/AtmelStuff/full32.pdf
http://focus.ti.com/lit/ds/symlink/uln2803a.pdf
3. Vendor Sites:
4. Online References:
http://www.people.cornell.edu/pages/mlk24/boom/main.html
http://www.alife.org/alife8/proceedings/sub4681.pdf
http://instruct1.cit.cornell.edu/courses/ee476/FinalProjects/s2001/vp2/
http://www.nbb.cornell.edu/neurobio/land/PROJECTS/BioNB330/index.html
A side view of our robot:
A top view of our robot:
The robot in action:
The robot with its creators: