Computational Hebbian Synapses and Self-Organizing Neural Maps




Home

This website presents the work from my final project in the Computational Section of BioNB 2220, Spring 2009.

Abstract

This project investigates the efficacy of two variants of a Hebbian learning algorithm. Current research shows that the strengthening of synapses from pre-synaptic neurons onto post-synaptic neurons is a function of the depolarization of the post-synaptic cell membrane; coincidence detection, and the associativity and specificity that follow from it, therefore require no spiking activity in the post-synaptic neuron. For this project, I created a neural model in which neurons self-organize under a Hebbian learning algorithm. The extent of self-organization was quantified under two different Hebbian learning rules: one that required no spiking of the post-synaptic neurons (a so-called activity-independent output variable) and one that required spiking (a so-called activity-dependent output variable). Repeated simulations show that self-organization is significantly greater when the learning rule does not require post-synaptic spiking to induce learning.
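To make the contrast between the two rules concrete, here is a minimal sketch in Python of a single Hebbian weight update of the classic form dw = eta * x_pre * y_post, where the two variants differ only in the post-synaptic output variable y_post. This is an illustration of the general idea, not the model used in this project (see My Model for that); all names, thresholds, and parameter values below are hypothetical placeholders.

def hebbian_update(w, x_pre, v_post, eta=0.01, spike_threshold=1.0,
                   activity_dependent=False):
    """One Hebbian weight update: dw = eta * x_pre * y_post.

    The two variants differ only in the post-synaptic output variable:
      - activity-independent: y_post is the graded membrane
        depolarization, so learning needs no post-synaptic spike;
      - activity-dependent: y_post is 1 only when the depolarization
        crosses the spike threshold, and 0 otherwise.

    Parameter values here are illustrative placeholders, not the
    project's actual settings.
    """
    if activity_dependent:
        y_post = 1.0 if v_post >= spike_threshold else 0.0  # spiking required
    else:
        y_post = v_post  # graded depolarization drives learning directly
    return w + eta * x_pre * y_post

# Example: a sub-threshold depolarization strengthens the synapse only
# under the activity-independent rule.
w0, x_pre, v_post = 0.5, 1.0, 0.6   # v_post is below the spike threshold
print(hebbian_update(w0, x_pre, v_post, activity_dependent=False))  # 0.506
print(hebbian_update(w0, x_pre, v_post, activity_dependent=True))   # 0.5 (no change)

The sub-threshold case is exactly where the two rules diverge: the activity-independent variant can continue to adjust weights even when post-synaptic cells never fire, which is the property the simulations in this project set out to quantify.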

Files for my project can be downloaded below:


Matt Conlon
mac246@cornell.edu
BioNB 2220 Computational Section
Final Project
Presented April 28, 2009