To manage our code more easily, we divided it into 4 files:
finalproc.c contains the main program loop
xymotors.h contains functions that control the forward and backward movement of the stepper motors
video_int.h contains the video interrupt, written by Prof. Bruce Land
video.h contains the video code written by Prof. Bruce Land
Our main program loop first waits for input from the user. The program starts out in full-stepping mode, which will scan nearly an entire sheet of paper. The user can instead press the half-step button to have the scanner cover a smaller area at better resolution (the motors will operate on half-steps). The program then waits until the user selects a scan mode: one-directional, zig-zag with slowing, or normal zig-zag. Because the output is sent to a TV, we have a fixed image size we can display; since each pixel is simply a bit that is either on or off, the maximum screen size is 128x100 pixels. All the scan modes use up this entire space, so at full-stepping the physical area covered is twice as large as when the motors are half-stepped.
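With 1 bit per pixel, the 128x100 screen fits in (128/8) x 100 = 1600 bytes of buffer. The sketch below shows the bit arithmetic a point-plot routine over such a buffer uses; the names here are illustrative stand-ins, not the identifiers from Prof. Land's video code.

```c
#include <stdint.h>

#define SCREEN_W 128   /* pixels per line  */
#define SCREEN_H 100   /* lines on screen  */

/* 1 bit per pixel: 16 bytes per line, 1600 bytes total. */
static uint8_t screen[(SCREEN_W / 8) * SCREEN_H];

/* Set or clear one pixel; bits are packed MSB-first within each byte. */
static void set_pixel(uint8_t x, uint8_t y, uint8_t on)
{
    uint16_t byte = (uint16_t)y * (SCREEN_W / 8) + (x >> 3);
    uint8_t  mask = (uint8_t)(0x80 >> (x & 7));
    if (on)
        screen[byte] |= mask;
    else
        screen[byte] &= (uint8_t)~mask;
}
```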
Before the image scan begins, the sensor calibrates: it first reads a totally white square, then moves to a totally black square, and sets the threshold between black and white at exactly the midpoint of the two readings. It might be better to set the threshold closer to white, since extensive testing made it apparent that the sensor is more sensitive to white on black than to black on white (explained later).
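The calibration step can be sketched as follows. The function names are hypothetical, and the sketch assumes white reflects more light and therefore gives a higher ADC reading; the actual polarity depends on the sensor circuit.

```c
#include <stdint.h>

/* Threshold = exact midpoint between the white and black readings.
   Widen to 16 bits before adding so the 8-bit sum cannot overflow. */
static uint8_t midpoint_threshold(uint8_t white_reading, uint8_t black_reading)
{
    return (uint8_t)(((uint16_t)white_reading + (uint16_t)black_reading) / 2);
}

/* Classify one reading; assumes black = low reading (less reflected light). */
static uint8_t is_black(uint8_t reading, uint8_t threshold)
{
    return reading < threshold;
}
```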
First, we must explain that our scanner was programmed so that the sensor scans along the x axis and steps along the y axis. That is, the x axle and motor constantly move back and forth, while the y axle and motor move only one step forward each time the x motor returns to its starting point.
When the scanner actually takes sensor data for a point, it samples the ADC three times with a 1 ms delay between samples, classifies each sample as black or white, and sets the pixel to whichever value occurred two or more times.
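This is a three-sample majority vote. A minimal sketch, with the three raw readings gathered by the caller (1 ms apart) and a hypothetical threshold parameter:

```c
#include <stdint.h>

/* Majority vote over three ADC readings taken 1 ms apart.
   Returns 1 if the pixel should be black, 0 if white.
   Assumes black = low reading (less reflected light). */
static uint8_t majority_black(const uint8_t reading[3], uint8_t threshold)
{
    uint8_t black_votes = 0;
    for (uint8_t i = 0; i < 3; i++)
        if (reading[i] < threshold)
            black_votes++;
    return black_votes >= 2;   /* at least 2 of 3 samples agree */
}
```

The vote filters out single-sample glitches from sensor noise or a bounce in the carriage.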
In one-directional mode, the scanner only samples the image when moving in the positive x direction. It then moves the sensor all the way back and steps in y. This method is the slowest, but it typically produces the least distorted image, since our zig-zag methods tend to skew the image. This mode also has the slowing feature: the x motor slows down as it nears both ends of a line. This feature was implemented to reduce the sensor shaking that occurs when the motor switches direction.
The zig-zag with slowing mode samples the image on both the forward and backward movement of the x motor, which effectively doubles the scan speed of the previous mode. However, an inherent problem that occurred with zig-zag scanning was that the scanned lines displayed on the TV were systematically misaligned. We attributed this to the angle at which the sensor points at the image surface (it is not exactly 90 degrees) combined with the two directions of sensor movement. We compensated by having the x motor take a couple of pre-emptive steps just before it started its backward-movement loop. This way, we were able to re-align the screen lines sampled from forward and backward movement.
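The project applies this compensation by physically stepping the motor before the backward sampling loop; an equivalent way to see the bookkeeping is as a fixed column offset on the backward pass. The sketch below is illustrative only, and the offset constant is an assumption ("a couple" of steps):

```c
/* Map a backward-pass step index to a screen column. The backward pass
   runs from the far edge toward x = 0; SKEW_STEPS models the couple of
   pre-emptive steps that re-align it with the forward pass. */
#define LINE_PIXELS 128
#define SKEW_STEPS  2    /* assumed value, tuned experimentally */

static int backward_column(int step_index)
{
    int x = (LINE_PIXELS - 1 - step_index) - SKEW_STEPS;
    return x < 0 ? 0 : x;   /* clamp at the left screen edge */
}
```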
The normal zig-zag mode is simply the same as above but without slowing at the edges; we included it so that we could compare image quality with and without the slowing feature.
Each mode goes through a similar loop structure: sample the sensor, write the pixel to the video buffer, check for a user reset, check the TV display button, and step the motor.
The trickiest part of writing the code was getting the proper motor brush (coil-energizing) scheme set up for when the motor reverses from forward to backward. Without this, the motor acted unreliably and could slip, distorting the image even further.
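One way to get a clean reversal, sketched below under the assumption of a standard unipolar stepper (one bit per winding): keep a single persistent index into the phase table and step it +1 or -1, rather than restarting the table, so the coil pattern stays continuous when direction changes. The port values here are the textbook full-step and half-step sequences, not necessarily the ones our hardware used.

```c
#include <stdint.h>

/* Standard unipolar stepper sequences, one bit per winding. */
static const uint8_t full_step[4] = { 0x3, 0x6, 0xC, 0x9 };           /* two coils on  */
static const uint8_t half_step[8] = { 0x1, 0x3, 0x2, 0x6,
                                      0x4, 0xC, 0x8, 0x9 };           /* alternating 1/2 coils */

/* Advance the phase index by dir (+1 forward, -1 backward), wrapping
   around the table. Continuing from the current phase on reversal is
   what prevents the slip that occurs if the sequence restarts. */
static uint8_t next_phase(uint8_t idx, int8_t dir, uint8_t table_len)
{
    return (uint8_t)((idx + table_len + dir) % table_len);
}
```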