Introduction top
Robotic Candy Sorter
The purpose of the Robotic Candy Sorter project was to implement a 3-degree-of-freedom robotic arm and vision system that can detect and sort candy by color. This was accomplished by building an integrated system that leverages high-level (Raspberry Pi) and low-level (PIC32) processing to tackle an ambitious task. The Raspberry Pi (RPi) handled the image processing and sorting algorithms, while the PIC32 microcontroller (uC) maintained control of the motors by solving the inverse kinematics (IK).

Figure 1: The finished project.
Many embedded systems leverage a hierarchical structure in their product architecture for a more robust and efficient product. As the product becomes more complex, it becomes preferable to segment tasks according to each processing unit's role in order to strike the best tradeoffs in the system. For instance, it would be impractical for a single processing unit to handle user interface tasks while also handling real-time control tasks: user interface tasks are very tolerant of missed deadlines, while real-time control tasks are not. Rather than relying on a single processing unit to handle both ends of the spectrum, for example by running a real-time operating system (RTOS), this project explores a more modular build that assigns high-level and low-level tasks to separate processing units in order to implement a complex system like a vision-driven robotic arm.
The Robotic Candy Sorter is an interdisciplinary project that integrates mechanical, electrical, and software aspects. The report discusses the inverse kinematics involved in mapping servo angles to orthogonal spatial coordinates, the individual electrical systems, and the software that handles object recognition to drive the FSM. The segmentation of the processors' roles in this project aligns well with our academic purposes: the RPi development was done for Cornell University's ECE 5725 Embedded OS course, and the PIC32 development was done for the ECE 4760 Microcontrollers course.
System Overview top

Figure 2: Project Block diagram.
The high-level task processing unit is the Raspberry Pi 2 Model B with a 900 MHz quad-core 32-bit ARM Cortex-A7 CPU. The low-level task processing unit is the Microchip PIC32MX250F128B with a 40 MHz MIPS M4K 32-bit core and a 5-stage pipeline. Communication between the two processors is handled by a UART serial interface. To summarize the software of the project, the robotic arm uses vision-based control. Rather than precisely mapping camera pixel coordinates to robot x, y, z coordinates, the software uses an adaptive control scheme in which processing each frame yields an incremental movement of the arm. The vision software uses the OpenCV and PySerial libraries to process each frame and send commands to the PIC32. The PIC32 receives the commands, solves the inverse kinematics, and then outputs the new angles to each of the servos on the arm. The arm was built from the popular MeArm design.
Standards
For our final project, no formal standards needed to be followed.
Copyrights
We did use some existing code when constructing our project. This is mentioned below as needed, and all attribution is given in the References section.
Hardware Implementation top

Figure 3: CAD rendering of the base.
A lot of work went into designing the hardware to meet the requirements of the class. We were required to maintain a budget of under $100 and complete the project in five weeks. To do this we decided to use prebuilt systems whenever possible. We found that the MeArm 1.0 design met all of our requirements without breaking our budget. The 3mm acrylic for the arm and base was salvaged from the scrap pile at Cornell’s Rapid Prototyping Lab and all of the bolts, nuts and servos were collected from the leftovers of past projects of the Cornell Maker Lab. Additionally, the PCB mill in the Maker Lab was used to manufacture the printed circuit board.
The major flaw that needed to be fixed with the MeArm design was the instability of the base. The original design left a large gap between the fixed base and the rotating part, which meant that whenever the arm moved there was additional deflection caused by this gap. To fix this instability, we designed a two-piece spacer system that is inserted in the base to prevent it from tipping when the arm extends. This greatly improved the rigidity of the whole arm and made testing easier, as all of the moves were now repeatable.
The goal of the base design was to protect and display the electronics while also allowing us to mount the camera. The base uses a jig-saw and t-nut pattern to allow the acrylic pieces to be assembled with minimal effort. This also produces a very rigid column which was needed to support the cantilevered camera mount. All of the CAD was done in Autodesk Inventor and then the drawings were exported as PDFs so that they could be laser cut.

Figure 4: Camera mount.
The design of the camera mount was the hardest part of the mechanical design. We wanted to have a system that had flexible mounting options so that we could adjust the camera to get the perfect angle and maximize field of view. We ended up mounting the camera in a parallel slot system so it could be moved in and out to adjust the center of the image. Additionally, spacers were inserted under the camera to cancel out the tilt caused by the deflection of the cantilever. Finally, the electronics tower was made slightly taller than it needed to be so that we had room to adjust the range of the image.

Figure 5: The Milled Printed Circuit Board.

Figure 6: The Final Schematic.
The design of the PCB was very simple, as it only needs to break out the connections and distribute power. Since the PIC32 runs off of 3.3V, a high-power 5V source was needed for the servos. This was provided by an external DC power supply connected through a 2.1 mm barrel jack. Additionally, breakouts were made for the LCD, serial, and PWM outputs. Finally, four potentiometers were included to give us the ability to control the arm without a computer. All of the electronics and hardware were assembled with no problems, and we were able to quickly transition to writing the software.
Software Implementation top
The main execution flow of the robotic arm is handled on the Raspberry Pi. The RPi provides general-purpose input and output peripheral ports as well as ease of development on a Linux operating system. As a result, the RPi handles the high-level sequence of execution as well as the integration with the PiCamera for image processing. Once the image processing section of the FSM is completed, the RPi initiates the sorting sequence, which detects the position of the object and requests a sequence of motor control tasks from the PIC32 microcontroller via a serial communication protocol. This structure provides a high level of abstraction in the RPi control scheme because the motor control commands are delegated to the PIC32, allowing the RPi to proceed to the next stage of the FSM.
FSM on the Raspberry Pi
The Finite State Machine is executed in the main while loop of the Python script for the Robotic Candy Sorter. The FSM has 5 different states: IDLE, INIT, DETECT, MOVE TO, and MOVE OBJECT. The actions executed in each state and the transitions between states are summarized in the illustration below.

Figure 7: The Finite State Machine.
In the IDLE state, the script makes a blocking call to receive a valid input from the user's keyboard. If the character input matches the list of sortable colors, the FSM advances to the INIT state. In the INIT state, an instance of the robot arm is created and the serial interface is initialized through the PySerial library. In addition, one frame of the input image is processed to create a list of starburst objects, determining how many objects are in the visible sorting field. Each starburst object holds color, position, and isSorted fields to facilitate the sorting and tracking procedure. The DETECT state is where the bulk of the image processing is done, determining the updated position of the robot claw as well as the position of the targeted starburst. To make the control robust to outliers or dropped frames, we apply a median filter of size 5 to the detected position; it leverages the history of positions from the past 5 frames to make the control scheme more error-proof. If the Euclidean distance between the claw and the targeted object is less than a specified radius of 10 pixels, the FSM transitions to the MOVE OBJECT state. If the distance is greater than the radius, the FSM transitions to MOVE TO, where the arm takes an incremental step in the calculated delta X, delta Y direction. Once the arm has taken enough incremental steps to reach the targeted object, it transitions to the MOVE OBJECT state, in which the robot executes a sequence of predefined movements to close the claw, pick up the starburst, move to the sort basket, drop the starburst, and return to the default position. Once completed, the FSM returns to the IDLE state, where the user can input another sort request.
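A minimal sketch of this loop is shown below. The helper functions (init_arm, capture_frame, find_claw, find_starbursts) are hypothetical stand-ins for the actual modules; the state names, the 5-frame median filter, and the 10-pixel grab radius follow the description above.

```python
from collections import deque
import math

IDLE, INIT, DETECT, MOVE_TO, MOVE_OBJECT = range(5)
GRAB_RADIUS = 10                       # pixels, as described above
claw_history = deque(maxlen=5)         # median filter over the last 5 frames

def median_point(history):
    xs = sorted(p[0] for p in history)
    ys = sorted(p[1] for p in history)
    return xs[len(xs) // 2], ys[len(ys) // 2]

state = IDLE
while True:
    if state == IDLE:
        color = raw_input("color to sort: ")        # blocking keyboard input (input() on Python 3)
        if color in ("red", "yellow", "green"):
            state = INIT
    elif state == INIT:
        arm = init_arm()                            # hypothetical helpers standing in
        targets = find_starbursts(capture_frame())  # for the real arm and vision modules
        state = DETECT
    elif state == DETECT:
        frame = capture_frame()
        claw_history.append(find_claw(frame))
        claw = median_point(claw_history)
        target = next(t for t in targets if t.color == color and not t.isSorted)
        dx = target.position[0] - claw[0]
        dy = target.position[1] - claw[1]
        state = MOVE_OBJECT if math.hypot(dx, dy) < GRAB_RADIUS else MOVE_TO
    elif state == MOVE_TO:
        arm.step(dx, dy)                            # incremental move, then detect again
        state = DETECT
    elif state == MOVE_OBJECT:
        arm.pick_and_sort()                         # scripted grab, lift, drop, return
        target.isSorted = True
        state = IDLE
```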
Object Detection on Raspberry Pi
Object recognition is handled using the OpenCV library in Python on the RPi. The PiCamera library captures the raw RGB frames from the 5 MP camera into numpy 2D arrays, which provide the Mat structure commonly used by OpenCV. Once the RGB frame from the PiCamera is captured in matrix format, image operations from OpenCV can be easily applied to ultimately recognize colored starburst objects. The image processing pipeline is shown below.

Figure 8: Object Detection sequence.
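Before any color processing, frames must be pulled from the PiCamera into numpy arrays. The following sketch uses the standard picamera streaming pattern; the 640x480 working resolution is an assumption made for illustration.

```python
import time
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)         # assumed working resolution (the sensor itself is 5 MP)
raw = PiRGBArray(camera, size=camera.resolution)
time.sleep(2)                          # give the sensor time to warm up

# each iteration yields the frame as an HxWx3 uint8 numpy array,
# which OpenCV accepts directly in place of its Mat type
for capture in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    frame = capture.array
    # ... run the detection pipeline on `frame` here ...
    raw.truncate(0)                    # reset the buffer before the next frame
```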
We first convert the raw captured RGB input into the HSV color space. Operating in HSV provides significant benefits over RGB, such as isolating the desired color (hue) from lighting and value intensity. Bounding a particular color then only requires a lower and upper limit on a single variable rather than on all three RGB variables of the raw input. This isolation also provides robustness to fluctuations in lighting conditions. The next matrix operation thresholds the image, flooring or ceiling pixel values into a binary matrix that indicates whether each pixel falls within the desired HSV bounds for the target colors of red, yellow, and green.
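A sketch of the conversion and thresholding with OpenCV, operating on a captured frame, is shown below; the yellow bounds are placeholder values, since the project's actual ranges were tuned experimentally.

```python
import cv2
import numpy as np

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# placeholder bounds for yellow; the real ranges were tuned by hand
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([35, 255, 255])
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)   # 255 where the pixel is in range, 0 elsewhere

# note: red wraps around hue 0, so it generally needs two ranges OR'd together
```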
To determine the position of the claw, we tagged the tips of the claw with green markers. The position of the claw is determined by averaging the two centers of mass of the green objects, and it is only updated when two green contours are clearly captured.
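Continuing from the snippets above, and assuming green_contours holds the contours found in the green mask, the centroid averaging might look like the following sketch.

```python
def contour_center(contour):
    m = cv2.moments(contour)
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

# only update the claw position when exactly two green markers are visible
if len(green_contours) == 2:
    (x1, y1), (x2, y2) = [contour_center(c) for c in green_contours]
    claw_position = ((x1 + x2) // 2, (y1 + y2) // 2)
```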
The threshold outputs corresponding to the claw and the targeted starburst are then de-noised with a morphological filter that erodes and dilates the binary matrix. The morphological filter resulted in better contour detection than alternative filtering schemes such as a traditional median or Gaussian filter with a 5x5 window. This is because eroding and then dilating maintains the edges of the object while removing the unconnected noise surrounding the body.
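A sketch of this denoising step, continuing with the mask and imports from the previous snippets and assuming a 5x5 structuring element:

```python
kernel = np.ones((5, 5), np.uint8)              # assumed kernel size
mask = cv2.erode(mask, kernel, iterations=1)    # strip away isolated noise pixels
mask = cv2.dilate(mask, kernel, iterations=1)   # grow the surviving blobs back to size
# the same pair of operations is available as cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```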
The greatest benefit of the OpenCV library is its contour detection function. Given a binary matrix as input, the contour detection function returns a list of bounded objects. This list may contain spurious objects from remaining noise, or sometimes the region of one starburst is detected as duplicate contours. To prevent these cases, we filter out contours by size and shape. If a detected contour is smaller than a specified pixel area, it is not counted as a starburst. In addition, because starbursts have a regular ratio between width and height, we can further filter the contours to isolate instances that are starburst-sized and shaped. The full sequence of image matrix operations described above is illustrated in Figure 9 below.
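A sketch of the contour detection and filtering is shown below; the area cutoff and aspect-ratio bounds are placeholders, not the tuned values.

```python
# findContours returns (contours, hierarchy) on OpenCV 2.x/4.x and
# (image, contours, hierarchy) on 3.x; result[-2] works for all of them
result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = result[-2]

MIN_AREA = 200                                   # placeholder pixel-area cutoff
candidates = []
for c in contours:
    if cv2.contourArea(c) < MIN_AREA:
        continue                                 # too small to be a starburst
    x, y, w, h = cv2.boundingRect(c)
    aspect = float(w) / h
    if 0.5 < aspect < 2.0:                       # roughly square, like a starburst
        candidates.append(c)
```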

Figure 9: Raspberry Pi screen capture with code running.
To facilitate debugging and development, we overlay the detected contours and their centers of mass onto the raw RGB capture frame. In addition, we print all relevant variables to the terminal. As shown in the figure, the terminal output captures the current state, the detected delta X and Y between the starburst and the claw, and the Euclidean distance. The state transitions clearly show the arm incrementally hovering toward the targeted starburst until the calculated distance is within grabbing range, at which point the sorting movement sequence begins.
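The overlay and terminal output can be reproduced with a few OpenCV calls, sketched below using the variables from the earlier snippets.

```python
cv2.drawContours(frame, candidates, -1, (0, 255, 0), 2)     # outline detected starbursts
cv2.circle(frame, claw_position, 4, (255, 0, 0), -1)        # mark the claw center of mass
cv2.imshow("debug", frame)
cv2.waitKey(1)
print("state=%s dx=%d dy=%d dist=%.1f" % (state, dx, dy, math.hypot(dx, dy)))
```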
Serial Communication
As mentioned above, the PySerial library is used to send and receive data between the RPi and the PIC32. A command language was developed to standardize how the data is sent: each command consists of a single command letter followed by a float value. There are options to set absolute Cartesian or cylindrical coordinates, or to take incremental steps in any direction. A Python module was written to abstract this language into a set of function calls that can be used to initialize and control the arm.
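A sketch of what such a wrapper might look like with PySerial is shown below; the serial port, baud rate, and command letters are assumptions for illustration, not the project's actual protocol.

```python
import serial

ser = serial.Serial("/dev/ttyAMA0", 9600, timeout=1)    # port and baud rate assumed

def send_command(letter, value):
    """Send one command: a single command letter followed by a float value."""
    ser.write(("%c%.2f\n" % (letter, value)).encode("ascii"))

# hypothetical command letters, for illustration only
send_command("t", 90.0)     # e.g. set an absolute cylindrical coordinate
send_command("x", 5.0)      # e.g. take a step in one direction
```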
PIC32 Code

Figure 10: The Base angle definition.

Figure 11: The Shoulder and Elbow angle definition.

Figure 12: The Claw angle definition.
To begin discussing the PIC32 code, we must first define how each joint angle is measured and then go over how this is transformed into cylindrical coordinates with inverse kinematics. Since our arm has 3 DOF and a claw, there are four joints to consider: the base, the shoulder, the elbow, and the claw. The goal of inverse kinematics is to take an input coordinate (theta, r, z, claw) and determine what joint angles would get the arm there. Looking from above, the base angle is simply the internal angle formed by the arm and the line extending from the center to the right. If we let this line be the x-axis, then the base angle is the desired theta coordinate. The claw angle is defined in a similar way: it is the internal angle between the center of the claw and the gripper position, and is thus the claw coordinate.

Figure 13: The Inverse Kinematic System.
The final two joints, the shoulder and the elbow, were the most difficult because the spatial coordinates they produce, z and r, require simultaneous movement of both joints. The inverse kinematics is solved from the triangle formed by the forearm, the bicep, and the line to the desired r and z coordinates. Through the law of cosines and some more trigonometry, each joint angle can be found: the shoulder angle is the angle of elevation of the bicep above the horizontal, and the elbow angle is the angle of declination of the forearm below the horizontal. The math produced a few equations, which were converted into efficient C code to run on the arm.
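The actual implementation is C code on the PIC32, but the trigonometry can be sketched in a few lines of Python; the link lengths here are placeholders, and the sign conventions for the servo angles are assumptions.

```python
import math

BICEP = 80.0      # assumed link lengths in mm; the real values come from the MeArm geometry
FOREARM = 80.0

def solve_ik(theta, r, z):
    """Solve the triangle formed by the bicep, the forearm, and the
    line from the shoulder to the target point (r, z)."""
    d = math.hypot(r, z)                      # shoulder-to-target distance
    if d > BICEP + FOREARM:
        raise ValueError("target out of reach")
    # law of cosines on the link triangle
    alpha = math.acos((BICEP**2 + d**2 - FOREARM**2) / (2 * BICEP * d))
    phi = math.acos((BICEP**2 + FOREARM**2 - d**2) / (2 * BICEP * FOREARM))
    shoulder = math.degrees(math.atan2(z, r) + alpha)   # bicep elevation above horizontal
    elbow = 180.0 - math.degrees(phi) - shoulder         # forearm declination below horizontal
    return theta, shoulder, elbow                        # base angle is just theta
```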
Code execution begins in the main function, which makes calls to initialize and start the thread scheduler. To keep the code short, it was separated into modules. The first of these is types.h, which houses all of the structs that package the joint and servo values. Additionally, scaling.h and scaling.c hold helper functions used to convert and map coordinate systems. The user input is handled in the parse.h and parse.c files, which provide a handler function for each command that first verifies that the command is within a valid range and then sets the appropriate value. To track the limits of the arm, limits.h and limits.c house the macro constants and functions for testing whether a move is in bounds. If the move is in bounds, the calculation thread calls the solve function in ik.h and ik.c to get the output servo angles. The output thread then uses arm.h and arm.c to first convert the angles to PWM values, using the calibration data in calibration.h, and then write the PWM data to the motors.
The main advantage of the modular design was that each module could be fully written and tested before moving on to the next part, and that the code was easily split into threads. This became very useful when working with the inverse kinematics, as it allowed us to verify that the math worked and that the arm would not try to destroy itself. To help protect the arm, conservative limits were placed on the range of each coordinate, which limited what the arm could do but was not a problem in our setup. Three threads were used: the command thread to take user input, the calculation thread to process it, and the output thread to send the results to the servos.
Testing top
Throughout the entire development of the project, we leveraged modular functionality, an incremental development approach, and test-driven design. Given the extensive amount of mechanical, electrical, and software work required to integrate the entire robotic arm, we began by clearly defining the design components, the high-level requirements within each component, and the dependencies between subsystems. This approach allowed us to highly parallelize the development process. For instance, the vision and FSM work on the Raspberry Pi was developed almost entirely in parallel with the motor control on the PIC32.
The precision of the robotic arm movement was the most challenging aspect of this project. To overcome this hurdle, we introduced vision-based corrective control so that the system could use basic visual feedback to correct its path to the target. In addition, the custom circuit board allowed us to fine-tune the mapping from PWM servo values to angles using the potentiometers. Lastly, the color thresholding ranges for the vision system, as well as the material of the object to sort, had to be tested across various options. We initially started with Skittles because they offered more easily distinguished colors, but their small, slippery, round surface added mechanical complexity to grabbing. We switched to Starburst, an easier object to sort thanks to its rigidity and larger size, but with less favorable colors to detect.
Safety
We put a lot of thought into safety, to protect both our project and ourselves. The base is designed so that, with the limits set in software, the arm cannot reach outside of it or collide with itself. Additionally, since the acrylic that we used was not FDA approved, we did not eat the candy after the arm handled it. Finally, we enclosed our electronics and used UL-listed power supplies to prevent shorting and fire hazards.
Usability
A lot of focus was put into making the arm easy to demo and use. All of the code on the PIC32 starts up automatically, and any terminal program can then be used to communicate with it. All of the code on the Raspberry Pi starts from a shell script so that important steps are not done out of order or missed. Additionally, all of the electronics and wires are enclosed to prevent damage and the need for repairs. We believe that these systems are simple enough that anyone who can use a computer could use our project.
While our project does not tackle a big issue or fix a major life problem, we believe that it still has merit. It is an interesting and fun introduction to robotics and can be used as an attractive display to inspire the next generation of engineers.
Interference with Other Designs
All of our software and hardware is self-contained, and we know that it poses no harm to other designs or people.
Results and Conclusions top
Overall, we were very happy with the project. We accomplished our original goal, which was to learn about integrating high- and low-level systems to accomplish a task that would be difficult for either of them to do individually. Our robot was able to sort candy very reliably in an average of 20 seconds. While this time is not groundbreaking, we still learned a lot from this project and have many ideas to take it further. We would like to first improve our motion planning to make the arm movements more graceful and to prevent it from occasionally dropping the candy. Additionally, we would like to redesign the claw to allow more irregular candy to be sorted. These improvements would have been possible with the hardware that we had available, and if we had to change one thing about the project it would be to start earlier in the semester so that we could get to all of the fun enhancements at the end. Other than that, there is nothing we would change about our project and we are happy to call it finished.

Figure 14: The Final Bill of Materials.
Appendices top
A. Consent to publish
The group approves this report for inclusion on the course website. The group approves the video for inclusion on the course YouTube channel.
B. Code and Resources
All of the code and other files used to build this project are available on GitHub: https://github.com/PeterSlater/InefficientSkittleSorter
C. Member Contribution
The Python code was developed by both Mark and Peter, with Mark leading the vision development and Peter working on the sorting algorithm and serial communication. The hardware was designed and built by Peter. All of the C code running on the PIC32 was written by Peter.
D. References
MeArm is a small "Hackable" Robot Arm. It's an Open Source project by Benjamin Gray and Jack Howard. Licensed under creative commons share alike. http://www.thingiverse.com/thing:993759
OpenCV was provided under its BSD license. http://opencv.org/license.html
PySerial was provided under its BSD license. https://pythonhosted.org/pyserial/
Raspberry Pi 2 Model B, 3D CAD assembly model - SOLIDWORKS, STEP / IGES - GrabCAD https://grabcad.com/library/raspberry-pi-2-model-b-3d-cad-assembly-model-2
Raspberry Pi Camera Module Mechanical Dimensions http://www.raspberrypi-spy.co.uk/2013/05/pi-camera-module-mechanical-dimensions/#prettyPhoto
MeArm - Your Robot - v1.0 by phenoptix - Thingiverse http://www.thingiverse.com/thing:993759
MeArm Robot Arm - Your Robot - V1.0 - Instructables http://www.instructables.com/id/MeArm-Robot-Arm-Your-Robot-V10/?ALLSTEPS
Learn about Robot Inverse Kinematics http://www.learnaboutrobots.com/inverseKinematics.htm