Depth Image Segmentation Using Statistical Methods

Paper presented to the Junior Science, Engineering, and Humanities Symposium at Maryville University in 2016.


This paper presents an algorithm that uses unsupervised machine learning methods to
classify n objects in a depth image. Other depth-image segmentation methods, for
example the Laplacian operator and support vector machines (SVMs), return only the visible
edges within the image, which can then be used to find the objects within the frame; this
algorithm instead returns the set of points belonging to each specific object in the image rather
than the edges of the entire image. This removes the need for further edge processing and
categorizes all of the present data into classes using a modification of the Otsu thresholding
method: a spatial metric is applied and the local minima of the partial derivative of an n-modal
distribution are located. The algorithm can be applied to any list of points, such as an
entire image or a region of an image. It returns distinct information for each object in the
image, allowing further processing to determine other characteristics of the object, such as
its orientation.
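As a rough illustration of the thresholding idea, the sketch below implements classic bimodal Otsu thresholding on a flat list of depth values. The paper's method extends this to n-modal distributions with a spatial metric; that extension, and all names here (`otsu_threshold`, the bin count), are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: bimodal Otsu thresholding over a list of
# depth values. The paper's n-modal, spatially-aware variant is not
# reproduced here.

def otsu_threshold(depths, bins=256):
    """Return the depth threshold that maximizes between-class variance."""
    lo, hi = min(depths), max(depths)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for d in depths:
        hist[min(int((d - lo) / width), bins - 1)] += 1

    total = len(depths)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(bins):
        w_bg += hist[t]            # weight of the "background" class
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # weight of the "foreground" class
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 1) * width

# Usage: separate near points from far points in a toy depth list.
near = [0.9, 1.0, 1.1, 1.05]
far = [3.0, 3.1, 2.95, 3.2]
t = otsu_threshold(near + far)
cluster = [d for d in near + far if d < t]
```

A per-object segmentation in the spirit of the abstract would repeat this split across the modes of the depth distribution, assigning each resulting class of points to one object.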


@misc{monahan2016depth,
  title = {Depth Image Segmentation Using Statistical Methods},
  author = {Monahan, Connor},
  note = {preprint on webpage at \url{}},
  year = {2016}
}


Local Positioning System with Multiple Cameras

Paper presented to the Junior Science, Engineering, and Humanities Symposium at Maryville University in 2015.


This paper presents the process of obtaining, filtering, and tracking targets to determine the
camera's relative position. The program uses architecture-specific processor features to optimize
performance and provide consistent results immediately. First, gray-scale images of the
environment are obtained from three infrared cameras, each with a 120-degree field of view.
Next, the images are passed through a series of image-processing algorithms from the OpenCV
library, such as morphological transformations and binary topological analysis, to
identify the targets in the image. This converts each 1080p single-channel matrix into vectors of
points called contours. A series of tests then removes potential noise and error from the
data. The program uses vectors and matrices to store the intermediate positioning and distance
calculations. At this point the information from the cameras is combined in a GPS-style
calculation to find the local position of the robot relative to the field. Finally, this
information is used to solve for the relative pose of the robot by solving a system of nonlinear
equations. The results provide positioning and motion information for the robot relative to the
static environment.
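To make the GPS-style positioning step concrete, here is a minimal sketch of 2-D trilateration from ranges to three known targets. The landmark coordinates, the function name `trilaterate`, and the particular linearization are illustrative assumptions rather than the paper's exact multi-camera formulation.

```python
import math

# Hedged sketch: 2-D trilateration from three range measurements.
# Subtracting the first range equation (x - xi)^2 + (y - yi)^2 = di^2
# from the other two cancels the quadratic terms, leaving two linear
# equations solved here with Cramer's rule.

def trilaterate(landmarks, dists):
    """Solve for (x, y) given three landmarks (xi, yi) and ranges di."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # zero only if the landmarks are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Usage: a robot at (1, 1) measuring ranges to three hypothetical targets.
pos = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)],
                  [math.sqrt(2), math.sqrt(10), math.sqrt(5)])
```

In the paper's setting the ranges would come from the three cameras' target measurements, and the recovered position would then feed the nonlinear pose solve described above.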


@misc{monahan2015positioning,
  title = {Local Positioning System with Multiple Cameras},
  author = {Monahan, Connor},
  note = {preprint on webpage at \url{}},
  year = {2015}
}
