Hybrid Pixel-Superpixel Structures for Enhanced Image Segmentation: Integrating Boundary Information in Deep Learning Models
Date: 2025-01-01
Creator: Jack Roberts
Access: Open access
- This project explores novel approaches to image segmentation using U-Net, leveraging superpixels to enhance accuracy. The first part investigates augmenting standard image inputs by encoding and integrating superpixel information, including an extension that reintroduces this information throughout the encoder. While results show that these methods can offer consistent improvements over the baseline, the gains are modest and suggest room for further optimization. The second part introduces a hybrid data structure, the Superpixel-Integrated Grid (SIGrid), which embeds superpixel boundary, shape, and color descriptors into a regular n × n grid. SIGrid enables more efficient training on smaller architectures while achieving noticeably higher segmentation accuracy, highlighting its potential as a lightweight and effective input representation. The code developed for this project can be found at: https://github.com/JackRobs25/Honors
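  A minimal sketch of what a SIGrid-like input representation could look like, assuming SLIC superpixels with mean-colour and boundary-density descriptors pooled into a regular n × n grid. The descriptor choices, pooling, and function names here are illustrative assumptions, not the thesis's exact design (see the repository above for the actual implementation).

  ```python
  # Hypothetical SIGrid-style grid construction: paint each superpixel with its mean
  # colour, add a boundary channel, then downsample to a fixed n x n grid.
  import numpy as np
  from skimage.segmentation import slic, find_boundaries
  from skimage.transform import resize

  def sigrid_like(image: np.ndarray, n: int = 64, n_segments: int = 200) -> np.ndarray:
      """Return an (n, n, 4) grid: mean superpixel colour (3 channels) + boundary density (1)."""
      segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
      mean_color = np.zeros_like(image, dtype=np.float32)
      for label in np.unique(segments):
          mask = segments == label
          mean_color[mask] = image[mask].mean(axis=0)   # fill each superpixel with its mean colour
      boundaries = find_boundaries(segments, mode="thick").astype(np.float32)
      features = np.concatenate([mean_color, boundaries[..., None]], axis=-1)
      # Downsampling to n x n averages the descriptors within each grid cell,
      # giving a compact input for a smaller segmentation network.
      return resize(features, (n, n), anti_aliasing=True)
  ```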
Robot Detection Using Gradient and Color Signatures
Date: 2016-05-01
Creator: Megan Marie Maher
Access: Open access
- Tasks which are simple for a human can be some of the most challenging for a robot. Finding and classifying objects in an image is a complex computer vision problem that computer scientists are constantly working to solve. In the context of the RoboCup Standard Platform League (SPL) Competition, in which humanoid robots are programmed to autonomously play soccer, identifying other robots on the field is an example of this difficult computer vision problem. Without obstacle detection in RoboCup, the robotic soccer players are unable to smoothly move around the field and can be penalized for walking into another robot. This project aims to use gradient and color signatures to identify robots in an image as a novel approach to visual robot detection. The method, "Fastgrad", is presented and analyzed in the context of the Bowdoin College Northern Bites codebase and then compared to other common methods of robot detection in RoboCup SPL.
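  A rough illustration, not the Fastgrad algorithm itself, of how gradient and colour signatures can be combined to score a candidate image region for robot-like content. The thresholds, the white-body colour range, and the Python/OpenCV framing (the actual work lives in the Northern Bites C++ codebase) are all assumptions.

  ```python
  # Hypothetical gradient + colour signature for a candidate region of a camera frame.
  import cv2
  import numpy as np

  def region_signature(bgr_region: np.ndarray) -> tuple[float, float]:
      """Return (strong-gradient pixel fraction, white-colour pixel fraction) for a region."""
      gray = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY)
      gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
      gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
      grad_density = float(np.mean(np.hypot(gx, gy) > 50))    # fraction of strong-gradient pixels
      hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
      white = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))   # assumed range for white robot bodies
      white_fraction = float(np.mean(white > 0))
      return grad_density, white_fraction

  def looks_like_robot(bgr_region: np.ndarray) -> bool:
      grad_density, white_fraction = region_signature(bgr_region)
      return grad_density > 0.15 and white_fraction > 0.4     # illustrative thresholds only
  ```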
Exploring the Effect of Core Tactics and Demographics on Squash Gameplay Patterns Using Computer Vision
Date: 2025-01-01
Creator: Abhiroop Reddy Nagireddygari
Access: Open access
- This paper presents a computer vision system for analyzing common tactical and training patterns in squash using player locations and movement dynamics. Leveraging convolutional neural networks (CNNs) such as YOLO and TrackNet, we extract player coordinates on a squash court through a lightweight, single-camera framework. Match footage and detections are segmented by gender, skill level, and match phase to enable contextual comparisons. From 2D coordinates, we generate heatmaps of player locations, court coverage percentages, and distance-over-time graphs to visualize movement tendencies. Our results show that women demonstrate greater ball control and accuracy than men across all levels, while professional players exhibit more aggressive court usage than amateurs. We also identify that games 2 and 3 are the most physically demanding, highlighting a balance between slow starts and fatigue.
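  A minimal sketch of turning tracked 2D player coordinates into a court-occupancy heatmap and a coverage percentage, as described above. The bin counts and the use of standard squash court dimensions (roughly 9.75 m × 6.4 m) are assumptions; the paper's exact binning and metrics may differ.

  ```python
  # Hypothetical occupancy heatmap and coverage metric from tracked (x, y) positions.
  import numpy as np

  def court_heatmap(xy: np.ndarray, bins: tuple[int, int] = (32, 21),
                    court_size: tuple[float, float] = (9.75, 6.4)):
      """xy: (N, 2) array of player positions in metres. Returns (heatmap, coverage %)."""
      hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                                  range=[[0, court_size[0]], [0, court_size[1]]])
      heatmap = hist / max(hist.sum(), 1)                      # normalise to an occupancy distribution
      coverage = 100.0 * np.count_nonzero(hist) / hist.size    # % of court cells visited at least once
      return heatmap, coverage
  ```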