Overview

During high school, I was part of FIRST Robotics Competition Team 5690 (SubZero Robotics). As programming lead and driver, I worked across autonomous behavior, vision tracking, subsystem architecture, and teleop automation reliability.

The 2024 season was our team's 10th year competing. We finished 8th overall at the Minnesota State Championships with a 7-1-0 qualification record, while maintaining strong runtime stability across events.

Robot Context

The 2024 robot used a REV MAXSwerve drivetrain with L3 gearing, a dual-sided under-the-bumper intake, fixed shooters on both sides, an amp-assist arm, and two climbers. That hardware created a fairly demanding software problem: the robot needed to intake from either direction, stage notes centrally, choose the correct shooter path, automate scoring motions, and still stay predictable for the driver.

Because of that, the software wasn't just a set of subsystem commands. It became the coordination layer between drivetrain pose, game-piece detection, beam-break sensors, shooter state, arm position, climber position, LEDs, and operator controls.

My Scope

  • Designed and maintained the C++ control codebase using WPILib.
  • Led autonomous system development and safety hardening for match-ready routines.
  • Built and tuned vision-assisted aiming/intake features using AprilTags and Note detection.
  • Supported team SDLC through issue tracking, sprint planning, PR reviews, and release flow.

Autonomous System

PathPlanner + Tagged Auto Selection

We used PathPlanner to build autos early, then expanded our auto framework to support tag-based filtering. Each routine was tagged by attributes such as piece count, distance, and starting position. At runtime, grouped choosers filtered the final auto list so we could select specific routines quickly under match pressure.

This mattered more as the auto library grew. Instead of relying on long chooser names and driver memory, the dashboard could narrow routines by practical match criteria like starting location, number of notes, and distance from the speaker.
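The grouping idea can be sketched in plain C++. The types and field names below are illustrative only; the real code built these filters on top of WPILib chooser widgets rather than free functions:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical tag set describing one autonomous routine.
struct AutoRoutine {
  std::string name;
  int noteCount;         // how many Notes the routine scores
  std::string startPos;  // e.g. "Amp", "Center", "Source"
};

// Narrow the full routine library to entries matching the operator's
// current chooser selections.
std::vector<AutoRoutine> FilterAutos(const std::vector<AutoRoutine>& all,
                                     int minNotes,
                                     const std::string& startPos) {
  std::vector<AutoRoutine> out;
  std::copy_if(all.begin(), all.end(), std::back_inserter(out),
               [&](const AutoRoutine& a) {
                 return a.noteCount >= minNotes && a.startPos == startPos;
               });
  return out;
}
```

The payoff is that each chooser stays short: the operator picks criteria, not one name out of dozens.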

  • Executed a 3-note auto successfully in all matches at our first regional.
  • Added a safety wrapper around PPLib path loading after seeing an alliance partner fail due to path deploy issues.

Debugging "Baby Auto"

We diagnosed an intermittent issue where the robot physically traveled shorter distances than odometry reported. The root cause traced back to Spark MAX config erase/reflash behavior in startup code; removing the config-erase path fixed the issue, and it never recurred.

The lesson was that robot initialization code has to be treated like a reliability-critical path. A configuration call that looks harmless in example code can become a match-affecting failure if the motor controller only partially applies its settings.

Vision, Targeting, and Driver Assist

AprilTag Pose Reliability

We ran dual Limelights with PhotonVision for pose estimation. An important fix came from correcting track width and wheelbase values (the configured robot dimensions were off by about 2 inches in each direction), which resolved large pose errors during rotation. We also tuned PhotonVision iterator settings to improve long-distance pose quality at the cost of output rate, then balanced that tradeoff with multi-camera usage.

Note Detection Pipeline

For game-piece acquisition, we used a Limelight 2+ with a Coral TPU and fused multiple distance/heading strategies from bounding boxes: center-angle trigonometry, area-based calibration, and width-based calibration. We selected methods by distance regime and fused results to generate reliable Note poses for on-the-fly pathfinding.
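A simplified sketch of two of those strategies and the distance-regime blend. All constants here are made-up placeholders, not our real calibration values, and the crossover thresholds are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Illustrative calibration constants (not real values).
constexpr double kCamHeightMeters = 0.5;   // lens height above the floor
constexpr double kCamPitchRad     = -0.35; // downward camera tilt
constexpr double kNoteWidthMeters = 0.36;  // Note outer diameter
constexpr double kFocalPx         = 600.0; // approximate focal length in pixels

// Strategy 1: vertical-angle trigonometry to a floor-level target.
double DistFromPitch(double tyRad) {
  return kCamHeightMeters / std::tan(-(kCamPitchRad + tyRad));
}

// Strategy 2: pinhole width model from the bounding box.
double DistFromWidth(double boxWidthPx) {
  return kNoteWidthMeters * kFocalPx / boxWidthPx;
}

// Blend by distance regime: trust the angle model up close (where the
// Note subtends a large vertical angle), the width model far away, and
// average them in the crossover band.
double FuseDistance(double tyRad, double boxWidthPx) {
  double dAngle = DistFromPitch(tyRad);
  double dWidth = DistFromWidth(boxWidthPx);
  if (dAngle < 1.0) return dAngle;
  if (dAngle > 3.0) return dWidth;
  return 0.5 * (dAngle + dWidth);
}
```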

For practical teleop control, we added a PID mode that keeps the Note centered in the image while preserving driver translation control. This avoided feedback loops caused by noisy full-pose correction while moving.
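A minimal sketch of that assist, reduced to a P-only controller on the camera's horizontal offset; the gain and clamp values are illustrative, and the real code fed the result into the swerve drive request:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical drive request: driver keeps translation, assist owns rotation.
struct ChassisRequest {
  double vx, vy, omega;
};

constexpr double kP        = 0.05;  // illustrative gain, rad/s per degree
constexpr double kMaxOmega = 3.0;   // rad/s rotation clamp

// Rotate to drive the Note's horizontal image offset (tx) to zero while
// passing driver translation straight through.
ChassisRequest CenterOnNote(double txDeg, double driverVx, double driverVy) {
  double omega = std::clamp(-kP * txDeg, -kMaxOmega, kMaxOmega);
  return {driverVx, driverVy, omega};
}
```

Because only the rotation axis is closed-loop, noisy detections wiggle the heading slightly instead of fighting the driver's translation.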

Pose/Angle-Based Aiming

Object detection solved aiming at visible Notes, but scoring required aiming at fixed field targets even when the driver could not easily judge angle or distance. We treated the whole swerve robot like a turret: given the current field pose and a target pose, the drive system could rotate toward the correct heading while still blending in driver translation input.

The aiming abstraction accepted field-relative targets, robot-relative angles, and fixture locations for known scoring zones. Each fixture described where the robot should begin aiming, how large that zone was, what target it should face, and optionally when it was close enough to trigger the scoring command. This made speaker, podium, subwoofer, amp, and Note aiming share the same control pattern instead of becoming separate one-off features.
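The core of the turret-style abstraction is just pose geometry. A condensed sketch with hypothetical Fixture fields; the real code used WPILib pose types and richer zone definitions:

```cpp
#include <cassert>
#include <cmath>

// Minimal pose; real code would use frc::Pose2d.
struct Pose2d { double x, y, headingRad; };

// Heading the chassis must face so its shooter points at the target,
// treating the whole swerve robot as a turret.
double AimHeading(const Pose2d& robot, const Pose2d& target) {
  return std::atan2(target.y - robot.y, target.x - robot.x);
}

// A fixture defines where aiming begins and, optionally, when the robot
// is close enough to auto-trigger the scoring command.
struct Fixture {
  Pose2d target;      // what to face
  Pose2d zoneCenter;  // where aiming starts
  double zoneRadius;  // start aiming inside this radius
  double fireRadius;  // optionally trigger scoring inside this one
};

bool InZone(const Pose2d& robot, const Fixture& f) {
  return std::hypot(robot.x - f.zoneCenter.x,
                    robot.y - f.zoneCenter.y) <= f.zoneRadius;
}
```

Speaker, podium, subwoofer, amp, and Note aiming then differ only in the fixture data, not in control code.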

Joystick and Driver Feel

We also tuned low-speed driver control. Early in the season the robot felt difficult to control near the corners of the joysticks because the deadzone was effectively square. Switching to a radial deadzone made slow movement smoother, and later moving to hall-effect joysticks let us reduce the deadzone further.
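The difference between the two deadzone shapes comes down to clipping per axis versus clipping by stick magnitude. A sketch of the radial version, with rescaling so full deflection still maps to 1.0:

```cpp
#include <cassert>
#include <cmath>

struct Stick { double x, y; };

// Radial deadzone: cut by overall stick magnitude rather than per axis,
// so diagonal and straight inputs behave identically, then rescale the
// remaining range back to [0, 1].
Stick RadialDeadzone(Stick in, double deadzone) {
  double mag = std::hypot(in.x, in.y);
  if (mag < deadzone) return {0.0, 0.0};
  double scaled = (mag - deadzone) / (1.0 - deadzone);  // 0..1
  return {in.x / mag * scaled, in.y / mag * scaled};
}
```

A square deadzone applied per axis leaves dead bands along each axis even when the combined stick magnitude is well past the threshold, which is exactly the corner mushiness we were feeling.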

Architecture and Automation

Reusable Subsystem Abstractions

We developed reusable single-axis mechanism infrastructure around a PID motor controller wrapper and typed-unit interfaces. This reduced per-mechanism implementation overhead for arm/climber features and improved safety through shared soft-limit handling.
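The shared soft-limit idea, reduced to its essentials. The class here is hypothetical; the real wrapper also carried typed units and the PID motor-controller handle:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical single-axis mechanism base: every arm/climber setpoint
// passes through the same soft-limit clamp before it can reach the
// motor controller's closed-loop control.
class SingleAxisMechanism {
 public:
  SingleAxisMechanism(double minPos, double maxPos)
      : m_min(minPos), m_max(maxPos) {}

  // Clamp requested positions into the mechanism's safe range.
  double SetTarget(double requested) {
    m_target = std::clamp(requested, m_min, m_max);
    return m_target;
  }

  double Target() const { return m_target; }

 private:
  double m_min, m_max, m_target = 0.0;
};
```

Putting the clamp in one base class meant no individual mechanism could forget its limits.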

Teleop State Machine

A major 2024 goal was automation in teleop. We implemented 18 distinct states mapped to command compositions, with safe transition checks, LED state feedback, timeout guards, and immediate cancel behavior from driver override inputs. We used deferred command patterns to allow state-dependent behavior from fixed bindings.

The state machine let the operator trigger complex flows with one button press: drive toward source locations, score in the amp, score from podium or subwoofer positions, climb, run automatic intake, or chain an intake sequence into driving back and scoring. Each automatic action returned to a known default state when complete, and any driver movement could cancel automation immediately.
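Condensed to a toy version with a handful of states, the transition logic looked roughly like this. State names and guards are illustrative, not our exact 18 states:

```cpp
#include <cassert>

enum class RobotState { Idle, Intaking, ScoreAmp, ScoreSpeaker, Climbing };

struct Inputs {
  bool hasNote;       // beam break says a Note is staged
  bool driverMoving;  // any stick deflection cancels automation
};

// A requested state is entered only when its guard passes, and driver
// motion always wins with an immediate return to the default state.
RobotState NextState(RobotState current, RobotState requested,
                     const Inputs& in) {
  if (in.driverMoving) return RobotState::Idle;  // immediate cancel
  switch (requested) {
    case RobotState::Intaking:
      return in.hasNote ? current : RobotState::Intaking;  // no double intake
    case RobotState::ScoreAmp:
    case RobotState::ScoreSpeaker:
      return in.hasNote ? requested : current;  // need a Note to score
    default:
      return requested;
  }
}
```

The real implementation mapped each state to a WPILib command composition rather than a bare enum, but the guard-then-transition shape was the same.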

Command Composition and Intake Complexity

The intake was one of the clearest examples of why command composition mattered. Because the robot could intake from multiple directions, feed two shooters, and stage Notes through beam-break sensors, one user-facing action was actually a sequence of smaller guarded actions. Commands waited for beam breaks, motor velocity thresholds, and timeouts instead of assuming the mechanism was always in the expected state.
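The guarded-step pattern can be sketched as a tiny tick-based sequencer. Step and Sequence here are hypothetical stand-ins; the real code used WPILib command compositions with wait-until conditions and timeout decorators:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// One guarded step: run an action until a condition trips or a timeout
// expires (never wait forever on a sensor).
struct Step {
  std::function<void()> action;
  std::function<bool()> doneWhen;  // e.g. beam break tripped
  double timeoutSec;
};

// One user-facing "intake" is really an ordered list of guarded steps.
class Sequence {
 public:
  explicit Sequence(std::vector<Step> steps) : m_steps(std::move(steps)) {}

  // Advance by dt seconds; returns true once every step has finished.
  bool Tick(double dt) {
    if (m_index >= m_steps.size()) return true;
    Step& s = m_steps[m_index];
    s.action();
    m_elapsed += dt;
    if (s.doneWhen() || m_elapsed >= s.timeoutSec) {
      ++m_index;
      m_elapsed = 0.0;
    }
    return m_index >= m_steps.size();
  }

 private:
  std::vector<Step> m_steps;
  std::size_t m_index = 0;
  double m_elapsed = 0.0;
};
```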

This also exposed failure modes. At one regional, fresh Notes behaved differently than our practice Notes and staged higher than expected in the shooter. Combined with a timeout issue that kept shooter wheels spinning between shots, autos could unintentionally fire staged Notes. The eventual workaround was to backfeed the shooters at a very low velocity when they weren't commanded to shoot, letting the wheels wind down and keeping Notes in place.

Custom Hardware Integration

We integrated custom electronics including the ConnectorX board for state signaling and a keypad for triggering state-machine actions. We also worked through RoboRIO lockups linked to notifier-thread priority and I2C interactions, then stabilized the system by changing communication behavior and replacing NavX with a CAN-based Pigeon 2.

The keypad was intentionally not direct mechanism control. It appeared to the driver station as an Xbox-style controller over USB and triggered high-level robot states instead. This kept operator input aligned with the state-machine model and avoided splitting automation logic across several unrelated control paths.

Simulation, Logging, and SDLC

We used WPILib simulation and AdvantageScope replay heavily to validate command logic and tune assists before robot access windows. In one case, a driver-assist PID was tuned fully in sim and deployed without additional changes on hardware.

Logging was treated as part of the architecture. We used interchangeable logger implementations that could write structured values to either stdout or SmartDashboard: strings, numbers, booleans, poses, and sendable objects with importance levels. That made it easier to inspect the same kind of data whether debugging locally, at practice, or after a match.
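The interchangeable-logger idea, reduced to a sketch with only a stdout sink and a numeric log method. Names are hypothetical; the robot-side implementation wrote to SmartDashboard instead and covered the other value types:

```cpp
#include <cassert>
#include <iostream>
#include <string>

enum class Importance { Debug, Info, Critical };

// One interface, multiple sinks: subsystems log against Logger and never
// care whether output lands on stdout or the dashboard.
class Logger {
 public:
  virtual ~Logger() = default;
  virtual void LogNumber(const std::string& key, double value,
                         Importance level) = 0;
};

class StdoutLogger : public Logger {
 public:
  void LogNumber(const std::string& key, double value,
                 Importance level) override {
    if (level == Importance::Debug) return;  // filter noisy values
    lastLine = key + "=" + std::to_string(value);
    std::cout << lastLine << "\n";
  }
  std::string lastLine;  // exposed for inspection in this sketch
};
```

Swapping the sink rather than the call sites is what made the same values inspectable locally, at practice, and after matches.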

AdvantageScope replay was especially useful for diagnosing pose-estimation and intaking problems. We could review recorded robot behavior after the fact instead of depending only on driver memory or what someone happened to notice from the sideline.

Development workflow followed an agile structure with GitHub Issues, sprint boards, roadmap planning, PR review requirements, and competition release branching. This reduced regressions and improved delivery predictability during events.

Issues were created immediately after kickoff for subsystem skeletons, auto paths, features, and bug fixes. The team periodically pointed, prioritized, assigned, and sorted the backlog. Pull requests required review from another senior student or mentor, and competition changes were routed through release branches before being merged back into main.

Measured Outcomes

  • 8th overall finish at Minnesota State Championships with a 7-1-0 qualification record.
  • Zero code changes required at our first regional and fully stable teleop operation at our second.
  • Reliable autonomous + vision-driven features deployed in competition without runtime failures.
  • Faster feature iteration through simulation, logging, and structured review workflow.