Class SwerveState

java.lang.Object
frc.robot.subsystems.swerve.util.SwerveState

@NullMarked public class SwerveState extends Object
Maintains and updates the robot's estimated global pose for a swerve drive by fusing wheel odometry, gyro measurements, and delayed vision updates.

This class tracks two related poses:

  • Odometry pose - derived purely from wheel encoder deltas and gyro yaw
  • Estimated pose - a filtered pose that incorporates vision corrections

A time-interpolated pose buffer is maintained to allow vision measurements with latency to be applied retroactively at the correct timestamp.
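The buffer described above can be sketched with a sorted map keyed by timestamp, linearly interpolating between the two nearest samples. This is an illustrative stand-in (the class name `PoseBuffer` and the `{x, y, headingRad}` array encoding are hypothetical), not the actual SwerveState internals:

```java
import java.util.Map;
import java.util.TreeMap;

/** Minimal sketch of a time-interpolated pose buffer (hypothetical names). */
class PoseBuffer {
    // timestamp (seconds) -> pose encoded as {x, y, headingRad}
    private final TreeMap<Double, double[]> samples = new TreeMap<>();

    void add(double timestamp, double[] pose) {
        samples.put(timestamp, pose);
    }

    /** Returns the pose at the given timestamp, linearly interpolated between neighbors. */
    double[] sample(double timestamp) {
        Map.Entry<Double, double[]> floor = samples.floorEntry(timestamp);
        Map.Entry<Double, double[]> ceil = samples.ceilingEntry(timestamp);
        if (floor == null) return ceil.getValue();
        if (ceil == null || floor.getKey().equals(ceil.getKey())) return floor.getValue();
        double t = (timestamp - floor.getKey()) / (ceil.getKey() - floor.getKey());
        double[] a = floor.getValue(), b = ceil.getValue(), out = new double[3];
        for (int i = 0; i < 3; i++) out[i] = a[i] + t * (b[i] - a[i]);
        return out;
    }
}
```

A delayed vision measurement stamped at time t can then be compared against `sample(t)` rather than the current pose, which is what makes retroactive correction possible.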

This implementation resembles a lightweight pose estimator rather than a full Kalman filter, using configurable standard deviations to weight vision updates against odometry uncertainty.
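The standard-deviation weighting amounts to a per-axis blend toward the vision measurement, with a gain derived from the two variances (larger vision standard deviation means a smaller correction). A minimal sketch, assuming a single scalar axis and hypothetical names:

```java
/** Sketch: blend an estimate toward a vision measurement by variance ratio. */
final class VisionWeighting {
    private VisionWeighting() {}

    /**
     * Returns the corrected estimate. gain = varEst / (varEst + varVision),
     * so gain = 0 ignores vision entirely and gain = 1 trusts it fully.
     */
    static double fuse(double estimate, double measurement,
                       double estimateStdDev, double visionStdDev) {
        double varEst = estimateStdDev * estimateStdDev;
        double varVision = visionStdDev * visionStdDev;
        double gain = varEst / (varEst + varVision);
        return estimate + gain * (measurement - estimate);
    }
}
```

With equal standard deviations the corrected estimate lands halfway between odometry and vision; a full Kalman filter would additionally propagate the covariance between updates, which this lightweight scheme does not.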

  • Constructor Details

    • SwerveState

      public SwerveState(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw)
      Creates a new swerve state estimator.
      Parameters:
      wheelPositions - the initial swerve module positions used to seed odometry
      gyroYaw - the initial gyro yaw used to seed odometry
  • Method Details

    • resetPose

      public void resetPose(Pose2d pose)
      Resets the internal pose estimate to a known field pose.

      This method forces the underlying swerve odometry to the specified pose, effectively redefining the robot's position on the field. It should be used when the robot pose is known with high confidence, such as:

      • At the start of autonomous
      • After a field-aligned reset
      • Following a trusted vision-based localization event

      This method updates only the pose estimator / odometry state owned by SwerveState. It does not update any associated simulation state or drivetrain model.

      Most code should prefer Swerve.overridePose(java.util.function.Supplier<edu.wpi.first.math.geometry.Pose2d>) when resetting the robot pose, as that method ensures both the estimator and any simulated drivetrain pose remain consistent.

      Future odometry and vision updates will be applied relative to this new pose.

      Parameters:
      pose - the desired robot pose in field coordinates
    • addOdometryObservation

      public void addOdometryObservation(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw, double timestamp)
      Updates odometry and pose estimates using swerve module encoders and an optional gyro measurement.
      Parameters:
      wheelPositions - current swerve module positions
      gyroYaw - current robot yaw, if available
      timestamp - measurement timestamp in seconds
    • updateSpeeds

      public void updateSpeeds(ChassisSpeeds speeds)
      Updates the robot's current chassis speeds.

      This method records the most recent robot-relative ChassisSpeeds command or measured velocity. The stored speeds may be used by future pose estimation or vision fusion logic to account for motion during sensor latency or to improve prediction accuracy.

      This method does not directly affect the current pose estimate.

      Parameters:
      speeds - the current robot-relative chassis speeds
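Using stored speeds to account for motion during sensor latency typically means propagating a pose forward by the robot-relative velocity for the latency interval. A sketch of that prediction step, using plain arrays in place of WPILib's Pose2d and ChassisSpeeds (the names here are hypothetical):

```java
/** Sketch: propagate a 2D pose forward by robot-relative speeds for dt seconds. */
final class PosePrediction {
    private PosePrediction() {}

    /** pose = {x, y, headingRad}; speeds = {vx, vy, omegaRadPerSec} in the robot frame. */
    static double[] predict(double[] pose, double[] speeds, double dt) {
        double cos = Math.cos(pose[2]), sin = Math.sin(pose[2]);
        return new double[] {
            // rotate the robot-relative velocity into the field frame, then integrate
            pose[0] + (speeds[0] * cos - speeds[1] * sin) * dt,
            pose[1] + (speeds[0] * sin + speeds[1] * cos) * dt,
            pose[2] + speeds[2] * dt
        };
    }
}
```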
    • overrideInit

      public void overrideInit(Pose2d pose)
      Forcibly initializes the pose estimator using a known robot pose.

      This method sets the internal odometry/estimator state directly and marks the estimator as initialized, bypassing the normal vision-based initialization path. It is intended for situations where a trusted global pose is already known (for example, at the start of autonomous) or when a vision system needs to be tested on a non-compliant field (for example, the practice field at district events).

      After this method is called, subsequent vision updates will be treated as corrections rather than initialization measurements.

      Parameters:
      pose - the known robot pose in field coordinates to initialize the estimator with
    • addVisionObservation

      public void addVisionObservation(Pose3d cameraPose, Transform3d robotToCamera, double translationStdDev, double rotationStdDev, double timestamp)
      Adds a vision measurement using an externally computed camera pose.
      Parameters:
      cameraPose - estimated camera pose in field coordinates
      robotToCamera - transform from robot to camera frame
      translationStdDev - translation measurement standard deviation (meters)
      rotationStdDev - rotation measurement standard deviation (radians)
      timestamp - measurement timestamp in seconds
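Recovering the robot pose from an externally computed camera pose means composing the camera's field pose with the inverse of the robot-to-camera transform (in WPILib terms, roughly `cameraPose.transformBy(robotToCamera.inverse())`). A self-contained 2D sketch of that composition, with poses and transforms encoded as hypothetical `{x, y, thetaRad}` arrays:

```java
/** Sketch: recover a robot pose from a field-relative camera pose (2D). */
final class CameraToRobot {
    private CameraToRobot() {}

    /** Returns cameraPose composed with the inverse of the robot->camera transform. */
    static double[] robotPose(double[] cameraPose, double[] robotToCamera) {
        // Invert the robot->camera transform: rotate by -theta, negate the translation.
        double ct = Math.cos(robotToCamera[2]), st = Math.sin(robotToCamera[2]);
        double ix = -(ct * robotToCamera[0] + st * robotToCamera[1]);
        double iy = -(-st * robotToCamera[0] + ct * robotToCamera[1]);
        double itheta = -robotToCamera[2];
        // Apply the inverted transform in the camera pose's frame.
        double c = Math.cos(cameraPose[2]), s = Math.sin(cameraPose[2]);
        return new double[] {
            cameraPose[0] + c * ix - s * iy,
            cameraPose[1] + s * ix + c * iy,
            cameraPose[2] + itheta
        };
    }
}
```

For example, a camera mounted one meter forward of the robot center that reports itself at the field origin places the robot one meter behind the origin.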
    • addVisionObservation

      public boolean addVisionObservation(CameraConstants camera, org.photonvision.targeting.PhotonPipelineResult pipelineResult)
      Adds a vision measurement from PhotonVision.

      If the estimator has not yet been initialized, a valid multi-tag estimate will be used to seed the global pose.

      Parameters:
      camera - camera configuration constants
      pipelineResult - latest PhotonVision pipeline result
      Returns:
      whether the vision measurement was accepted and applied to the estimate
    • getGlobalPoseEstimate

      public Pose2d getGlobalPoseEstimate()
      Returns the current best estimate of the robot's global field pose.
      Returns:
      estimated robot pose in field coordinates