Class SwerveState
This class tracks two related poses:
- Odometry pose - derived purely from wheel encoder deltas and gyro yaw
- Estimated pose - a filtered pose that incorporates vision corrections
A time-interpolated pose buffer is maintained to allow vision measurements with latency to be applied retroactively at the correct timestamp.
This implementation resembles a lightweight pose estimator rather than a full Kalman filter, using configurable standard deviations to weight vision updates against odometry uncertainty.
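The time-interpolated buffer and retroactive vision correction described above can be sketched in plain Java. This is an illustrative model, not this class's actual implementation: poses are stored by timestamp in a `TreeMap`, a latent vision measurement is blended in at its capture time, and the resulting correction is shifted onto every newer buffered sample.

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch (not the library's actual code) of a time-interpolated
// pose buffer with retroactive vision correction. The Pose record and
// scalar gain are simplifications of the real Pose2d / std-dev weighting.
class PoseBuffer {
    // Simplified 2D pose: x, y in meters, theta in radians.
    public record Pose(double x, double y, double theta) {}

    private final TreeMap<Double, Pose> buffer = new TreeMap<>();
    private final double historySeconds;

    public PoseBuffer(double historySeconds) {
        this.historySeconds = historySeconds;
    }

    /** Records an odometry pose and trims samples older than the history window. */
    public void addSample(double timestamp, Pose pose) {
        buffer.put(timestamp, pose);
        buffer.headMap(timestamp - historySeconds).clear();
    }

    /** Linearly interpolates the buffered pose at the given timestamp. */
    public Pose sample(double timestamp) {
        Map.Entry<Double, Pose> floor = buffer.floorEntry(timestamp);
        Map.Entry<Double, Pose> ceil = buffer.ceilingEntry(timestamp);
        if (floor == null) return ceil.getValue();
        if (ceil == null || floor.getKey().equals(ceil.getKey())) return floor.getValue();
        double t = (timestamp - floor.getKey()) / (ceil.getKey() - floor.getKey());
        Pose a = floor.getValue(), b = ceil.getValue();
        return new Pose(a.x() + t * (b.x() - a.x()),
                        a.y() + t * (b.y() - a.y()),
                        a.theta() + t * (b.theta() - a.theta()));
    }

    /**
     * Applies a latent vision measurement: blends toward the measured pose
     * at the capture timestamp (gain in [0, 1]) and shifts all newer
     * samples by the same correction, so the latest pose reflects the fix.
     */
    public void addVisionMeasurement(double timestamp, Pose measured, double gain) {
        Pose atCapture = sample(timestamp);
        double dx = gain * (measured.x() - atCapture.x());
        double dy = gain * (measured.y() - atCapture.y());
        double dtheta = gain * (measured.theta() - atCapture.theta());
        buffer.tailMap(timestamp).replaceAll(
            (ts, p) -> new Pose(p.x() + dx, p.y() + dy, p.theta() + dtheta));
    }

    public Pose latest() {
        return buffer.lastEntry().getValue();
    }
}
```

The key design point this models is that a vision frame captured 100 ms ago corrects the pose *as of 100 ms ago*, and only the resulting offset is propagated forward, rather than snapping the current pose to a stale measurement.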
Constructor Summary
Constructors
SwerveState(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw)
  Creates a new swerve state estimator.
Method Summary
Modifier and Type / Method and Description
void addOdometryObservation(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw, double timestamp)
  Updates odometry and pose estimates using swerve module encoders and an optional gyro measurement.
void addVisionObservation(Pose3d cameraPose, Transform3d robotToCamera, double translationStdDev, double rotationStdDev, double timestamp)
  Adds a vision measurement using an externally computed camera pose.
boolean addVisionObservation(CameraConstants camera, org.photonvision.targeting.PhotonPipelineResult pipelineResult)
  Adds a vision measurement from PhotonVision.
Pose2d getGlobalPoseEstimate()
  Returns the current best estimate of the robot's global field pose.
void overrideInit(Pose2d pose)
  Forcibly initializes the pose estimator using a known robot pose.
void resetPose(Pose2d pose)
  Resets the internal pose estimate to a known field pose.
void updateSpeeds(ChassisSpeeds speeds)
  Updates the robot's current chassis speeds.
Constructor Details
SwerveState
public SwerveState(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw)
Creates a new swerve state estimator.
- Parameters:
  wheelPositions - the initial swerve module positions used to seed odometry
  gyroYaw - the initial gyro yaw used to seed the odometry heading
Method Details
resetPose
public void resetPose(Pose2d pose)
Resets the internal pose estimate to a known field pose.

This method forces the underlying swerve odometry to the specified pose, effectively redefining the robot's position on the field. It should be used when the robot pose is known with high confidence, such as:
- At the start of autonomous
- After a field-aligned reset
- Following a trusted vision-based localization event

This method updates only the pose estimator and odometry state owned by SwerveState. It does not update any associated simulation state or drivetrain model. Most code should prefer Swerve.overridePose(java.util.function.Supplier<edu.wpi.first.math.geometry.Pose2d>) when resetting the robot pose, as that method keeps both the estimator and any simulated drivetrain pose consistent. Future odometry and vision updates will be applied relative to this new pose.
- Parameters:
  pose - the desired robot pose in field coordinates
addOdometryObservation
public void addOdometryObservation(SwerveModulePosition[] wheelPositions, Rotation2d gyroYaw, double timestamp)
Updates odometry and pose estimates using swerve module encoders and an optional gyro measurement.
- Parameters:
  wheelPositions - current swerve module positions
  gyroYaw - current robot yaw, if available
  timestamp - measurement timestamp in seconds
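The core of an odometry update like this can be illustrated with a deliberately simplified sketch (hypothetical helper, not this class's code): average the per-module displacement deltas in the robot frame, rotate the result into the field frame using the gyro yaw, and accumulate it into the pose. Real swerve odometry uses full inverse kinematics with module angles; here each module is assumed to already report a robot-frame (dx, dy) delta.

```java
// Illustrative odometry step only. Assumes moduleDeltas[i] = {dx, dy} of
// module i in the robot frame since the last update, and that the gyro
// yaw is trusted directly for heading.
class OdometrySketch {
    double x, y, theta; // field-frame pose estimate

    void addOdometryObservation(double[][] moduleDeltas, double gyroYaw) {
        double dxRobot = 0, dyRobot = 0;
        for (double[] d : moduleDeltas) {
            dxRobot += d[0];
            dyRobot += d[1];
        }
        dxRobot /= moduleDeltas.length;
        dyRobot /= moduleDeltas.length;
        // Rotate the robot-frame displacement into the field frame.
        x += dxRobot * Math.cos(gyroYaw) - dyRobot * Math.sin(gyroYaw);
        y += dxRobot * Math.sin(gyroYaw) + dyRobot * Math.cos(gyroYaw);
        theta = gyroYaw; // trust the gyro for heading
    }
}
```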
updateSpeeds
public void updateSpeeds(ChassisSpeeds speeds)
Updates the robot's current chassis speeds.

This method records the most recent robot-relative ChassisSpeeds command or measured velocity. The stored speeds may be used by future pose estimation or vision fusion logic to account for motion during sensor latency or to improve prediction accuracy.

This method does not directly affect the current pose estimate.
- Parameters:
  speeds - the current robot-relative chassis speeds
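The latency-compensation use mentioned above can be sketched: extrapolate the pose by the stored chassis speeds over the latency interval. This is a hypothetical helper under simplifying assumptions (robot-relative speeds, heading treated as constant over the interval), not a documented method of this class.

```java
// Sketch of using stored chassis speeds to account for sensor latency:
// extrapolate the field-frame pose forward by velocity * dt.
class LatencyCompSketch {
    /** pose = {x, y, theta}; speeds = {vx, vy, omega} in the robot frame. */
    static double[] extrapolate(double[] pose, double[] speeds, double dt) {
        double cos = Math.cos(pose[2]), sin = Math.sin(pose[2]);
        return new double[] {
            pose[0] + (speeds[0] * cos - speeds[1] * sin) * dt, // field-frame x
            pose[1] + (speeds[0] * sin + speeds[1] * cos) * dt, // field-frame y
            pose[2] + speeds[2] * dt                            // heading
        };
    }
}
```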
overrideInit
public void overrideInit(Pose2d pose)
Forcibly initializes the pose estimator using a known robot pose.

This method sets the internal odometry/estimator state directly and marks the estimator as initialized, bypassing the normal vision-based initialization path. It is intended for situations where a trusted global pose is already known (for example, at the start of autonomous) or when vision needs to be tested on a non-compliant field (for example, the practice field at district events).
After this method is called, subsequent vision updates will be treated as corrections rather than initialization measurements.
- Parameters:
  pose - the known robot pose in field coordinates to initialize the estimator with
addVisionObservation
public void addVisionObservation(Pose3d cameraPose, Transform3d robotToCamera, double translationStdDev, double rotationStdDev, double timestamp)
Adds a vision measurement using an externally computed camera pose.
- Parameters:
  cameraPose - estimated camera pose in field coordinates
  robotToCamera - transform from robot to camera frame
  translationStdDev - translation measurement standard deviation (meters)
  rotationStdDev - rotation measurement standard deviation (radians)
  timestamp - measurement timestamp in seconds
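The class description says vision updates are weighted against odometry uncertainty via configurable standard deviations. A common way to do this (a sketch of that idea, not necessarily this class's exact math) is to convert each standard deviation to a variance and blend with a variance-weighted gain, per axis:

```java
// Illustrative 1-D fusion of an odometry-predicted value with a vision
// measurement, weighted by their standard deviations.
class VisionFusionSketch {
    /**
     * Returns the fused estimate. gain -> 1 as the vision std dev shrinks
     * (trust vision), gain -> 0 as it grows (trust odometry).
     */
    static double fuse(double odomEstimate, double odomStdDev,
                       double visionMeasurement, double visionStdDev) {
        double odomVar = odomStdDev * odomStdDev;
        double visionVar = visionStdDev * visionStdDev;
        double gain = odomVar / (odomVar + visionVar);
        return odomEstimate + gain * (visionMeasurement - odomEstimate);
    }
}
```

This is why passing a large `translationStdDev` or `rotationStdDev` makes a vision measurement nearly inert, while a small one pulls the estimate strongly toward the camera pose.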
addVisionObservation
public boolean addVisionObservation(CameraConstants camera, org.photonvision.targeting.PhotonPipelineResult pipelineResult)
Adds a vision measurement from PhotonVision.

If the estimator has not yet been initialized, a valid multi-tag estimate will be used to seed the global pose.
- Parameters:
  camera - camera configuration constants
  pipelineResult - latest PhotonVision pipeline result
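The initialize-or-correct behavior this method describes can be sketched as a small state machine. All names and the scalar-gain correction here are hypothetical; the real method derives its estimate and weighting from the PhotonVision result and camera constants.

```java
// Sketch of the seeding logic implied by the description: the first valid
// multi-tag estimate initializes the global pose; later estimates are
// treated as weighted corrections.
class VisionSeedingSketch {
    boolean initialized = false;
    double[] pose = new double[3]; // {x, y, theta}

    /** Returns true if the observation was used (either to seed or to correct). */
    boolean addVisionObservation(double[] visionPose, boolean isMultiTag, double gain) {
        if (visionPose == null) {
            return false; // no usable estimate this cycle
        }
        if (!initialized) {
            if (!isMultiTag) {
                return false; // only trust multi-tag estimates for the initial seed
            }
            pose = visionPose.clone(); // seed the global pose directly
            initialized = true;
            return true;
        }
        // Already initialized: apply the measurement as a weighted correction.
        for (int i = 0; i < 3; i++) {
            pose[i] += gain * (visionPose[i] - pose[i]);
        }
        return true;
    }
}
```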
getGlobalPoseEstimate
Returns the current best estimate of the robot's global field pose.
- Returns:
  estimated robot pose in field coordinates