MegaTag — Robot Localization with AprilTags

What Is MegaTag?

MegaTag is Limelight's robot-localization system: it combines AprilTag detections with IMU (gyro) data to estimate the robot's position on the field. The underlying idea is not Limelight-only — similar fusion exists with PhotonVision + WPILib pose estimators — but the name "MegaTag" refers specifically to Limelight's implementation.

Key point: Team 2890 uses PhotonVision, not Limelight. MegaTag documentation is useful for understanding the concept of vision-based odometry fusion, but the implementation uses PhotonVision’s pose estimator + WPILib’s SwerveDrivePoseEstimator.

MegaTag vs. MegaTag2

| Feature | MegaTag 1 | MegaTag 2 |
| --- | --- | --- |
| Data source | AprilTag vision only | AprilTag + IMU fusion |
| Accuracy | Good when tags visible | Better (gyro corrects drift) |
| Drift | Accumulates over time | Reduced by gyro correction |
| Tag visibility required | Yes | Yes (but less sensitive) |

MegaTag 2 fuses the robot's orientation from the IMU into the vision pose solve. Even if the vision data is slightly noisy or a tag is only partially visible, the known heading steadies the position estimate.
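One way to see why the known heading helps: if orientation comes from the gyro, a single tag sighting pins down position with simple geometry, with no need to recover rotation from the tag's apparent skew. A minimal 2-D sketch (class name, coordinates, and frame conventions invented for illustration; this is not Limelight's API):

```java
// Hypothetical sketch: with heading known from the gyro, one tag at a
// known field position fixes the robot's (x, y) directly.
public class HeadingLockedPose {
    /**
     * tagX, tagY   known field coordinates of the AprilTag (meters)
     * headingRad   robot heading from the gyro (radians, field-relative)
     * dx, dy       tag offset measured by the camera in the robot frame
     *              (dx = forward, dy = left, meters)
     * Returns {robotX, robotY}.
     */
    public static double[] solve(double tagX, double tagY,
                                 double headingRad, double dx, double dy) {
        // Rotate the robot-frame offset into the field frame, then step
        // back from the tag's known field position.
        double cos = Math.cos(headingRad), sin = Math.sin(headingRad);
        double fieldDx = dx * cos - dy * sin;
        double fieldDy = dx * sin + dy * cos;
        return new double[] { tagX - fieldDx, tagY - fieldDy };
    }

    public static void main(String[] args) {
        // Tag at (16.0, 5.5); robot faces it (heading 0) from 2 m away.
        double[] p = solve(16.0, 5.5, 0.0, 2.0, 0.0);
        System.out.printf("robot at (%.1f, %.1f)%n", p[0], p[1]); // (14.0, 5.5)
    }
}
```

A noisy heading would corrupt that rotation step, which is why MegaTag 2 wants a well-calibrated gyro feeding it.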

How It Works (Conceptual)

AprilTag camera → Robot pose (x, y, rotation)
        ↓
Gyro heading → Corrections / steadying
        ↓
Pose Estimator → Combines:
  - Wheel odometry (always drifting)
  - Vision pose (accurate but intermittent)
        ↓
Final robot position (fused)
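The pipeline above can be reduced to a toy 1-D simulation (all constants invented for illustration): odometry integrates every loop and slowly over-counts, while an occasional vision fix blends the estimate back toward truth. The blend weight stands in for the real estimator's standard-deviation tuning.

```java
// Toy 1-D sketch of odometry/vision fusion: odometry drifts each step;
// a vision fix every few steps pulls the estimate back toward truth.
public class FusionSketch {
    /** Blend a vision pose into the current estimate.
     *  visionTrust near 1.0 = trust vision heavily; near 0.0 = ignore it. */
    public static double fuse(double estimate, double visionPose, double visionTrust) {
        return estimate + visionTrust * (visionPose - estimate);
    }

    public static void main(String[] args) {
        double truth = 0.0, estimate = 0.0;
        double driftPerStep = 0.02;            // invented odometry bias (m/step)
        for (int step = 1; step <= 50; step++) {
            truth += 0.10;                     // robot really moves 10 cm
            estimate += 0.10 + driftPerStep;   // odometry over-counts
            if (step % 10 == 0) {              // tag visible every 10th step
                estimate = fuse(estimate, truth, 0.8);
            }
        }
        // Without the vision fixes the error would be 50 * 0.02 = 1.0 m;
        // with them it settles near 0.05 m.
        System.out.printf("truth=%.2f estimate=%.2f error=%.3f%n",
                truth, estimate, Math.abs(estimate - truth));
    }
}
```

WPILib's `SwerveDrivePoseEstimator` does the same thing properly: per-axis standard deviations instead of one scalar weight, and timestamp-aware corrections instead of instant ones.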

For Team 2890 (PhotonVision)

Team 2890 runs PhotonVision on a Raspberry Pi, reading the AprilTags placed around the field. The equivalent fusion happens in WPILib:

// SwerveDrivePoseEstimator fuses:
// 1. Wheel odometry (always running, drifts over time)
// 2. Vision measurements (accurate when tags are visible)

SwerveDrivePoseEstimator estimator = new SwerveDrivePoseEstimator(
    kinematics,
    gyro.getRotation2d(),          // IMU heading
    modulePositions,               // wheel encoder positions
    new Pose2d(x, y, rotation)     // initial pose estimate
);

// Every loop: advance the odometry side of the filter
estimator.update(gyro.getRotation2d(), modulePositions);

// When PhotonPoseEstimator produces a pose, feed it in. Use the vision
// result's own capture timestamp, not the current time; the estimator
// rewinds to that moment internally to compensate for camera latency.
photonPoseEstimator.update(camera.getLatestResult()).ifPresent(est ->
    estimator.addVisionMeasurement(
        est.estimatedPose.toPose2d(),
        est.timestampSeconds
    )
);

The Key Concept for Students

Robot localization = knowing where you are on the field.

AprilTags give you a reference point. The gyro gives you orientation. Fusing them gives you accurate, stable position tracking even when:

  • Only one tag is visible
  • Tags are partially obscured
  • Robot is moving fast

This is critical for:

  • Autonomous paths that need to return to the same spot
  • Field-relative driving with joystick
  • Score estimation / match strategy

Common Issues and Troubleshooting

| Problem | Likely cause | Fix |
| --- | --- | --- |
| Pose always offset in same direction | Camera calibration wrong | Check camera tilt, height, direction |
| Pose jumps when tag visible | Vision trusting a single reading | Add filtering, check timing |
| No pose when tags visible | Camera not processing tags | Check PhotonVision pipeline |
| Odometry drifts despite vision | Vision not being added to estimator | Check addVisionMeasurement call |
| Gyro disagrees with vision | Gyro calibration issue | Re-calibrate gyro, check wiring |
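The "check timing" advice deserves a note: a camera frame describes where the robot *was*, so the correction has to be applied at that past moment and the odometry since then replayed on top of it. WPILib's estimator buffers poses internally for exactly this reason. A toy 1-D sketch of the idea (class name and numbers invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of latency compensation: odometry deltas are kept in a
// history; a vision fix stamped in the past replaces the pose at that
// time, and the deltas recorded since then are re-applied.
public class LatencyBuffer {
    private record Sample(double time, double delta) {}
    private final Deque<Sample> history = new ArrayDeque<>();
    private double pose = 0.0;

    public void updateOdometry(double time, double delta) {
        pose += delta;
        history.addLast(new Sample(time, delta));
    }

    /** Apply a vision pose captured at captureTime (possibly in the past). */
    public void addVisionMeasurement(double visionPose, double captureTime) {
        // Replay only the odometry recorded AFTER the frame was captured,
        // on top of the (fully trusted, for this sketch) vision pose.
        double replayed = 0.0;
        for (Sample s : history) {
            if (s.time() > captureTime) replayed += s.delta();
        }
        pose = visionPose + replayed;
    }

    public double getPose() { return pose; }

    public static void main(String[] args) {
        LatencyBuffer est = new LatencyBuffer();
        est.updateOdometry(0.02, 0.10);   // t = 0.02 s, moved 10 cm
        est.updateOdometry(0.04, 0.10);
        est.updateOdometry(0.06, 0.10);
        // Frame captured at t = 0.03 s says the pose was 0.12 m then;
        // the two later odometry deltas still count.
        est.addVisionMeasurement(0.12, 0.03);
        System.out.printf("%.2f%n", est.getPose()); // prints 0.32
    }
}
```

Passing the current time instead of the capture time makes stale frames look current, which is one way to produce the "pose jumps when tag visible" symptom.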

Connection to Training

For students learning swerve odometry:

  1. Wheel odometry — always tracking, always drifting
  2. Vision pose — accurate when AprilTags visible
  3. Pose estimator — fuses both for best of both worlds
  4. Field constants — AprilTag positions, field dimensions must be correct

The MegaTag concept (vision + gyro fusion) is the same regardless of whether you use Limelight or PhotonVision. Understanding it helps students debug pose estimation issues.
