MegaTag — Robot Localization with AprilTags
What Is MegaTag?
MegaTag is Limelight's robot localization system: it combines AprilTag detections with IMU (gyro) data to estimate the robot's position on the field. The underlying idea is not Limelight-only (similar approaches exist with PhotonVision plus WPILib's pose estimators), but the name "MegaTag" refers specifically to Limelight's implementation.
Key point: Team 2890 uses PhotonVision, not Limelight. MegaTag documentation is useful for understanding the concept of vision-based odometry fusion, but the implementation uses PhotonVision’s pose estimator + WPILib’s SwerveDrivePoseEstimator.
MegaTag vs. MegaTag2
| Feature | MegaTag 1 | MegaTag 2 |
|---|---|---|
| Data source | AprilTag vision only | AprilTag + IMU fusion |
| Accuracy | Good when tags visible | Better — gyro corrects drift |
| Drift | Accumulates over time | Reduced by gyro correction |
| Tag visibility required | Yes | Yes (but less sensitive) |
MegaTag 2 fuses robot orientation data from the IMU with vision pose. This means even if vision is slightly noisy or a tag is partially visible, the gyro steadies the reading.
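The gyro-steadying idea can be sketched as a simple complementary filter. This is a hypothetical illustration of the concept, not Limelight's actual algorithm: integrate the gyro's rate every loop, then nudge the estimate toward each (possibly noisy) vision heading.

```java
// Hypothetical sketch of gyro + vision heading fusion (not Limelight's code).
public class HeadingFuser {
    private double headingDeg;          // fused heading estimate
    private final double visionWeight;  // how strongly to trust each vision sample (0..1)

    public HeadingFuser(double initialHeadingDeg, double visionWeight) {
        this.headingDeg = initialHeadingDeg;
        this.visionWeight = visionWeight;
    }

    /** Predict step: integrate the gyro's angular rate over one loop period. */
    public void updateFromGyro(double rateDegPerSec, double dtSec) {
        headingDeg += rateDegPerSec * dtSec;
    }

    /** Correct step: pull the estimate partway toward the vision heading. */
    public void updateFromVision(double visionHeadingDeg) {
        headingDeg += visionWeight * (visionHeadingDeg - headingDeg);
    }

    public double getHeadingDeg() {
        return headingDeg;
    }
}
```

A low `visionWeight` means a single noisy vision frame barely moves the estimate, which is why a partially visible tag does not cause the heading to jump.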
How It Works (Conceptual)
AprilTag camera → Robot pose (x, y, rotation)
↓
Gyro heading → Corrections / steadying
↓
Pose Estimator → Combines:
- Wheel odometry (always drifting)
- Vision pose (accurate but intermittent)
↓
Final robot position (fused)
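The fusion in the diagram can be illustrated in one dimension with a hypothetical toy class (names and weights are made up for teaching, not from any library): odometry updates run every loop and accumulate error, while a vision fix, when available, pulls the estimate back toward ground truth.

```java
// Toy 1-D illustration of odometry + vision fusion (not a real WPILib class).
public class PoseFuser1d {
    private double xMeters; // fused position estimate along one axis

    public PoseFuser1d(double initialX) {
        this.xMeters = initialX;
    }

    /** Dead reckoning from wheel odometry: always runs, slowly drifts. */
    public void addOdometryDelta(double dxMeters) {
        xMeters += dxMeters;
    }

    /** Vision correction: only runs when an AprilTag is visible. */
    public void addVisionMeasurement(double visionX, double trust) {
        xMeters += trust * (visionX - xMeters);
    }

    public double getX() {
        return xMeters;
    }
}
```

WPILib's real pose estimators do the same thing in 2-D with Kalman-filter weighting instead of a fixed `trust` factor.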
For Team 2890 (PhotonVision)
Team 2890 runs PhotonVision on a Raspberry Pi to detect AprilTags on the field. The equivalent fusion happens in WPILib:
// SwerveDrivePoseEstimator fuses:
//   1. Wheel odometry (always running, drifts)
//   2. Vision measurements (accurate when tags visible)
SwerveDrivePoseEstimator estimator = new SwerveDrivePoseEstimator(
    kinematics,
    gyro.getRotation2d(),        // IMU heading
    modulePositions,             // wheel encoder positions
    new Pose2d(x, y, rotation)); // initial estimate

// Add a vision measurement whenever PhotonVision produces a pose.
// PhotonPoseEstimator.update() returns an Optional<EstimatedRobotPose>;
// use its capture timestamp, not the current FPGA time, so the estimator
// can compensate for camera and processing latency.
photonPoseEstimator.update(camera.getLatestResult()).ifPresent(visionPose ->
    estimator.addVisionMeasurement(
        visionPose.estimatedPose.toPose2d(),
        visionPose.timestampSeconds));

The Key Concept for Students
Robot localization = knowing where you are on the field.
AprilTags give you a reference point. The gyro gives you orientation. Fusing them gives you accurate, stable position tracking even when:
- Only one tag is visible
- Tags are partially obscured
- Robot is moving fast
This is critical for:
- Autonomous paths that need to return to the same spot
- Field-relative driving with joystick
- Score estimation / match strategy
Common Issues and Troubleshooting
| Problem | Likely Cause | Fix |
|---|---|---|
| Pose always offset in same direction | Camera calibration wrong | Check camera tilt, height, direction |
| Pose jumps when tag visible | Vision trusting single reading | Add filtering, check timing |
| No pose when tags visible | Camera not processing tags | Check PhotonVision pipeline |
| Odometry drifts despite vision | Vision not being added to estimator | Check addVisionMeasurement call |
| Gyro disagrees with vision | Gyro calibration issue | Re-calibrate gyro, check wiring |
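For the "pose jumps" row, one common filtering approach is an outlier gate: reject any vision pose that disagrees with the current estimate by more than a threshold, instead of trusting a single noisy reading. The class and threshold below are a hypothetical sketch, not part of PhotonVision or WPILib.

```java
// Hypothetical outlier gate for vision measurements (illustrative only).
public class VisionGate {
    private final double maxJumpMeters; // reject fixes farther than this from the estimate

    public VisionGate(double maxJumpMeters) {
        this.maxJumpMeters = maxJumpMeters;
    }

    /** Returns true if the vision pose is close enough to the current estimate to accept. */
    public boolean accept(double currentX, double currentY, double visionX, double visionY) {
        double dx = visionX - currentX;
        double dy = visionY - currentY;
        return Math.hypot(dx, dy) <= maxJumpMeters;
    }
}
```

In practice you would call this before `addVisionMeasurement`, and tune the threshold so legitimate corrections pass while single-frame glitches are dropped.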
Connection to Training
For students learning swerve odometry:
- Wheel odometry — always tracking, always drifting
- Vision pose — accurate when AprilTags visible
- Pose estimator — fuses both for best of both worlds
- Field constants — AprilTag positions, field dimensions must be correct
The MegaTag concept (vision + gyro fusion) is the same regardless of whether you use Limelight or PhotonVision. Understanding it helps students debug pose estimation issues.
Related
- photonvision — Team 2890’s vision system
- swerve-modules — MK4i with encoders for odometry
- systemcore — upcoming controller with improved processing