Discussion: Controlled Autonomous Driving for a JetRacer
| # | date | plan for this week | progress |
|---|---|---|---|
| 4 | 31.03.26 | | |
Concept
1) Per-frame camera processing (lower rate)
- Extract lane/track features (lane lines, road edges, path points).
- Estimate curvature κ_cam(s) at a chosen look-ahead distance s (or a small set of look-ahead distances).
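One way to obtain κ_cam is to fit a quadratic to the detected path points in the vehicle frame and evaluate the signed-curvature formula at the look-ahead distance. A minimal sketch (the axis convention and point values are illustrative, not from the original):

```python
import numpy as np

def curvature_at(path_xy, s_lookahead):
    """Estimate curvature at a forward look-ahead distance by fitting a
    quadratic x = f(y) to lane path points (vehicle frame: y forward,
    x lateral). Returns signed curvature kappa [1/m]."""
    x, y = path_xy[:, 0], path_xy[:, 1]
    a, b, c = np.polyfit(y, x, 2)          # x(y) = a*y^2 + b*y + c
    dx = 2 * a * s_lookahead + b           # first derivative x'(y)
    ddx = 2 * a                            # second derivative x''(y)
    return ddx / (1 + dx ** 2) ** 1.5      # kappa = x'' / (1 + x'^2)^(3/2)

# Example: points on a gentle left curve, curvature at 1.5 m ahead
pts = np.array([[0.00, 0.5], [0.02, 1.0], [0.08, 1.5], [0.18, 2.0]])
kappa_cam = curvature_at(pts, 1.5)
```

Evaluating at a small set of look-ahead distances just means calling `curvature_at` with each distance against the same fit.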
2) High-rate IMU loop
- Read yaw rate r, steering angle δ (if available), wheel speeds.
- Use a simple kinematic/dynamic model (e.g. bicycle model) to predict short-term motion and where the vehicle will be in 1–2 m given current yaw rate and speed.
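The short-term prediction can be a plain Euler integration of the kinematic model at constant speed and yaw rate; step size and horizon below are illustrative:

```python
import math

def predict_pose(v, yaw_rate, dt, steps=20):
    """Predict where the vehicle will be after steps*dt seconds, holding
    speed v and yaw rate r constant (kinematic model, Euler integration).
    Returns (x, y, psi) in the current vehicle frame (x forward)."""
    x = y = psi = 0.0
    for _ in range(steps):
        x += v * math.cos(psi) * dt
        y += v * math.sin(psi) * dt
        psi += yaw_rate * dt
    return x, y, psi

# where will we be in 0.5 s at 2 m/s with a 0.4 rad/s yaw rate?
x_pred, y_pred, psi_pred = predict_pose(v=2.0, yaw_rate=0.4, dt=0.025)
```

At 1–2 m look-ahead and JetRacer speeds this horizon is a few hundred milliseconds, so the constant-input assumption is reasonable between camera frames.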
3) State estimation / sensor fusion
- Run an EKF/UKF or Kalman filter with states like lateral error, heading error, yaw rate, possibly curvature.
- Inputs/measurements: IMU yaw rate, steering angle, odometry; camera curvature or look-ahead heading as a lower-rate measurement/observation.
- Let the IMU provide high-frequency propagation and the camera provide periodic corrections (adapt measurement covariances when camera quality is low).
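One possible shape for this loop is a small linear Kalman filter over [lateral error, heading error]: `predict` runs at IMU rate, the camera correction at frame rate. Noise values, the reference-curvature input, and the two-state layout are illustrative; a real EKF/UKF would carry more states:

```python
import numpy as np

class LateralKF:
    """Minimal Kalman filter sketch: state x = [e_lat (m), e_head (rad)].
    IMU/odometry propagates at high rate; camera corrects e_lat at low rate.
    Q and the initial P are placeholder values to tune."""
    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2) * 0.1
        self.Q = np.diag([1e-4, 1e-4])     # process noise (assumed)

    def predict(self, v, yaw_rate, kappa_ref, dt):
        # lateral error grows with heading error; heading error grows with
        # the difference between yaw rate and the path's reference curvature
        self.x[0] += v * self.x[1] * dt
        self.x[1] += (yaw_rate - v * kappa_ref) * dt
        F = np.array([[1.0, v * dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_camera(self, e_lat_meas, r_var):
        # scalar update: camera measures lateral error; inflate r_var when
        # image quality is poor to down-weight the correction
        H = np.array([[1.0, 0.0]])
        S = (self.P @ H.T)[0, 0] + r_var
        K = (self.P @ H.T / S).ravel()
        self.x += K * (e_lat_meas - self.x[0])
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
```

The same `r_var` knob is what "adapt measurement covariances when camera quality is low" refers to.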
4) Compute steering command (feedforward + feedback)
- Feedforward: compute the required steering angle δ_ff from κ_cam and vehicle speed v (use steering geometry/bicycle model).
- Feedback: compute a corrective term δ_fb from the state errors (e.g. lateral error, heading error, yaw-rate error) using a controller (PID on yaw rate or lateral error, LQR, or state feedback).
- Final command: δ = δ_ff + δ_fb, then apply actuator limits (maximum steering angle, rate limits).
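Putting step 4 together as a sketch: wheelbase, gains, and limits below are placeholders, and the kinematic feedforward shown here does not actually use speed (a dynamic model would add a speed-dependent understeer term):

```python
import math

# Illustrative parameters (assumptions, roughly JetRacer scale)
WHEELBASE = 0.15                        # m
MAX_STEER = math.radians(25.0)          # magnitude limit
MAX_STEER_RATE = math.radians(180.0)    # rate limit, rad/s

def steering_command(kappa_cam, e_lat, e_head, delta_prev, dt,
                     k_lat=0.8, k_head=1.5):
    """delta = delta_ff + delta_fb, then actuator limits.
    k_lat/k_head are placeholder state-feedback gains to tune."""
    delta_ff = math.atan(WHEELBASE * kappa_cam)    # kinematic bicycle model
    delta_fb = -(k_lat * e_lat + k_head * e_head)  # state feedback (sketch)
    delta = delta_ff + delta_fb
    # magnitude limit
    delta = max(-MAX_STEER, min(MAX_STEER, delta))
    # rate limit relative to the previous command
    max_step = MAX_STEER_RATE * dt
    delta = max(delta_prev - max_step, min(delta_prev + max_step, delta))
    return delta
```

Note the rate limit is applied against the previously commanded angle, so a sudden curvature jump from the camera is slewed in over several control cycles rather than hitting the servo at once.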
Latency compensation & look-ahead tuning
Compensate for camera processing delay by propagating the estimated state forward by the measured latency before using camera-derived references.
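The forward propagation can reuse the same error dynamics as the estimator's predict step; all values here are illustrative:

```python
def compensate_latency(e_lat, e_head, v, yaw_rate, kappa_ref, latency):
    """Propagate camera-derived errors forward by the measured processing
    delay, so the reference refers to 'now' rather than to capture time."""
    e_head_now = e_head + (yaw_rate - v * kappa_ref) * latency
    e_lat_now = e_lat + v * e_head * latency
    return e_lat_now, e_head_now

# e.g. a 50 ms pipeline delay at 2 m/s
e_lat_now, e_head_now = compensate_latency(
    e_lat=0.1, e_head=0.05, v=2.0, yaw_rate=0.0, kappa_ref=0.0, latency=0.05)
```

The latency itself can be measured by timestamping frames at capture and comparing against the control-loop clock when the result arrives.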
Choose look-ahead distance dependent on speed: higher speed → longer look-ahead; lower speed → shorter.
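A common realization of this rule is a roughly constant preview time, clamped to a sensible range; the constants below are placeholders to tune:

```python
def lookahead_distance(v, s_min=0.5, s_max=3.0, t_preview=0.8):
    """Speed-proportional look-ahead: approximately constant preview time
    t_preview, clamped to [s_min, s_max]. All constants are placeholders."""
    return max(s_min, min(s_max, t_preview * v))
```
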
Dynamically weight camera vs IMU in the estimator based on confidence (visibility, image quality).
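One simple way to implement this weighting is to map a detector confidence score to the camera measurement variance fed into the estimator's update; the bounds are placeholders:

```python
def camera_meas_var(confidence, r_min=1e-3, r_max=1.0):
    """Map a detector confidence in [0, 1] to a measurement variance:
    low confidence -> large variance -> the filter leans on the IMU.
    r_min/r_max are placeholder bounds to tune."""
    c = max(0.0, min(1.0, confidence))
    return r_min + (1.0 - c) * (r_max - r_min)
```

The confidence score itself could come from lane-detection quality metrics (number of inlier points, fit residual, image contrast).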