Kalman Filtering
State Space Model and Uncertainty
Consider the trajectory of an object with some position and velocity at time zero. How do those quantities evolve over time?
State Space Equations (Process Model)
\mathbf{x}_{k+1} = \mathbf{F}_k \mathbf{x}_k + \mathbf{w}_k
where:
- \mathbf{x}_k \in \mathbb{R}^n is the state vector at time k
- \mathbf{F}_k \in \mathbb{R}^{n \times n} is the state transition matrix
- \mathbf{w}_k is the process noise with zero mean and covariance \mathbf{Q}_k
Trajectory realizations
Sample a random initial condition \mathbf{x}_0 with mean \mathbf{\mu}_0 and covariance \mathbf{P}_0, then propagate the realization, adding process noise \mathbf{w}_k with mean \mathbf{0} and covariance \mathbf{Q}_k:
\mathbf{x}_{k+1} = \mathbf{F}_k \mathbf{x}_k + \mathbf{w}_k
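Sampling realizations of this recursion can be sketched in a few lines; below is a minimal sketch for an assumed 1D constant-velocity model (the matrices F and Q and the initial moments are illustrative choices, not specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed constant-velocity model in 1D: state x = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition matrix F_k
Q = 0.01 * np.eye(2)                # process noise covariance Q_k
mu0 = np.array([0.0, 1.0])          # initial mean mu_0
P0 = np.diag([1.0, 0.1])            # initial covariance P_0

def sample_trajectory(num_steps):
    """Draw x_0 ~ N(mu0, P0) and propagate with process noise w_k ~ N(0, Q)."""
    x = rng.multivariate_normal(mu0, P0)
    traj = [x]
    for _ in range(num_steps):
        w = rng.multivariate_normal(np.zeros(2), Q)
        x = F @ x + w
        traj.append(x)
    return np.array(traj)

traj = sample_trajectory(50)        # one realization, shape (51, 2)
```

Running this repeatedly produces an ensemble of trajectories whose spread visualizes the growing uncertainty.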
Distribution of Trajectories - Uncertainty Propagation
Mean and Covariance Propagation
\mathbf{\mu}_{k+1} = \mathbf{F}_k \mathbf{\mu}_k, \quad \mathbf{P}_{k+1} = \mathbf{F}_k \mathbf{P}_k \mathbf{F}_k^T + \mathbf{Q}_k
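These moment recursions can be iterated directly, with no sampling; a sketch under the same assumed constant-velocity model (F, Q, and the initial moments are illustrative):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition F_k
Q = 0.01 * np.eye(2)                # process noise covariance Q_k
mu = np.array([0.0, 1.0])           # mu_0
P = np.diag([1.0, 0.1])             # P_0

# Propagate the mean and covariance forward 50 steps
for _ in range(50):
    mu = F @ mu
    P = F @ P @ F.T + Q

# With no measurements, the position variance P[0, 0] keeps growing.
```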
Linear MMSE of Observations
Observation Model
\mathbf{y}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k
where:
- \mathbf{y}_k \in \mathbb{R}^m is the observation/measurement vector at time k
- \mathbf{H}_k \in \mathbb{R}^{m \times n} is the observation matrix (maps state to measurements)
- \mathbf{v}_k is the observation noise with zero mean and covariance \mathbf{R}_k \in \mathbb{R}^{m \times m}
Assumptions:
- Observation noise \mathbf{v}_k is independent of process noise \mathbf{w}_k
- Observation noise is white: \mathbb{E}[\mathbf{v}_k \mathbf{v}_j^T] = \mathbf{R}_k \delta_{kj}
Goal: Use observations \mathbf{y}_k to improve our estimate of the state \mathbf{x}_k and reduce uncertainty
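Simulating the observation model amounts to projecting the state through H and adding noise; here a hypothetical position-only sensor (the specific H and R are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1.0, 0.0]])   # observe position only (m = 1, n = 2)
R = np.array([[0.25]])       # measurement noise covariance R_k

def observe(x):
    """Generate y_k = H x_k + v_k with v_k ~ N(0, R)."""
    v = rng.multivariate_normal(np.zeros(1), R)
    return H @ x + v

y = observe(np.array([2.0, 1.0]))   # noisy position measurement of state [2, 1]
```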
Trajectory Based Only on Observations
The Kalman Filter - Two-Step Recursive Algorithm
Predict Step (Time update - use process model):
\hat{\mathbf{x}}_{k|k-1} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1|k-1}
\mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^T + \mathbf{Q}_k
Update Step (Measurement update - incorporate observation):
\mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}_k^T (\mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^T + \mathbf{R}_k)^{-1}
\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k (\mathbf{y}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k|k-1})
\mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_{k|k-1}
where:
- \hat{\mathbf{x}}_{k|k-1} is the predicted state (before seeing \mathbf{y}_k) and \hat{\mathbf{x}}_{k|k} is the filtered state (after seeing \mathbf{y}_k)
- \mathbf{P}_{k|k-1} is the predicted covariance and \mathbf{P}_{k|k} is the filtered covariance
- \mathbf{K}_k is the Kalman gain (optimal weighting between prediction and observation)
- (\mathbf{y}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k|k-1}) is the innovation or measurement residual
Using the Woodbury identity, the Kalman gain can equivalently be written in its post-update form \mathbf{K}_k = \mathbf{P}_{k|k}\,\mathbf{H}_k^T \mathbf{R}_k^{-1}
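The two-step recursion above transcribes directly into code; a minimal NumPy sketch of one predict/update cycle (the matrices in the usage example below are illustrative):

```python
import numpy as np

def kalman_step(x_hat, P, y, F, Q, H, R):
    """One recursion of the Kalman filter: predict, then update."""
    # Predict (time update)
    x_pred = F @ x_hat                       # x_{k|k-1}
    P_pred = F @ P @ F.T + Q                 # P_{k|k-1}
    # Update (measurement update)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain K_k
    innovation = y - H @ x_pred              # measurement residual
    x_filt = x_pred + K @ innovation         # x_{k|k}
    P_filt = (np.eye(len(x_hat)) - K @ H) @ P_pred  # P_{k|k}
    return x_filt, P_filt

# Usage with illustrative matrices: static state, position-only sensor
x, P = kalman_step(np.zeros(2), np.eye(2), np.array([1.0]),
                   np.eye(2), 0.01 * np.eye(2),
                   np.array([[1.0, 0.0]]), np.array([[0.1]]))
```

In production code, S is usually factored (e.g. via a Cholesky solve) rather than explicitly inverted, and the Joseph form of the covariance update is preferred for numerical symmetry.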
Kalman Filter Results: Tracking with Uncertainty Reduction
Uncertainty Evolution: Prediction vs Measurement vs Filtering
Key insight:
- Without measurements (prediction only): uncertainty grows over time
- With measurements (Kalman filter): uncertainty stabilizes or even decreases, including for state components that are not directly observed
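This behavior can be checked numerically. The sketch below compares the prediction-only covariance with the filtered covariance for an assumed constant-velocity model with a position-only sensor (all matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])           # only position is measured
R = np.array([[0.25]])

x_true = np.array([0.0, 1.0])
x_hat = np.zeros(2)
P = np.eye(2)                        # filtered covariance P_{k|k}
P_pred_only = np.eye(2)              # covariance with no measurements at all

for _ in range(100):
    # Simulate the true system and a noisy measurement
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # Prediction only: covariance grows without bound
    P_pred_only = F @ P_pred_only @ F.T + Q
    # Kalman filter: predict ...
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # ... then update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

# P stays bounded -- including P[1, 1] for the unobserved velocity,
# which is corrected through its coupling to position -- while
# P_pred_only keeps growing.
```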