Statistical Signal Processing - MUSIC
Frequency Estimation with MUSIC
Signal Model
Consider a signal with K sinusoidal components observed over M consecutive time samples: \mathbf{x}[n] = \mathbf{A}\mathbf{s}[n] + \mathbf{w}[n] where:
- \mathbf{x}[n] = [x[n], x[n+1], \ldots, x[n+M-1]]^T \in \mathbb{C}^M (a window of M consecutive samples, ordered to match the steering vector below)
- \mathbf{A} \in \mathbb{C}^{M \times K} is the steering matrix with columns \mathbf{a}(\omega_k)
- \mathbf{w}[n] \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I}) is complex white Gaussian noise
- \mathbf{s}[n] \in \mathbb{C}^K contains the K complex amplitudes
Steering vector for frequency \omega: \mathbf{a}(\omega) = [1, e^{j\omega}, e^{j2\omega}, \ldots, e^{j(M-1)\omega}]^T
Assumptions:
- Source signals are uncorrelated: \mathbb{E}[\mathbf{s}[n]\mathbf{s}^H[n]] = \mathbf{P} (diagonal)
- Noise is independent of signals
- K < M (the window length M exceeds the number of sinusoids, so a nontrivial noise subspace exists)
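A minimal NumPy sketch of this signal model. The window length M, the two frequencies, and the noise variance are illustrative values, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 8, 2                     # window length and number of sinusoids (assumed)
omegas = np.array([0.6, 1.4])   # illustrative true frequencies in rad/sample
sigma2 = 0.1                    # noise variance

def steering(omega, M):
    # a(omega) = [1, e^{j omega}, ..., e^{j(M-1) omega}]^T
    return np.exp(1j * omega * np.arange(M))

A = np.column_stack([steering(w, M) for w in omegas])    # M x K steering matrix
s = rng.normal(size=K) + 1j * rng.normal(size=K)         # complex amplitudes s[n]
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
x = A @ s + noise                                        # one snapshot x[n]
```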
Measurement Signal and Periodogram
Before diving into MUSIC, let’s see how a classical Periodogram analyzes a signal containing two sinusoids.
Covariance Matrix: Two Contributions
The covariance matrix of \mathbf{x}[n] has two components:
\mathbf{R}_{\mathbf{x}} = \mathbb{E}[\mathbf{x}[n]\mathbf{x}^H[n]] = \underbrace{\mathbf{A}\mathbf{P}\mathbf{A}^H}_{\text{Signal}} + \underbrace{\sigma^2\mathbf{I}}_{\text{Noise}}
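A quick numerical check of the two contributions, with assumed frequencies and source powers: the signal part A P A^H has rank K, while adding σ²I makes the full covariance full rank.

```python
import numpy as np

M, K = 8, 2
omegas = np.array([0.6, 1.4])    # illustrative frequencies
sigma2 = 0.1
P_src = np.diag([1.0, 0.5])      # diagonal source powers (uncorrelated sources)

a = lambda w: np.exp(1j * w * np.arange(M))
A = np.column_stack([a(w) for w in omegas])

R_signal = A @ P_src @ A.conj().T       # rank-K signal contribution
R = R_signal + sigma2 * np.eye(M)       # plus full-rank noise contribution
```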
Signal and Noise Subspaces
Perform eigenvalue decomposition of \mathbf{R}_{\mathbf{x}}:
\mathbf{R}_{\mathbf{x}} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^H = \sum_{i=1}^{M} \lambda_i \mathbf{u}_i\mathbf{u}_i^H
The eigenvalues split into two groups:
\underbrace{\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_K}_{\text{correspond to signal subspace}} > \underbrace{\lambda_{K+1} = \ldots = \lambda_M = \sigma^2}_{\text{correspond to noise subspace}}
Noise subspace is orthogonal to signal subspace: \mathbf{u}_n^H \mathbf{a}(\omega_k) = 0 for k=1,\ldots,K and n=K+1,\ldots,M
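The eigenvalue split and the orthogonality property can be verified numerically. Frequencies, noise variance, and unit source powers are assumed for illustration:

```python
import numpy as np

M, K = 8, 2
omegas = np.array([0.6, 1.4])    # illustrative frequencies
sigma2 = 0.1
a = lambda w: np.exp(1j * w * np.arange(M))
A = np.column_stack([a(w) for w in omegas])
R = A @ A.conj().T + sigma2 * np.eye(M)    # unit source powers for simplicity

lam, U = np.linalg.eigh(R)                 # eigh returns ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]             # reorder to descending
Us, Un = U[:, :K], U[:, K:]                # signal / noise subspace bases
```

The K largest eigenvalues exceed σ², the remaining M-K equal σ² exactly, and every column of Un is orthogonal to every steering vector a(ω_k).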
MUSIC Pseudo-Spectrum
The MUSIC pseudo-spectrum searches for frequencies where the steering vector is orthogonal to the noise subspace:
P_{\text{MUSIC}}(\omega) = \frac{1}{\mathbf{a}^H(\omega)\mathbf{U}_n\mathbf{U}_n^H\mathbf{a}(\omega)} = \frac{1}{\|\mathbf{U}_n^H\mathbf{a}(\omega)\|^2}
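A sketch of evaluating the pseudo-spectrum on a frequency grid, using the theoretical covariance with the same assumed frequencies as above. The peaks of P_MUSIC land at the true frequencies because the denominator vanishes there:

```python
import numpy as np

M, K = 8, 2
true_omegas = np.array([0.6, 1.4])   # illustrative frequencies
a = lambda w: np.exp(1j * w * np.arange(M))
A = np.column_stack([a(w) for w in true_omegas])
R = A @ A.conj().T + 0.1 * np.eye(M)

lam, U = np.linalg.eigh(R)
Un = U[:, :M - K]                    # eigenvectors of the M-K smallest eigenvalues

omega_grid = np.linspace(0.05, np.pi - 0.05, 2000)
steer = np.exp(1j * np.outer(np.arange(M), omega_grid))           # M x grid
P_music = 1.0 / np.sum(np.abs(Un.conj().T @ steer) ** 2, axis=0)  # 1 / ||Un^H a||^2
peak = omega_grid[np.argmax(P_music)]
```

Note that P_MUSIC is a pseudo-spectrum: its peak locations are meaningful, but its heights are not power estimates.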
MUSIC from Measured Data
Now let’s apply MUSIC to the actual measured signal from the beginning (same data as used for Periodogram).
We’ll estimate the covariance matrix from the data using sliding windows of M consecutive samples.
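A sketch of that estimation pipeline on synthetic data (the frequencies, phases, noise level, and record length are assumptions, since the measured signal itself is not reproduced here): stack sliding windows of M samples into a data matrix, average the outer products, then run MUSIC on the sample covariance.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 512, 8, 2
true_omegas = np.array([0.6, 1.4])   # illustrative frequencies
n = np.arange(N)
x = sum(np.exp(1j * (w * n + rng.uniform(0, 2 * np.pi))) for w in true_omegas)
x = x + 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sample covariance from sliding windows of M consecutive samples
X = np.array([x[i:i + M] for i in range(N - M + 1)]).T   # M x (N-M+1)
R_hat = X @ X.conj().T / X.shape[1]

lam, U = np.linalg.eigh(R_hat)
Un = U[:, :M - K]                                        # estimated noise subspace

omega_grid = np.linspace(0.05, np.pi - 0.05, 2000)
steer = np.exp(1j * np.outer(np.arange(M), omega_grid))
P_music = 1.0 / np.sum(np.abs(Un.conj().T @ steer) ** 2, axis=0)
peak = omega_grid[np.argmax(P_music)]
```

With a finite data record the noise eigenvalues are only approximately equal, so the denominator no longer vanishes exactly; the peaks become tall but finite.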
Comparison of Periodogram, Theoretical MUSIC, and Estimated MUSIC
Periodogram vs. MUSIC: Conceptual Difference
Two fundamentally different approaches to frequency estimation:
Periodogram: Direct Comparison
P_{\text{Periodogram}}(\omega) = \left|\mathbf{a}^H(\omega) \mathbf{x}[n]\right|^2 \quad \text{or} \quad \mathbf{a}^H(\omega) \mathbf{R}_{\mathbf{x}} \mathbf{a}(\omega)
- Compares candidate steering vector \mathbf{a}(\omega) directly with the signal
- Measures: “How much does the data correlate with this frequency?”
MUSIC: Indirect Comparison via Complement
P_{\text{MUSIC}}(\omega) = \frac{1}{\|\mathbf{U}_n^H\mathbf{a}(\omega)\|^2}
- Compares candidate steering vector with the noise subspace
- Measures: “Is this frequency orthogonal to the noise?”
- Uses the negative space (what signal is NOT) to find what it IS
Key Insight: \mathbb{C}^M = \text{span}(\mathbf{U}_s) \oplus \text{span}(\mathbf{U}_n)
If \mathbf{a}(\omega) \perp \text{span}(\mathbf{U}_n) then \mathbf{a}(\omega) \in \text{span}(\mathbf{U}_s) → \omega is one of the true frequencies!
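This orthogonal decomposition can be checked numerically (same assumed toy parameters as before): the two subspace projectors sum to the identity, and projecting a true steering vector onto span(Us) leaves it unchanged.

```python
import numpy as np

M, K = 8, 2
omegas = np.array([0.6, 1.4])        # illustrative frequencies
a = lambda w: np.exp(1j * w * np.arange(M))
A = np.column_stack([a(w) for w in omegas])
R = A @ A.conj().T + 0.1 * np.eye(M)

lam, U = np.linalg.eigh(R)
Un, Us = U[:, :M - K], U[:, M - K:]  # noise / signal subspace bases

# Projectors onto the two subspaces sum to I: C^M = span(Us) + span(Un)
proj_sum = Us @ Us.conj().T + Un @ Un.conj().T
# Projecting a true steering vector onto span(Us) reproduces it exactly
proj_A = Us @ (Us.conj().T @ A)
```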