
When using the "unbiased" sample autocorrelation sequence estimate, i.e., \begin{equation} \bar{r}(k) = \begin{cases} \frac{1}{N-k}\sum_{n=0}^{N-k-1}x[n+k]x^{*}[n]& k \geq 0 \\ \bar{r}^{*}(-k) & k<0\end{cases} \end{equation} I sometimes do not get $\bar{r}(0) \geq \left\lvert \bar{r}(k) \right\rvert$. Why is this? Is there a bug in my code, or in the Matlab/Python implementation of the autocorrelation function?
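
A minimal numpy sketch of the computation (the data vector is just an arbitrary short example) that reproduces the behavior:

```python
import numpy as np

def unbiased_acs(x):
    """Unbiased sample ACS for lags k = 0..N-1."""
    N = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:N - k])) / (N - k)
                     for k in range(N)])

x = np.array([1.0, 0.0, 1.0])
r = unbiased_acs(x)
print(r)                  # approx [0.667, 0., 1.]
print(r[0] >= np.abs(r))  # [ True  True False] -- violates r(0) >= |r(k)|
```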

$\endgroup$

1 Answer


TL;DR

The theoretical autocorrelation sequence (ACS) can be proved positive semi-definite, and the biased sample estimate of the ACS can be proved positive definite (for nonzero data). The unbiased sample estimate, however, is indefinite: it is not guaranteed to be positive semi-definite. A consequence of a sequence being positive semi-definite is that its peak is at the origin, so an indefinite sequence is not guaranteed to have its peak there.

Assumptions

To prove positive semi-definiteness, the property must be shown to hold for arbitrary inputs; to prove indefiniteness, a single counterexample suffices. The proofs will use the fact that the largest-magnitude element of a Hermitian positive semi-definite matrix lies on its main diagonal (equivalently, the largest element of a positive semi-definite sequence is at the origin).

Theoretical ACS Proof

The proof is straightforward for the theoretical ACS. Let the random process $x[n]$ be the input to a linear shift-invariant system with impulse response $h[n]$, producing the random process $y[n]$. Mathematically, this is \begin{equation} y\left[n\right] = \sum_{k=-\infty}^{\infty}h\left[k\right]x\left[n-k\right] \end{equation} The average output power of this system is non-negative: \begin{equation} E\left\{\left\lvert y\left[ n \right]\right\rvert^{2}\right\} \geq 0 \end{equation} Plugging in for $y\left[ n\right]$ gives \begin{align} E\left\{\left\lvert y\left[ n \right]\right\rvert^{2}\right\} &= E\left\{ y\left[ n \right]y^{*}\left[ n \right]\right\} \\ &= E\left\{ \sum_{k=-\infty}^{\infty}h\left[k\right]x\left[n-k\right] \sum_{l=-\infty}^{\infty}h^{*}\left[l\right]x^{*}\left[n-l\right] \right\} \\ &= \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty}h\left[k\right]h^{*}\left[l\right]E\left\{ x\left[n-k\right]x^{*}\left[n-l\right] \right\} \\ &= \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty}h\left[k\right]h^{*}\left[l\right]r_{xx}(l-k) \geq 0 \end{align} Since this holds for every choice of $h[n]$, the condition for positive semi-definiteness is met.
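
As a numerical sanity check of this result (a sketch, not part of the proof; the first-order-process ACS $r_{xx}(k) = a^{\lvert k \rvert}$ and the filter length are arbitrary choices), one can truncate the sums to a finite-length filter and confirm the quadratic form is never negative:

```python
import numpy as np
from scipy.linalg import toeplitz

a = 0.7                                  # pole of a stable first-order process
K = 8                                    # truncation length for the sums
r = a ** np.abs(np.arange(K))            # theoretical ACS: r(k) = a^|k|
R = toeplitz(r)                          # R[k, l] = r(l - k)

rng = np.random.default_rng(0)
for _ in range(1000):
    h = rng.standard_normal(K)           # arbitrary filter h[n]
    assert h @ R @ h >= 0                # E{|y[n]|^2} >= 0 for every h
print(np.linalg.eigvalsh(R).min())       # smallest eigenvalue, >= 0
```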

Biased Estimate ACS Proof

The biased estimate of the ACS is defined as \begin{equation} \hat{r}_{xx}\left[ k \right] = \begin{cases} \frac{1}{N}\sum_{n=0}^{N-k-1}x\left[n+k\right]x^{*}\left[ n\right] & 0 \leq k < N \\ \hat{r}^{*}_{xx}\left[ -k \right] & -N < k < 0 \end{cases} \end{equation} To prove this estimate is guaranteed positive definite, we reference the correlation matrix estimate corresponding to this autocorrelation sequence (i.e., the matrix populated with biased sample ACS estimates). Define the matrix \begin{equation} \mathbf{X} = \begin{bmatrix} x[0] & 0 & \cdots & 0 \\ x[1] & x[0] & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ x[P-1] & x[P-2] & \cdots & x[0] \\ \vdots & \vdots & \ddots & \vdots \\ x[N-1] & x[N-2] & \cdots & x[N-P] \\ 0 & x[N-1] & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & x[N-1] \end{bmatrix} \end{equation} The correlation matrix estimate corresponding to the biased estimate of the ACS is \begin{equation} \hat{\mathbf{R}}_{xx} = \frac{1}{N}\mathbf{X}^{H}\mathbf{X} \end{equation} which is equivalent to populating the correlation matrix with samples from the biased sample ACS estimate (see the Appendix). We wish to show that the quadratic form is non-negative for an arbitrary vector $\underline{a}$: \begin{align} \underline{a}^{H}\hat{\mathbf{R}}_{xx}\underline{a} &= \frac{1}{N}\underline{a}^{H}\mathbf{X}^{H}\mathbf{X}\underline{a} \\ &= \frac{1}{N}\left\lVert \mathbf{X}\underline{a}\right\rVert^{2} \geq 0 \end{align} with equality only when $\mathbf{X}\underline{a} = \underline{0}$. Since the columns of $\mathbf{X}$ are shifted copies of the data and are therefore linearly independent whenever $x[n]$ is not identically zero, the quadratic form is strictly positive for $\underline{a} \neq \underline{0}$. Therefore, the biased sample ACS is positive definite.
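
A quick numerical check of this argument (a sketch; the data, $N$, and $P$ are arbitrary choices): build $\mathbf{X}$, form $\hat{\mathbf{R}}_{xx} = \frac{1}{N}\mathbf{X}^{H}\mathbf{X}$, and confirm its eigenvalues are non-negative.

```python
import numpy as np

def conv_matrix(x, P):
    """(N + P - 1)-by-P matrix with X[n, j] = x[n - j], zero-padded."""
    N = len(x)
    X = np.zeros((N + P - 1, P), dtype=complex)
    for j in range(P):
        X[j:j + N, j] = x
    return X

rng = np.random.default_rng(1)
N, P = 16, 5
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = conv_matrix(x, P)
R = X.conj().T @ X / N                    # biased correlation-matrix estimate
print(np.linalg.eigvalsh(R).min() >= 0)   # True: the estimate is PSD
```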

Unbiased Estimate ACS Proof

The unbiased estimate of the ACS is defined as \begin{equation} \bar{r}_{xx}\left[ k \right] = \begin{cases} \frac{1}{N-k}\sum_{n=0}^{N-k-1}x\left[n+k\right]x^{*}\left[ n\right] & 0 \leq k < N \\ \bar{r}^{*}_{xx}\left[ -k \right] & -N < k < 0 \end{cases} \end{equation} One may wonder whether a similar proof exists for the unbiased estimate. It does not, because the computation cannot be broken up into a simple matrix multiplication. Using the definitions of the biased and unbiased ACS estimates, it is clear that \begin{equation} \bar{r}_{xx}\left[ k\right] = \frac{N}{N-\left\lvert k \right\rvert}\hat{r}_{xx}\left[ k \right] \end{equation} This is equivalent to element-wise (Hadamard) multiplying $\hat{\mathbf{R}}_{xx}$ by an inverted triangular window $\mathbf{W}$. For a 3-by-3 correlation matrix ($N = 3$), this is \begin{align} \mathbf{W} &= \begin{bmatrix}1 & 1.5 & 3 \\ 1.5 & 1 & 1.5 \\ 3 & 1.5 & 1 \end{bmatrix} \\ \bar{\mathbf{R}}_{xx} &= \mathbf{W}\odot\hat{\mathbf{R}}_{xx} \end{align} where $\odot$ denotes the element-wise product. Because the largest element of $\mathbf{W}$ is off the main diagonal, $\mathbf{W}$ is not positive semi-definite (by the fact stated in the Assumptions), so there is no matrix $\mathbf{Y}$ such that \begin{equation} \mathbf{W} = \mathbf{Y}^{H}\mathbf{Y} \end{equation} and the weighting cannot be absorbed into a factorization like the one used above. Moreover, it is easy to construct positive semi-definite matrices for which this weighting destroys the positive semi-definite structure.
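
A quick check confirms that the 3-by-3 $\mathbf{W}$ above is indeed not positive semi-definite:

```python
import numpy as np

W = np.array([[1.0, 1.5, 3.0],
              [1.5, 1.0, 1.5],
              [3.0, 1.5, 1.0]])
print(np.linalg.eigvalsh(W))  # approx [-2.0, -0.098, 5.098]: indefinite
```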

On the other hand, many sequences show that this estimate is not positive semi-definite. For example, take the sequence $\begin{bmatrix} 2 & 1 & 2 \end{bmatrix}$. We find \begin{align} \bar{r}_{xx}\left[ 0 \right] &= \frac{1}{3-0}\sum_{n=0}^{2}x\left[ n\right]x^{*}\left[ n\right] = \frac{1}{3}\left(2\cdot2 + 1\cdot1 + 2\cdot2\right) = 3 \\ \bar{r}_{xx}\left[ 2 \right] &= \frac{1}{3-2}\sum_{n=0}^{0}x\left[ n+2\right]x^{*}\left[ n\right] = 2\cdot2 = 4 \end{align} Since the largest element of a positive semi-definite sequence must be at the origin, this unbiased estimate cannot be positive semi-definite. However, a sequence having its peak at the origin (e.g., $\begin{bmatrix}1 & 3 & 1\end{bmatrix}$) does not by itself settle definiteness; a sequence such as $\begin{bmatrix}1 & 0 & 0\end{bmatrix}$, whose unbiased ACS is $\begin{bmatrix}\frac{1}{3} & 0 & 0\end{bmatrix}$ and whose correlation matrix is diagonal with positive entries, does yield a positive semi-definite estimate. Because the estimate is positive semi-definite for some data and not for others, the unbiased estimate of the ACS is indefinite. This poses practical problems. One major issue is that it can produce negative values in power spectral density estimates. Thus, when the ACS is used not just for visual inspection but in further computations, the biased estimate is recommended, as it is much more numerically robust and stable.
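
The counterexample is easy to verify numerically. A sketch using the definitions above (`toeplitz` builds the corresponding correlation matrices):

```python
import numpy as np
from scipy.linalg import toeplitz

def sample_acs(x, biased=True):
    """Sample ACS for lags k = 0..N-1, biased or unbiased normalization."""
    N = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:N - k])) / (N if biased else N - k)
                     for k in range(N)])

x = np.array([2.0, 1.0, 2.0])
r_b = sample_acs(x, biased=True)    # [3. 1.333 1.333] -- peak at the origin
r_u = sample_acs(x, biased=False)   # [3. 2. 4.]       -- peak NOT at the origin

print(np.linalg.eigvalsh(toeplitz(r_b)).min())  # > 0: positive definite
print(np.linalg.eigvalsh(toeplitz(r_u)).min())  # < 0: indefinite
```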

Appendix

Proof that the biased correlation matrix estimate is populated with biased sample ACS entries

Again, the biased sample ACS estimate is defined as \begin{equation} \hat{r}_{xx}\left[ k \right] = \begin{cases} \frac{1}{N}\sum_{n=0}^{N-k-1}x\left[n+k\right]x^{*}\left[ n\right] & 0 \leq k < N \\ \hat{r}^{*}_{xx}\left[ -k \right] & -N < k < 0 \end{cases} \end{equation} With $\mathbf{X}$ defined as above (so that its entries are $\mathbf{X}_{nj} = x[n-j]$ with zero padding), the entries of the correlation matrix estimate are \begin{equation} \hat{\mathbf{R}}_{ij} = \frac{1}{N}\left(\mathbf{X}^{H}\mathbf{X}\right)_{ij} = \frac{1}{N}\sum_{n}x^{*}\left[n-i\right]x\left[n-j\right] \end{equation} Substituting $k=i-j$ and shifting the summation index gives, for $k \geq 0$, \begin{equation} \hat{\mathbf{R}}_{ij} = \frac{1}{N}\sum_{n=0}^{N-k-1}x\left[n+k\right]x^{*}\left[n\right] \end{equation} Notice, when $i-j \geq 0$ we get $\hat{\mathbf{R}}_{ij} = \hat{r}(i-j)$, and when $i-j<0$, Hermitian symmetry gives $\hat{\mathbf{R}}_{ij} = \hat{r}^{*}(j-i) = \hat{r}(i-j)$.
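
This correspondence can also be checked numerically (a sketch reusing the $\mathbf{X}$ construction from the biased-estimate proof; data and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 12, 4
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Convolution matrix: X[n, j] = x[n - j], zero-padded.
X = np.zeros((N + P - 1, P), dtype=complex)
for j in range(P):
    X[j:j + N, j] = x
R = X.conj().T @ X / N

# Biased sample ACS for lags 0..P-1.
r = np.array([np.sum(x[k:] * np.conj(x[:N - k])) / N for k in range(P)])

for i in range(P):
    for j in range(P):
        k = i - j
        assert np.isclose(R[i, j], r[k] if k >= 0 else np.conj(r[-k]))
print("R[i, j] == r(i - j) for all i, j")  # matrix entries match the biased ACS
```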

