
Problem definition

In speech processing and elsewhere, a frequently occurring task is to predict an unknown vector y from an available observation vector x. Specifically, we want an estimate \( \hat y = f(x) \) such that \( \hat y \approx y. \) In particular, we will focus on linear estimates of the form \( \hat y=f(x):=A^T x, \) where A is a matrix of parameters.
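For concreteness, the following minimal NumPy sketch shows the shape conventions used below; the dimensions are arbitrary illustrative choices, not part of the definition.

```python
import numpy as np

# Minimal sketch of the linear estimate y_hat = A^T x.
# Dimensions are illustrative: x has n = 3 entries, y has m = 2 entries.
n, m = 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n, m))   # parameter matrix, one column per output
x = rng.standard_normal(n)        # observation vector
y_hat = A.T @ x                   # linear estimate of y, shape (m,)
```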


The minimum mean square error (MMSE) estimate

Suppose we want to minimize the squared error of our estimate on average. The estimation error is \( e=y-\hat y \) and the squared error is the squared Euclidean norm of the error, that is, \( \left\|e\right\|^2 = e^T e \). Its mean can be written as the expectation \( E\left[\left\|e\right\|^2\right] = E\left[\left\|y-\hat y\right\|^2\right] = E\left[\left\|y-A^T x\right\|^2\right]. \) Formally, the minimum mean square error problem can then be written as

\[ \min_A\, E\left[\left\|y-A^T x\right\|^2\right]. \]

In general, this cannot be implemented directly, because of the abstract expectation operator in the middle.

(Advanced derivation begins) To obtain a computational model, first note that the error expectation can be approximated by the mean over a sample of error vectors \( e_k \) as

\[ E\left[\left\|e\right\|^2\right] \approx \frac1N \sum_{k=1}^N \left\|e_k\right\|^2 = \frac1N {\mathrm{tr}}(E^T E), \]

where \( E=\left[e_1,\,e_2,\dotsc,e_N\right] \) and \( \mathrm{tr}() \) is the matrix trace. To minimize the error energy expectation, we can then set its derivative with respect to A to zero:

\[ 0 = \frac{\partial}{\partial A} \frac1N {\mathrm{tr}}(E^T E) = \frac1N\frac{\partial}{\partial A} {\mathrm{tr}}\left((Y-A^TX)^T (Y-A^TX)\right) = -\frac2N X\left(Y-A^T X\right)^T, \]

where the observation matrix is \( X=\left[x_1,\,x_2,\dotsc,x_N\right] \) and the desired output matrix is \( Y=\left[y_1,\,y_2,\dotsc,y_N\right] \) . (End of advanced derivation)
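As a quick sanity check of the sample approximation above, the identity between the mean of \( \left\|e_k\right\|^2 \) and \( \frac1N \mathrm{tr}(E^T E) \) can be verified numerically (a sketch with random toy data):

```python
import numpy as np

# Verify numerically that (1/N) sum_k ||e_k||^2 = (1/N) tr(E^T E),
# where the error vectors e_k are stacked as columns of E (random toy data).
rng = np.random.default_rng(0)
m, N = 2, 100
E = rng.standard_normal((m, N))               # E = [e_1, ..., e_N]
mean_sq_norm = np.mean(np.sum(E**2, axis=0))  # (1/N) sum_k ||e_k||^2
trace_form = np.trace(E.T @ E) / N            # (1/N) tr(E^T E)
assert np.isclose(mean_sq_norm, trace_form)
```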

It follows that the optimal weight matrix A can be solved as

\[ \boxed{A = \left(XX^T\right)^{-1}XY^T = \left(X^\dagger\right)^T Y^T}, \]

where \( X^\dagger = X^T\left(XX^T\right)^{-1} \) denotes the Moore-Penrose pseudo-inverse of \( X \) (assuming \( X \) has full row rank); equivalently, \( A^T = Y X^\dagger \).
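As a sketch of how this solution could be computed in practice (the data here are random toy values and the variable names are illustrative assumptions), one can solve the normal equations directly or use NumPy's pseudo-inverse:

```python
import numpy as np

# Sketch of the MMSE solution A = (X X^T)^{-1} X Y^T on random toy data.
# Columns of X and Y hold the paired observations x_k and targets y_k.
rng = np.random.default_rng(0)
n, m, N = 3, 2, 500
A_true = rng.standard_normal((n, m))
X = rng.standard_normal((n, N))                       # X = [x_1, ..., x_N]
Y = A_true.T @ X + 0.1 * rng.standard_normal((m, N))  # noisy targets

# Solve (X X^T) A = X Y^T; solving the system avoids an explicit inverse.
A = np.linalg.solve(X @ X.T, X @ Y.T)

# Equivalent form via the Moore-Penrose pseudo-inverse: A^T = Y X^dagger.
A_pinv = (Y @ np.linalg.pinv(X)).T
assert np.allclose(A, A_pinv)
```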


Estimates with a mean parameter

Suppose that instead of an estimate \( \hat y=A^T x \), we want to include a mean vector in the estimate as \( \hat y=A^T x + \mu \). While it is possible to rederive all of the above equations for this modified model, it is easier to rewrite the model in the same form as above:

\[ \hat y=A^T x + \mu = \begin{bmatrix} \mu^T \\ A \end{bmatrix}^T \begin{bmatrix} 1 \\ x \end{bmatrix} := A'^T x'. \]

That is, we extend x with a single 1 as its first entry (and similarly extend the observation matrix X with a row of constant 1s), and extend A to include the mean vector. With these modifications, the Moore-Penrose pseudo-inverse solution above can again be used to solve the modified model, as sketched below.
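A sketch of the same idea in NumPy (random toy data, illustrative names): append a row of ones to the observation matrix and solve as before; the first row of the extended solution is the mean vector and the remaining rows form A.

```python
import numpy as np

# Sketch of the bias-augmented model y_hat = A^T x + mu on random toy data.
rng = np.random.default_rng(1)
n, m, N = 3, 2, 500
A_true = rng.standard_normal((n, m))
mu_true = rng.standard_normal(m)
X = rng.standard_normal((n, N))
Y = A_true.T @ X + mu_true[:, None] + 0.1 * rng.standard_normal((m, N))

X_ext = np.vstack([np.ones((1, N)), X])                # x' = [1; x] columnwise
A_ext = np.linalg.solve(X_ext @ X_ext.T, X_ext @ Y.T)  # extended solution A'
mu_hat = A_ext[0]                                      # first row: mean vector
A_hat = A_ext[1:]                                      # remaining rows: A
```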


Estimates with linear equality constraints

(Advanced derivation begins)

In practical situations we often also have linear constraints, such as \( C^T A = B \), which is equivalent to \( C^T A - B = 0. \) The modified optimization problem is then

\[ \min_A\, E\left[\left\|y-A^T x\right\|^2\right]\quad\text{such that}\quad C^T A - B = 0. \]

Such constraints can be incorporated into the objective function using the method of Lagrange multipliers, such that the modified objective function is

\[ \eta(A,G) = E\, \left[\left\|y - A^T x\right\|^2\right] - 2 \sum_k\left[ G_k^T \left(C^T A_k - B_k\right)\right]. \]

where \( A_k \), \( B_k \), and \( G_k \) denote the \( k \)th columns of \( A \), \( B \), and the Lagrange multiplier matrix \( G \), respectively. A heuristic explanation of this objective function is based on the fact that G is a free parameter. Since its value can be anything, \( C^T A - B \) must be zero at the optimum, because otherwise the value of the objective function could be made arbitrary. That is, when optimizing with respect to A, we find the minimum of the mean square error while simultaneously satisfying the constraint.

The objective function can further be rewritten as

\[ \eta(A,G) = E\, \left[(y - A^T x)^T(y-A^Tx)\right] - 2 \sum_k\left[ G_k^T \left(C^T A_k - B_k\right)\right]. \]



TBC

(Advanced derivation ends)
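Although the derivation above is not completed here, the constrained sample problem can already be illustrated numerically: setting the derivative of the Lagrangian with respect to A to zero and combining it with the constraint gives a single linear (KKT) system. The following sketch uses random toy data, and all names and sizes are illustrative assumptions.

```python
import numpy as np

# Sketch: minimize ||Y - A^T X||_F^2 subject to C^T A = B by solving the
# stationarity condition (X X^T A - C G = X Y^T) together with the
# constraint (C^T A = B) as one linear system. Random, illustrative data.
rng = np.random.default_rng(2)
n, m, N, p = 4, 2, 500, 1
X = rng.standard_normal((n, N))
Y = rng.standard_normal((m, N))
C = rng.standard_normal((n, p))
B = rng.standard_normal((p, m))

KKT = np.block([[X @ X.T, -C],
                [C.T, np.zeros((p, p))]])
rhs = np.vstack([X @ Y.T, B])
sol = np.linalg.solve(KKT, rhs)
A, G = sol[:n], sol[n:]            # estimate and Lagrange multipliers
assert np.allclose(C.T @ A, B)     # the constraint is satisfied
```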

