
# Sub-space models

In many cases, we can assume that signals are low-dimensional in the sense that a high-dimensional observation $$y\in{\mathbb R}^{N\times 1}$$ can be completely explained by a low-dimensional representation $$x\in{\mathbb R}^{M\times 1}$$ such that, with a matrix $$A\in{\mathbb R}^{N\times M}$$, we have $$y = Ax$$ with N>M. Such a signal thus spans only an M-dimensional sub-space of the whole N-dimensional space.
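As a small sketch of this idea (all sizes and values below are illustrative, not taken from the text), we can generate observations $$y = Ax$$ and verify that, however many we stack, they span only an M-dimensional sub-space:

```python
import numpy as np

# Illustrative dimensions: N-dimensional observations, M-dimensional model.
rng = np.random.default_rng(0)
N, M = 8, 2                         # N > M

A = rng.standard_normal((N, M))     # sub-space basis, shape N x M
x = rng.standard_normal((M, 1))     # low-dimensional representation
y = A @ x                           # observation y = Ax, shape N x 1

# Stack 100 such observations into the columns of Y: although each column
# has N components, the whole collection has rank at most M.
Y = A @ rng.standard_normal((M, 100))
print(np.linalg.matrix_rank(Y))     # → 2, the sub-space dimension M
```

The rank check makes the "low-dimensional" claim concrete: every column of `Y` is a linear combination of the M columns of `A`.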

## Application with known sub-space

This representation comes in handy, for example, if we have only a noisy observation of y. The desired signal lies in a sub-space, so all the other dimensions contain only noise, and we can remove them. We thus only need a mapping from the whole space to the sub-space. It turns out that such a mapping is a projection onto the sub-space spanned by the matrix A. In fact, the minimum mean square error (see [Linear regression](/display/ITSP/Linear+regression)) solution is given exactly by the Moore-Penrose pseudo-inverse. The downside of this approach, however, is that the matrix A needs to be known in advance so that the pseudo-inverse can be formed.
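A minimal sketch of this denoising step, assuming a known basis A (the dimensions and noise level are illustrative): estimate x with the Moore-Penrose pseudo-inverse, then map back to the full space, which projects the noisy observation onto the sub-space.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 2
A = rng.standard_normal((N, M))            # known sub-space basis

x = rng.standard_normal((M, 1))
y_clean = A @ x                            # desired signal, lies in the sub-space
y_noisy = y_clean + 0.1 * rng.standard_normal((N, 1))

# Minimum mean square estimate of x via the pseudo-inverse A^+, then map
# back: y_hat = A A^+ y_noisy is the projection of y_noisy onto span(A),
# so all noise components orthogonal to the sub-space are removed.
x_hat = np.linalg.pinv(A) @ y_noisy
y_hat = A @ x_hat

err_before = np.linalg.norm(y_noisy - y_clean)
err_after = np.linalg.norm(y_hat - y_clean)
print(err_after <= err_before)             # → True; projection cannot add error
```

The projection can only shrink the error because the clean signal is unchanged by it, while the noise loses its component orthogonal to the sub-space.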

## Estimation of unknown sub-space

In practical cases it is rather unusual that we would have access to the= matrix A, but instead, it must be estimated from availa= ble data. A typical approach is based on a singular val= ue decomposition (SVD) or the eigenvalue decompos= ition. In short, we first estimate the covariance matrix of the signal = and then decompose it into uncorrelated components with the singular value = decomposition. Often, a small set of singular values make up most of the en= ergy of the whole signal. Thus if we discard the smallest singular values, = we do not loose much of the energy, but have a signal of a much lower dimen= sionality. The singular value decomposition thus takes the role of the sub-= space mapping matrix A, and we can apply the model as describ= ed above.
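The estimation procedure can be sketched as follows (a toy setup with made-up sizes and noise level, not from the text): estimate the covariance of the data, decompose it with the SVD, and keep the components carrying most of the energy.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, T = 8, 2, 500                        # illustrative dimensions
A_true = rng.standard_normal((N, M))       # unknown in practice

# T noisy observations, each (approximately) in an M-dimensional sub-space.
Y = A_true @ rng.standard_normal((M, T)) + 0.05 * rng.standard_normal((N, T))

# Estimate the covariance matrix and decompose it with the SVD; for a
# symmetric covariance this coincides with the eigenvalue decomposition.
C = (Y @ Y.T) / T
U, s, _ = np.linalg.svd(C)

# A small set of singular values carries most of the energy: keep the M
# largest, whose singular vectors form the estimated sub-space basis.
energy = np.cumsum(s) / np.sum(s)
A_est = U[:, :M]
print(energy[M - 1] > 0.9)                 # bulk of the energy in M components
```

The columns of `A_est` then play the role of the matrix A above, and the projection-based denoising can be applied with the estimated basis.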

## Discussion

Sub-space models are theoretically appealing, since their analysis is straightforward. For speech signals, the difficulty lies in finding a representation which is actually low-rank. In other words, it is not immediately clear in which domain speech signals can be efficiently modelled by low-rank models.
