Dear All,

In a Deterministic Framework where we have the following Linear Model:

y(t) = H x(t) + n(t)

where

y(t) is the observed vector of size Nx1 (we have T observations)

H is an NxP matrix (no constraint on P, P could be smaller or larger than N)

x(t) is a Px1 vector

n(t) is random noise.

It is well known that if n(t) is white Gaussian noise, then you cannot do better than Maximum Likelihood (ML): in that case the ML estimator reduces to minimizing the L2 norm of the residual, so the L2 norm is the optimal criterion for estimating the parameters in H and x(t).
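For concreteness, here is a minimal NumPy sketch of this setting (the sizes N = 4, P = 2, T = 500, the noise level, and the assumption that H is known, full rank, and tall with N >= P are all illustrative choices on my part, not part of the question). It simply checks numerically that, with white Gaussian noise, the least-squares (L2) estimate of x(t) behaves as ML theory predicts, with an error covariance close to the Cramér-Rao bound sigma^2 (H^T H)^{-1}:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and noise level (not part of the original question)
N, P, T = 4, 2, 500
sigma = 0.1

H = rng.standard_normal((N, P))                    # known N x P model matrix, full rank, N >= P here
X = rng.standard_normal((P, T))                    # true x(t), stacked as a P x T matrix
Y = H @ X + sigma * rng.standard_normal((N, T))    # observations y(t) = H x(t) + n(t)

# With white Gaussian n(t) and known H, the ML estimate of x(t) is the
# least-squares (L2) solution x_hat(t) = argmin_x ||y(t) - H x||_2^2.
X_hat, *_ = np.linalg.lstsq(H, Y, rcond=None)      # solves all T columns at once

# Empirical error covariance vs. the Cramer-Rao bound sigma^2 (H^T H)^{-1}
err = X_hat - X
emp_cov = err @ err.T / T
crb = sigma**2 * np.linalg.inv(H.T @ H)
print("empirical error covariance (trace):", np.trace(emp_cov))
print("Cramer-Rao bound           (trace):", np.trace(crb))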

My question is: when does ML become sub-optimal?

Thanks.
