Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. A minimal sketch of this two-stage alternation is given after this paragraph.
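To make the alternation concrete, here is a minimal NumPy sketch of the two stages: a sparse coding pass over all training signals with the dictionary fixed, followed by a rank-one (SVD-based) update of each atom together with its coefficients. The pursuit routine (a simple orthogonal matching pursuit), the initialization, and all names and iteration counts are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k atoms of D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the currently selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Illustrative K-SVD loop: alternate OMP sparse coding and SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding stage: dictionary fixed, one OMP run per training signal
        X = np.column_stack([omp(D, Y[:, i], sparsity) for i in range(m)])
        # dictionary update stage: one atom at a time
        for j in range(n_atoms):
            users = np.nonzero(X[j, :])[0]   # signals that currently use atom j
            if users.size == 0:
                continue
            # representation error without atom j, restricted to those signals
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                # updated atom (unit norm by construction)
            X[j, users] = s[0] * Vt[0, :]    # jointly updated coefficients
    return D, X
```

The key point the sketch shows is the last two lines of the inner loop: the atom and its nonzero coefficients are replaced together by the best rank-one approximation of the residual, which is what distinguishes K-SVD from updating the dictionary alone.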
We developed a sparse non-negative matrix factorization (NMF) method with l0-sparseness constraints. It is similar in spirit to K-SVD but includes non-negativity constraints. Since images are naturally non-negative, this may be a promising direction for denoising. A rough sketch of such a loop follows.
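As a rough illustration of what an l0-constrained NMF loop can look like (a simplified sketch under my own assumptions, not the algorithm from the paper), the code below limits each coefficient column to a fixed number of non-negative entries via a greedy non-negative pursuit and updates the dictionary with a standard multiplicative NMF rule.

```python
import numpy as np
from scipy.optimize import nnls

def nn_sparse_code(W, y, k):
    """Greedy non-negative sparse coding: at most k active atoms, coefficients >= 0."""
    support, h = [], np.zeros(W.shape[1])
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        corr = W.T @ residual
        j = int(np.argmax(corr))
        if corr[j] <= 0:                     # no atom reduces the residual further
            break
        if j not in support:
            support.append(j)
        coef, _ = nnls(W[:, support], y)     # non-negative least squares on the support
        residual = y - W[:, support] @ coef
    if support:
        h[support] = coef
    return h

def l0_sparse_nmf(V, n_atoms, sparsity, n_iter=20, seed=0, eps=1e-9):
    """Illustrative l0-constrained NMF: non-negative coding + multiplicative dictionary update."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = np.abs(rng.standard_normal((n, n_atoms))) + eps
    W /= np.linalg.norm(W, axis=0)
    for _ in range(n_iter):
        # coding stage: each column of H has at most `sparsity` nonzero, non-negative entries
        H = np.column_stack([nn_sparse_code(W, V[:, i], sparsity) for i in range(m)])
        # dictionary stage: standard multiplicative NMF update for W with H fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= np.linalg.norm(W, axis=0) + eps  # renormalize; H is recomputed next iteration
    return W, H
```

The multiplicative dictionary update is only one possible choice here; the point of the sketch is the coding stage, where the l0 constraint is enforced by allowing at most `sparsity` non-negative coefficients per column.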
A non-negative version of K-SVD also exists, but it is rather ad hoc and works slightly worse than our method. The full paper and Matlab code are provided on my profile.