In a multidimensional space, PCA finds the directions that maximize the variance, whereas ICA finds the directions that maximize statistical independence. Both methods are used for so-called blind source/signal separation:
They do not assume any class distribution of the data, i.e., all data are treated as unlabeled.
Fisher's LDA, however, is a standard classifier that assumes Gaussian-distributed classes sharing the same covariance matrix. I do not see a clear relation of LDA to PCA or ICA.
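To make the unsupervised/supervised contrast concrete, here is a minimal NumPy sketch on a toy two-class dataset of my own construction (the means and covariance are assumptions chosen for illustration): the pooled variance is largest along the x-axis, but the class means differ only along y, so PCA and Fisher's LDA pick nearly orthogonal directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes with the same covariance: large variance along x,
# but the class means differ only along y.
cov = np.array([[9.0, 0.0], [0.0, 1.0]])
X0 = rng.multivariate_normal([0.0, -2.0], cov, size=500)
X1 = rng.multivariate_normal([0.0, 2.0], cov, size=500)
X = np.vstack([X0, X1])

# PCA (ignores labels): leading eigenvector of the pooled covariance,
# i.e., the direction of maximal variance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pca_dir = eigvecs[:, np.argmax(eigvals)]

# Fisher's LDA (uses labels): w ∝ S_w^{-1} (m1 - m0),
# with S_w the within-class scatter.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
lda_dir = np.linalg.solve(Sw, m1 - m0)
lda_dir /= np.linalg.norm(lda_dir)

print(pca_dir)  # close to the x-axis (high-variance direction)
print(lda_dir)  # close to the y-axis (class-separating direction)
```

Here PCA tracks variance and is blind to the labels, while LDA uses the labels and finds the discriminative direction.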
I agree with Christian's answer. I would like to add that PCA and ICA are equivalent for normally distributed data: minimizing correlation (PCA) and minimizing statistical dependence (ICA) coincide when all cumulants of order three and higher are zero, as they are for a normal distribution.
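One quick way to see why ICA has nothing extra to exploit in the Gaussian case: common ICA contrast functions rely on higher-order statistics such as excess kurtosis, which vanishes for a normal distribution. A small NumPy sketch (the sample sizes and distributions are my own toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

g = rng.standard_normal(n)          # Gaussian: kurtosis-based contrast is blind
u = rng.uniform(-1.0, 1.0, size=n)  # uniform: excess kurtosis is -1.2

print(excess_kurtosis(g))  # approximately 0
print(excess_kurtosis(u))  # approximately -1.2
```

For the Gaussian samples the contrast is (up to sampling noise) zero, so once second-order structure is removed by PCA, ICA has no remaining signal to optimize.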
Moreover, PCA is often used as a preliminary step when performing ICA.
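To illustrate that preprocessing step: PCA-whitening decorrelates the data and rescales each principal direction to unit variance, after which ICA only has to estimate an orthogonal rotation. A minimal sketch, assuming a toy mixing matrix and uniform sources of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent non-Gaussian sources, linearly mixed.
S = rng.uniform(-1.0, 1.0, size=(10_000, 2))
A = np.array([[1.0, 0.5], [0.3, 2.0]])  # arbitrary mixing matrix (assumption)
X = S @ A.T

# PCA whitening: project onto the principal axes, then rescale
# each axis to unit variance.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = (Xc @ eigvecs) / np.sqrt(eigvals)

# The whitened data are uncorrelated with unit variance; ICA then only
# needs to find a rotation to recover the independent sources.
print(np.cov(Z, rowvar=False))  # approximately the identity matrix
```

Whitening reduces the ICA search space from all invertible unmixing matrices to the orthogonal ones, which is why many ICA implementations whiten first.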
As Christian already pointed out, LDA is a supervised dimensionality-reduction method, while ICA and PCA are unsupervised. Stating a relation between these methods is therefore not trivial.