I have trained word embeddings on a "clean" corpus with fastText, and I want to compare their quality against the pre-trained multilingual BERT embeddings, which, as far as I can tell, were trained on a much noisier corpus (Wikipedia).
Any suggestions or ideas on how to evaluate/compare the two would be appreciated.
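For concreteness, here is a minimal sketch of one common intrinsic evaluation I could apply to both embedding spaces: rank word pairs by cosine similarity and measure the Spearman correlation with human similarity judgments. The tiny vectors and gold scores below are made-up placeholders; in practice a benchmark like WordSim-353 or SimLex-999 would supply the pairs, and the embeddings would come from my fastText model and from BERT.

```python
# Intrinsic evaluation sketch: word-pair similarity vs. human judgments.
# Toy embeddings and gold scores are placeholders for a real benchmark.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(embeddings, pairs):
    """Spearman correlation between model similarities and human scores."""
    model_scores = [cosine(embeddings[a], embeddings[b]) for a, b, _ in pairs]
    human_scores = [gold for _, _, gold in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

# Placeholder data: (word1, word2, human similarity score)
emb = {
    "cat": np.array([1.0, 0.1, 0.0]),
    "dog": np.array([0.9, 0.2, 0.1]),
    "car": np.array([0.0, 1.0, 0.9]),
}
pairs = [("cat", "dog", 9.0), ("cat", "car", 1.5), ("dog", "car", 2.0)]
print(evaluate(emb, pairs))
```

Running the same `evaluate` over both embedding spaces with the same benchmark pairs would give directly comparable numbers, which is the kind of comparison I am hoping for advice on.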