Word2Vec and GloVe are two core word embedding techniques used to learn word vectors from a text corpus. Could you suggest which one is more suitable in a recommendation environment?
Let me describe how Word2Vec works; it should then be easy to see which one is better and why.
In many respects Word2Vec is like a typical neural network model: the documents are split into sentences, and the sentences are tokenized into words. The last stage, however, is where Word2Vec differs from an ordinary neural network.
Here, pairs of words are formed according to the window size, and the optimization method minimizes a loss function over (target word, context word) pairs.
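To make the windowing step concrete, here is a minimal sketch of how (target, context) pairs can be generated from a tokenized sentence. The function name and the example sentence are my own illustration, not part of any particular library:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs within the given window size."""
    pairs = []
    for i, target in enumerate(tokens):
        # Look at neighbors up to `window` positions to the left and right.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

# Toy "sentence" -- in a recommendation setting this could just as well be
# a sequence of item IDs from one user session.
sentence = ["users", "who", "bought", "this", "item"]
print(skipgram_pairs(sentence, window=1))
```

With `window=1` each interior token contributes two pairs (one per neighbor) and each end token contributes one, so the five-token sentence yields eight pairs. These pairs are what the model's loss function is trained on.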
If we train for enough epochs, the weights of the embedding layer give the word vectors (the coordinates of each word).
As described above, Word2Vec is a predictive model. GloVe, on the other hand, is built on word co-occurrence counts, and keeping all of the co-occurrences consumes a lot of memory. To avoid excessive memory consumption, the data structure (the co-occurrence matrix) has to be compressed and optimized, so in the end a significant amount of information is lost.
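To make the memory point concrete, here is a toy sketch (my own illustration, not GloVe's actual implementation) of count-based co-occurrence statistics. The full matrix is |V| x |V| over the vocabulary, which is why real GloVe implementations store it sparsely and prune it:

```python
from collections import Counter

def cooccurrence_counts(sentences, window=2):
    """Count symmetric word co-occurrences within a window, GloVe-style."""
    counts = Counter()
    for tokens in sentences:
        for i, word in enumerate(tokens):
            for j in range(i + 1, min(len(tokens), i + window + 1)):
                counts[(word, tokens[j])] += 1  # store both directions
                counts[(tokens[j], word)] += 1
    return counts

# Tiny toy corpus; with a vocabulary of size V, the dense count
# matrix would need V * V cells, hence the memory pressure.
corpus = [["user", "likes", "item"], ["user", "buys", "item"]]
counts = cooccurrence_counts(corpus, window=2)
print(counts[("user", "item")])  # appears in both sentences
```

Word2Vec never materializes this matrix: it streams (target, context) pairs and updates the embeddings online, which is part of why it scales more gracefully.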
For further study, I have attached a manuscript; I hope this helps.