Although a complex question like yours would deserve a complex answer, I will try to summarize the main idea behind distributional models of meaning/semantics.
In NLP, distributional models emulate part of the mechanisms that generate meaning/knowledge. Co-occurrence statistics summarize the "distribution" of each word across contexts, and dimension reduction compresses those statistics into a semantic space, under the assumption that similar words occur in similar contexts (the distributional hypothesis).
Each word's semantic representation is then a vector in this semantic space, and it is assumed that distances between vectors capture similarities in word meaning reasonably well.
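To make this concrete, here is a minimal sketch of one classic instantiation of the idea (counting co-occurrences and reducing dimensions with truncated SVD, as in Latent Semantic Analysis). The toy corpus, window size, and number of dimensions are all illustrative assumptions, not part of any particular published model.

```python
# Toy distributional model: count co-occurrences within a window,
# reduce dimensions with truncated SVD, compare words by cosine similarity.
import numpy as np

# Hypothetical toy corpus, chosen only for illustration.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

# Build the vocabulary and a symmetric word-by-word co-occurrence matrix.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
window = 2  # assumed context-window size
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

# Dimension reduction: keep only the top-k singular dimensions.
k = 3  # assumed number of semantic dimensions
U, S, _ = np.linalg.svd(counts)
vectors = U[:, :k] * S[:k]  # each row is a word's semantic representation

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that occur in similar contexts end up close in the semantic space.
print(cosine(vectors[idx["cat"]], vectors[idx["dog"]]))
```

With a realistic corpus the same pipeline (counts, weighting, dimension reduction) yields vectors whose geometry tracks human similarity judgments surprisingly well, which is what gives these models their theoretical interest.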
While these models are very useful for their theoretical and applied implications, it should not be forgotten that the good accuracy of these methods (and their dimension-reduction steps) does not mean that the human brain computes language and semantic representations of words through the same arithmetic algorithms. Still, these models do seem able to explain at least part of how the meanings of words are acquired.