E.g. use BERT to find the embedding vector of each input word; it maps each word into an n-dimensional vector. Then reduce the n-dimensional vectors to 2 dimensions using t-SNE or any other dimensionality reduction algorithm. Next, plot the 2-dimensional vectors on a coordinate system and compute the cosine similarities between the embeddings. This shows how similar the vectors representing your words are.
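A minimal sketch of that pipeline with scikit-learn. The random vectors stand in for real BERT embeddings (which you would obtain with, e.g., the `transformers` library); the word list, dimensions, and t-SNE settings here are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity

words = ["cat", "dog", "car", "truck", "apple"]

# Placeholder embeddings: in practice, replace with BERT vectors
# (768-dim for BERT-base). Random vectors keep the sketch self-contained.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(words), 768))

# Cosine similarity computed in the original n-dimensional space
sim = cosine_similarity(emb)  # shape (5, 5), diagonal is 1.0

# Reduce to 2-D for plotting; perplexity must be < number of samples
coords = TSNE(n_components=2, perplexity=2, init="random",
              random_state=0).fit_transform(emb)

for w, (x, y) in zip(words, coords):
    print(f"{w}: ({x:.2f}, {y:.2f})")
```

Note that cosine similarity is best computed on the original high-dimensional vectors; the 2-D t-SNE coordinates are only for visualization, since t-SNE does not preserve distances faithfully.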
From the example you give, it sounds like what you really want is not lexical similarity but the hypernym (the broader category) covering all the other elements in the list. If that is the case, it might be worth taking a look at the SemEval shared task on hypernym discovery: SemEval-2018 Task 9: Hypernym Discovery.