I’m working on a paper on Likert-type scales, as well as a statistical measure/test that more or less emerged by accident whilst working on the paper. Before going further, I was hoping for some preliminary feedback (and what better place for such a question than RG?). Specifically, the following points are essentially universally accepted by specialists in closely related fields:

1) There exists no 1-to-1 mapping between a word in a source language and a word in a target language. More simply, translation always involves information loss.

2) Even if we set aside the substantial evidence that lexemes aren’t the basic unit of language (and choose not to adopt a construction grammar), we are still left with polysemy.

Linguists on opposite sides of the fence, such as Jackendoff and Langacker, still agree that “words” are encyclopedic: there isn’t any mapping from a word to some “unit” of knowledge, information, brain activity, etc. That’s why, if one looks in a dictionary, one finds words defined by other words.

3) Even if we accept the modern version of grandmother neurons (“concept cells” that have been found to respond selectively to, e.g., specific people, in ways that have led some researchers to claim a 1-to-1 mapping between such cells and concepts), nobody believes (and it is an empirically demonstrated falsehood) that there exists any mapping between the conceptual representation via neural activity in one brain and that in another.

4) Finally, language is intricately involved in shaping thought and knowledge, in particular through the relationship between constructions (or lexemes, phrasal nouns, collocations, etc.) and concepts. However, there is no 1-to-1 mapping between a particular instantiation of lexemes in a particular construction and the concept it evokes that is stable enough for an individual to separate the scale (which is usually purely conceptual, although sometimes also empirical, as with e.g. frequency) from the individual responses (whether only the endpoints are labeled or all possible responses are) and keep these conceptual domains distinct. In other words, the items necessarily force a novel conceptualization on each respondent, making it impossible to treat a single participant’s response to a single item as even remotely precise.

What, then, is the justification for treating all participant responses as infinitely precise and as corresponding to exactly the same values, as if all participants’ responses were observations of a single underlying value?
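To make the practice I’m questioning concrete, here is a minimal sketch in Python (the data, the label-to-integer coding, and the ordinal alternative are my own illustrative assumptions, not taken from any particular study). It shows how every respondent’s “Agree” is collapsed to the same integer and then analyzed as an interval-scale measurement, alongside a rank-based test that at least drops the equal-spacing assumption:

```python
import numpy as np
from scipy import stats

# Hypothetical responses from two groups to a single 5-point Likert item.
# Conventional coding assigns every respondent's "Agree" the same integer (4),
# i.e., it treats all such responses as identical, infinitely precise values.
coding = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

group_a = ["Agree", "Strongly agree", "Neutral", "Agree", "Agree"]
group_b = ["Disagree", "Neutral", "Agree", "Disagree", "Neutral"]

a = np.array([coding[r] for r in group_a])
b = np.array([coding[r] for r in group_b])

# Conventional treatment: interval-scale assumptions (means, t-test),
# which presuppose that 4 means exactly the same thing for everyone
# and that the distance from 3 to 4 equals the distance from 4 to 5.
print("Mean A:", a.mean(), "Mean B:", b.mean())
print("t-test:", stats.ttest_ind(a, b))

# An ordinal alternative that uses only rank order, not distances.
# It still assumes the labels are ordered identically for all respondents.
print("Mann-Whitney U:", stats.mannwhitneyu(a, b, alternative="two-sided"))
```

Even the rank-based test assumes a shared, stable ordering of the labels across respondents, which is exactly the assumption points 1–4 call into question.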

Thanks for any and all input!
