If 'imaginability' is simply 'unconventionality', you could just take any corpus linguistics methodology and turn it on its head. Outlier and rare-event analyses can help. But 'imaginability' also includes an element of 'creativity', so you could look up the Artificial Creativity literature. If you are familiar with Conceptual Integration Theory (CIT, Fauconnier & Turner), I would suggest counting blends and blend-to-blendoid ratios.
Thanks for the reply. I’m taking my definition of imaginability from Horst Ruthrof: “If you are able to imagine what I am talking about and the way I am saying it, then there is meaning; if not, there is not”.
E.g.: “The cat walked across the carpet and through the front door” is more imaginable than...
“The animal went through the room.”
The first contains visual elements that relate more closely to things we can imagine (cat, walked, carpet, door), while the second uses more abstract terms (animal, went, room) that are harder for us to imagine.
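My rough starting point has been an abstract-to-concrete ratio. Below is a minimal sketch of how I imagine automating it, assuming a concreteness lexicon (something like the Brysbaert et al. concreteness norms) is available as a simple word-to-rating file; the file name, column layout, 1–5 scale and the 3.0 cut-off are just placeholders on my part, not an established instrument:

```python
# Sketch: score a text by the ratio of 'concrete' to 'abstract' content words.
# Assumes concreteness.tsv holds "word<TAB>rating" pairs on a 1-5 scale
# (e.g. derived from the Brysbaert et al. concreteness norms); the path,
# format, and the 3.0 threshold are illustrative assumptions.
import re

def load_concreteness(path="concreteness.tsv"):
    ratings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, rating = line.rstrip("\n").split("\t")
            ratings[word.lower()] = float(rating)
    return ratings

def concrete_abstract_ratio(text, ratings, threshold=3.0):
    words = re.findall(r"[a-z]+", text.lower())
    rated = [ratings[w] for w in words if w in ratings]
    if not rated:
        return None  # no rated words, ratio undefined
    concrete = sum(1 for r in rated if r >= threshold)
    abstract = len(rated) - concrete
    return concrete / max(abstract, 1)

# Toy lexicon with invented ratings, standing in for the real norms
toy = {"cat": 5.0, "carpet": 4.9, "door": 4.8, "animal": 4.0,
       "room": 4.6, "went": 2.5, "walked": 4.2, "through": 2.2}
print(concrete_abstract_ratio("The cat walked across the carpet", toy))
print(concrete_abstract_ratio("The animal went through the room", toy))
```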
That's a way different direction from what I was thinking about. :)
Alright, for your Ruthrof 'imaginability' you could try this: Devise a rudimentary iconographic conlang. You could use icons from the Noun Project (https://thenounproject.com) for that. Get raters to attempt to re-write your texts in the conlang. Get them to assess how difficult it was to do so (e.g. on a Likert scale). Account for inter-rater variability. Normalize for text length. If it was easy to re-write, the text was 'more imaginable'. If it was hard to re-write, the text was 'less imaginable'.
You could also get raters to re-write from the conlang back into text, then compare the original text to the conlang-to-text version. If the distance is small, the original text was 'more imaginable'. If the distance is large, the original text was 'less imaginable'.
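If it helps, here is a minimal sketch of how the two scorings could be tallied once the ratings and back-translations are in. The 1–5 Likert direction (1 = easy to re-write, 5 = hard), the per-token normalization and the use of Python's difflib as a crude distance measure are all my assumptions, not an established procedure:

```python
# Sketch: combine rater difficulty scores and round-trip similarity into
# per-text 'imaginability' indicators. The Likert direction, the length
# normalization and difflib as the distance measure are assumptions.
from difflib import SequenceMatcher
from statistics import mean, stdev

def rating_score(likert_scores, n_tokens):
    """Lower mean difficulty per token => more imaginable.
    Rater spread is reported as a crude inter-rater variability check."""
    spread = stdev(likert_scores) if len(likert_scores) > 1 else 0.0
    return {"difficulty_per_token": mean(likert_scores) / n_tokens,
            "rater_spread": spread}

def round_trip_score(original, back_translation):
    """Higher similarity between original and conlang-to-text rewrite
    => smaller distance => more imaginable."""
    return SequenceMatcher(None, original.lower(), back_translation.lower()).ratio()

# Example with invented data
text = "The cat walked across the carpet and through the front door."
back = "The cat walked over the carpet and out the front door."
print(rating_score([2, 1, 2, 3], n_tokens=len(text.split())))
print(round_trip_score(text, back))
```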
Also, have a look at Dr Neil Cohn's work on visual cognition (http://www.visuallanguagelab.com).
Thanks! I see what you mean, and I think it would make a good foundation for research. Have you done something similar yourself? My concern is that this would require a lot of participants to analyse the children’s writing. I was hoping to do this on my own, as I would likely have hundreds of children’s samples to work with and can’t rely on so many people analysing all of this work. So rather than a research design, I’m after an instrument that I can use as part of my design.
No, not really, not anything of this scale. I did try to come up with an iconographic notation to capture the conceptual structure of blends in my MA thesis. It was a descriptive rather than an evaluative tool, though.
Do items below the basic level of conceptualization get more or less 'imaginable'? 'Cat' is more 'imaginable' than 'animal'. What about 'Abyssinian'? Technically, there's more detail to 'imagine' with 'Abyssinian', but only if one is familiar with that specific cat breed, which a lot of people are not. My assumption is that 'imaginability' maxes out at basic-level concepts. You could scan your texts for hypernym/... /base/... /hyponym tuples, then check the distance to the base level in an ontology like WordNet. It works very much along the same intuitions as your original 'abstract to concrete ratio', but it seems a little more refined.
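A quick sketch of that scan with NLTK's WordNet interface. Anchoring the 'basic level' on the depth of your own example 'cat' is my assumption and would need calibrating, as is using only the first (most frequent) sense of each noun:

```python
# Sketch: approximate distance-to-basic-level for nouns via WordNet depth.
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

# Anchor: treat the depth of 'cat' (your example) as the basic level --
# an illustrative assumption, not an established constant.
BASIC_DEPTH = wn.synsets("cat", pos=wn.NOUN)[0].min_depth()

def distance_to_basic(noun):
    """0 = at the assumed basic level; larger = further up the hierarchy
    (hypernyms like 'animal') or further down (hyponyms like 'Abyssinian')."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return None  # not in WordNet, no estimate
    return abs(synsets[0].min_depth() - BASIC_DEPTH)

for word in ["animal", "cat", "Abyssinian"]:
    print(word, distance_to_basic(word))
```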
Also, adjectives and adverbs should boost 'imaginability', but only if they are common enough for the majority of people to understand; otherwise they would hinder it. So count your adjectives and adverbs and check where they rank in a reference word list. Add to your text's 'imaginability' score if they rank high, subtract if they rank low.
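One way that counting could look, using NLTK's tagger for the adjective/adverb counts and the wordfreq package as the reference word list; the +1/−1 adjustment and the Zipf cut-off of 4.0 are arbitrary placeholders you would want to tune:

```python
# Sketch: adjust an imaginability score by adjective/adverb frequency.
# Requires: pip install nltk wordfreq;
#           nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
# The +1/-1 adjustment and the Zipf cut-off of 4.0 are illustrative assumptions.
from nltk import pos_tag, word_tokenize
from wordfreq import zipf_frequency

def modifier_adjustment(text, cutoff=4.0):
    adjustment = 0
    for word, tag in pos_tag(word_tokenize(text)):
        if tag.startswith("JJ") or tag.startswith("RB"):  # adjectives and adverbs
            freq = zipf_frequency(word.lower(), "en")
            adjustment += 1 if freq >= cutoff else -1  # common boosts, rare hinders
    return adjustment

print(modifier_adjustment("The fluffy cat walked slowly across the old carpet"))
print(modifier_adjustment("The recalcitrant feline perambulated languorously"))
```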
Thanks again! The example of Abyssinian cats raises my concern about how to actually define imaginability: can a text be inherently imaginable, or does it depend upon the knowledge of the reader? I will look further into your suggestions.
To describe my proposed research further, I want to look at the writer’s perception of their own writing’s imaginability vs. its imaginability as perceived by a reader.
I have a feeling I may need to generate a new instrument, using some of your suggestions as starting points.