Referential and model-theoretic semantics has wide applications in linguistics, cognitive science, philosophy, and many other areas. These formal systems incorporate the notion, first introduced by Gottlob Frege, the father of analytic philosophy, more than a century ago, that words correspond to things. The term ‘2’ denotes or refers to the number two. The name ‘Peter’ refers to Peter, the general term ‘water’ refers to H2O, and so on. This simple idea later enabled Alfred Tarski to reintroduce the notion of ‘truth’ into formal logic in a precise way, after it had been driven out by the logical positivists. Willard Van Orman Quine, one of the most important analytic philosophers of the last century, devoted much of his career to understanding this notion. Reference is central to the work of Saul Kripke, David Lewis, Hilary Putnam, and many others.
Furthermore, the idea of a correspondence between whole expressions (sentences or propositions) and states of the world (facts) drives recent developments in philosophy of language and metaphysics under the labels of ‘Grounding’ and ‘Truthmaking’, where a state of the world or a fact is taken to “make true” a sentence or a proposition. For example, the sentence “Snow is white.” is made true by (or is grounded in) the fact that snow is white. [1]
Given that this humble notion is of such importance to contemporary analytic philosophy, one may wonder why Noam Chomsky, the father of modern linguistics and a driving force in the field ever since the (second) cognitive revolution of the nineteen-fifties, has argued for decades that natural language has no reference. Sure, we use words to refer to things, but usage is an action. Actions involve things like intentions, beliefs, desires, etc. And thus, actions are vastly more complicated than the semantic notion of reference suggests. On Chomsky’s view, then, natural language might not have semantics, but only syntax and pragmatics.
On Chomsky’s account, syntax is a formal representation of physically realized processes in the mind-brain of an organism. This allows him to explain why semantics yields such robust results (a fact that he now acknowledges): what we call ‘semantics’ is in fact a formal representation of physically realized processes in the mind-brain of an organism, namely us. [2]
Chomsky has argued for this for a very long time and, according to him, to no avail. In fact, I only found philosophers discussing this long after I had learned about his work. No one in a department that leans heavily toward philosophy of language, metaphysics, and logic ever mentioned Chomsky’s views on this core notion to us students. To be fair, some in the field seem to be starting to pay attention. For instance, Kit Fine, one of the leading figures in contemporary metaphysics, addresses Chomsky’s view in a recent article (and rejects it). [3]
The main reason why I open this thread is that I recently came across an article that provides strong independent support for Chomsky’s position. In their article “Fitness Beats Truth in the Evolution of Perception,” Chetan Prakash et al. use evolutionary game theory to show that the likelihood that higher organisms have evolved to see the world as it is (to have veridical perception) is exceedingly small. [4]
Evolutionary game theory takes the formalism originally developed by John von Neumann to analyze economic behavior and applies it in the context of natural selection. An evolutionary game is thus a game in which at least two types of organisms compete over the same resources. By comparing different possible strategies, one can compute the likelihood of a stable equilibrium. [5]
Prakash et al. apply this framework to the evolution of perception. Simplifying a bit, we can take a veridical perception to be a perceptual state x of an organism such that x corresponds to some world state w. Suppose there are two strategies: one where the organism estimates the world state that is most likely to be the true state of the world, and another where the organism estimates which perceptual state yields the highest fitness. Then the first strategy is consistently driven to extinction.
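The dynamics behind this kind of result can be illustrated with a toy discrete replicator model. The payoff values and the “Truth”/“Fitness” labels below are arbitrary assumptions for illustration, not the actual model or payoffs of Prakash et al.; the point is only that even a modest constant payoff advantage drives the other strategy’s population share toward zero.

```python
# Toy replicator dynamics: a "Truth" strategy (perceive the most likely
# world state) vs. a "Fitness" strategy (perceive whatever maximizes payoff).
# Payoff values are illustrative assumptions, not the model of Prakash et al.

def replicator_step(x_truth, payoff_truth, payoff_fitness):
    """One discrete replicator update of the Truth strategy's population share."""
    avg_payoff = x_truth * payoff_truth + (1 - x_truth) * payoff_fitness
    return x_truth * payoff_truth / avg_payoff

def run(x0=0.99, payoff_truth=1.0, payoff_fitness=1.2, steps=200):
    """Iterate the replicator update and return the final Truth share."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff_truth, payoff_fitness)
    return x

if __name__ == "__main__":
    # Even starting at 99% Truth-perceivers, a 20% payoff edge for the
    # Fitness strategy drives the Truth strategy toward extinction.
    print(f"Truth share after 200 generations: {run():.2e}")
```

Reversing the payoffs reverses the outcome, which is why everything hinges on whether veridicality actually pays, the very claim Prakash et al. argue against.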
Now compare this with reference: some word (here taken to be a mental state) refers to a thing or a state of the world such that there is a one-to-one correspondence between the word and the world. This seems to be an analogous situation, and thus it should be equally unlikely that we have evolved to have reference in natural language. Any such claim needs empirical evidence, and this is what Chomsky provides.
Chomsky’s main evidence comes from a test, which I will frame in terms of truthmaking. Consider the basic idea again: a fact is taken to make a sentence or a proposition true.
Now, if this is true, then one would expect the meaning of a sentence A to change because the world changes. We take a fact to be something that our best scientific theories can identify. In other words, we take objective reality to be whatever science tells us it is. Then we systematically vary physically identifiable aspects of the world and see how the meaning of a term that is supposed to pick out these aspects changes. The hypothesis is that if there is reference or correspondence, then changes on one side should be correlated with changes on the other. If this is not the case, then there is no one-to-one correspondence between words and things, and thus natural language is not related to the physical world.
I give three examples, often discussed by Chomsky, to illustrate how this works. Consider the term ‘water’, embedded in the sentence “The water flows in the river.” Then what flows in the river should be H2O. Suppose there is a chemical plant upstream, and suppose there is an accident. There may be very few H2O molecules left, but it is still a river, and it is still water. So we have an enormous change in the world, but no change in meaning.
Or suppose you put a teabag into a cup of water. The chemical change may be undetectably small, but if you order tea and get water, you would not be amused. So there is virtually no change in the physical world, but a clear change in meaning.
Last, consider a standard fairy-tale plot. The evil witch turns the handsome prince into a frog, the story continues, and at the end the beautiful princess kisses the frog and turns him back into the prince. Any child knows that the frog was the prince all along. All physical properties have changed, but no child has any difficulty tracking the prince. This suggests that object permanence does not depend on the physical world, but on our mind-internal processes.
This test has been carried out for a large number of simple concepts; in all cases, there is no correlation between physically identifiable aspects of the world and words. Notice that the test uses a dynamic approach: only if we look at changes do we see what is going on.
So, counterintuitive as this may seem, the evidence from the test supports the argument from evolutionary biology that developing concepts that correspond to the world confers no advantage at all. And so we should not be surprised that this is what we find once we look closely.
On the other hand, does this conclusively prove that there is no relation between our concepts and the physical world? Not really; after all, the logical structure of language is there. But it does suggest that, if we want to show that language has reference in the technical sense, we should look to the mind for a connection between words and the world.
Sven Beecken
Dear Sven Beecken I think the answer to this question of yours (" which one is more likely to survive, the one who perceives the cube or the one who perceives the cuboid? ") is that we can never know beforehand (a priori). As I replied to you earlier, I think evolutionary scientists (if they can be called real scientists) can only justify a posteriori, and after a very long time.
Humans are supposed to be one of the fittest organisms, but look, they are driving themselves extinct with remarkable ease. Dinosaurs, mammoths, saber-toothed tigers, etc. were among the fittest (otherwise they would not have grown and developed such sophistication), yet all lost the competition to many others, including less sophisticated and more sophisticated ones.
This can be seen, IMO, only after the event (extinction) has happened. Until the point of extinction, we can never know or tell which organism is fitter.
=============
Dear Sven, I agree that " If we cannot say that it is not possible for an organism to survive, we cannot exclude the possibility that the perceptual states can be uncorrelated with the world states. "
IMO it is not only possible, but also probable. After all, schizophrenics (whose perceptual states are not much correlated with the world states) have not gone extinct. In some cultures or societies, or even in some countries, they are praised as leaders or shamans or rulers who are quite confident in what they "see".
==============
Dear Sven Beecken something rather interesting came to my mind. If the organism does not perceive the cube as a cube but perceives it as a cuboid, then it seems to be Not veridical. But if the organism sees the same cube as the same cuboid every time (in a geometrically predictable way), its perception would be veridical. To be Not veridical, the organism needs to perceive the cube as something different each time (to see the cube once as a cuboid, once as a triangle, once as a rose, once as an exact cube, once something else). If each time it perceives an object incorrectly, but exactly in the same incorrect way, after some time the organism would learn to match that incorrect perception with the correct world (and correct the distorted signals coming from the senses). For example, the image on our retina is upside down and also distorted, but we perceive it as normal; this is because we are receiving the image from the retina upside down and distorted always and in the exact same way.
So to be not veridical, I think an organism needs to be perceptually inconsistent (each time seeing a different thing); if the organism makes the same mistake every time in exactly the same way, its qualia-making nodes / modules / systems would adapt to the distortion and correct it for the organism, according to other senses. A fly has many lenses; we don’t know what it sees, but I guess it perhaps sees only one world and not many worlds overlapped beside each other (of course this is merely a guess, generalizing from humans and myself to a fly). This is because the image distortion caused by its many lenses is always the same.
=============
Dear Joachim, Thanks for your point, but I think you misunderstood me. I did not say that schizophrenics are exemplary and representative of humans. In the context of my whole post to Sven (and his earlier post to me), I said “not only is it possible for them to survive [any chance other than zero for survival, as Sven suggested], but they also have a great chance (hence it is probable for them) to **survive**, though not to dominate the population.” Even if one schizophrenic survives among the whole 8 billion people and passes on his/her genes, the possibility to survive exists.
“To have the possibility (or probability) of survival” differs from “to have the possibility or probability of necessarily winning the competition”. Survival means necessarily not losing [not dying] (and/or even winning as a bonus) IMO. The important part of survival is to remain in the gene pool. And schizophrenics do remain in the gene pool. I never claimed that they dominate the gene pool, but said that there is a possibility and probability for those cases of non-veridicality to continue their existence without extinction.
I never said anything implying that “perceptual states are not much correlated with the world states because schizophrenics are not extinct”. I said it is possible and even probable for at least one person having perceptual states that do not correlate with the world states to survive. For the whole context, I think we should read the two or three comments by Sven and me before this schizophrenia one.
The context was not about the reasons for the success of humans, or the dominance of veridicality over non-veridicality in evolution. The context was about the possibility that some non-veridical cases can survive as well.
As a matter of fact, I had written earlier about my view that I agree with you that veridical cases should be more successful. (if I recall correctly – sorry I haven’t slept in weeks – 3 to 6 hours daily – no memory remains lol)
============
Dear Joachim, thank you for your nice explanation. You are right, I should have considered the whole population, and not single cases, when talking about veridicality.
============
Dear Sven, I think these two are the same: "Organism learns to adapt to the new (distorted) sensory signal and perceive it in a normal way" and " The other is that the organism perceives the object in exactly the same way each time, then it may eventually settle on an incorrect representation ".
At least I was saying the former in the latter sense. I mean there is no need for correction of the distorted signal. And we don't have any clue that our qualia is not distorted the way the image on our retinas is distorted. We just think our qualia is clear cut and sharp and normally sided (not upside down), but in actuality, there is no reference for the up or down, etc. So as long as we do our tasks accurately, what we see does not even matter. And if we always see the same incorrect image in the same incorrect way, then we can use that very incorrect and distorted image to base our calculations on it, or use it for painting the scenery. This is because we have 100% consistency and predictability of our (distorted) sense. So despite its distortion, it is still accurate.
I couldn't understand why we should have a one-to-one correspondence between the world items and words. Words are not completely made of world objects. Mind does matter. There is grouping and gestalting (for example in the case of water+ dry tea -> converted to (fluid) tea). There is interpreting. There is sarcasm. There is imagination and fantasy (items that cannot exist in the real world but exist in the mind). So why should we assume that there needs to be a 1-to-1 correspondence, and if there was not such a correspondence, then something is wrong?
===========
Dear Sven
I am not a philosopher and thus can only contribute to your professional talks occasionally, when I understand something or have a question in mind. But I see no harm in opening a new thread dedicated to this topic. :) I am interested, but in the way a child can be interested in a jet fighter (a very superficial interest towards a very complicated thing the child knows almost nothing about).
Regarding your question: " consider two organisms that receive the same stimulus, if it is possible that they have two representational states and if it is possible that the states are incompatible, which organism would perceive the world accurately? If this cannot be decided, we cannot be certain that we perceive the world accurately. "
Let's say that both of them look at a cube and one perceives a cube but the other perceives a cuboid. Let's say that the one that perceives a cuboid does so because of geometrical distortions somewhere along the way (in the eye, in the brain, or between the eye and cortex). If these geometrical distortions are constant and predictable for all things and all times, the organism should see everything in the same distorted fashion.
So in this case, both organisms would IMO perceive the world accurately.
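This "constant and predictable distortion" point can be made precise: if the distortion is a fixed invertible function, no information about the world is lost, so the consistently distorted perceiver can in principle discriminate and act as accurately as an undistorted one. A minimal sketch, where the particular shear-and-scale transform is an arbitrary assumption chosen only for illustration:

```python
# Sketch: a fixed, invertible "perceptual distortion" loses no information.
# The particular shear-and-scale transform is an arbitrary illustrative choice.

def distort(point):
    """A fixed distortion: every cube is always seen as the same skewed shape."""
    x, y, z = point
    return (x + 0.5 * y, y, 1.3 * z)

def undistort(point):
    """Exact inverse of the fixed distortion, which exists because it is injective."""
    px, py, pz = point
    return (px - 0.5 * py, py, pz / 1.3)

# Distinct world states remain distinct percepts, so the organism can
# still tell a cube from a sphere, just through a consistently "wrong" lens.
corner = (1.0, 1.0, 1.0)
recovered = undistort(distort(corner))
```

The upside-down retinal image works the same way: a fixed, reversible transformation that the visual system simply compensates for.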
Now let's say the second organism looks at the cube but sees a cuboid that changes into a circle and after some time changes into a triangle and then changes into a cube and then to a star etc.... In such a case, the second organism cannot have an accurate perception.
In any case, none of these organisms has any way (IMO) to subjectively verify the accuracy of its perception, unless (1) it has more than one sense and (2) only one or some of its perceptions are distorted and some others are not (or the other ones are distorted in another way). In this case, the organism can verify its different perceptions against each other. For example, if it perceives a cuboid instead of a cube, it can use its sense of touch to verify whether it is a cube or a sphere or a cuboid, if its tactile sense is intact or at least not affected by the same distortion that has affected its eyes.
Now, even if the organism has more than one sense, it still cannot be sure which sense or which group of senses is right (accurate). But in the case of inconsistency between senses, it can understand that at least something is wrong and some of its senses are not perceiving the world accurately.
=============
Dear Joachim
Thanks a lot for agreeing with me. :) I further agree with you on the lack of any reference for the up or down, and other points of yours (intersubjective verification, etc).
==============
Dear Larry
Very interesting. Thank you. You basically killed the objective reality, the god's eye view, and Popperian science foundations! lol :) And you did it effectively. Not as if I was a very skeptic person before, now I can't even cling to the objective reality either lol.
What is used here is a judgment about the truth conditions for certain scenarios. These judgments may vary from person to person (or not) and may vary with change of context (or not). In order to get somewhere, we need to control the situation and only allow for specified variables to change.
So, the context is given by the scenario, and then you (or anyone) are asked what your judgments are about the truth conditions in this scenario (not in another imaginable one). So, given that a river is highly polluted, is it still a river? And does a river contain water?
Similarly, if you ask for tea, and you get water, would you say that it is true that you got tea?
My earlier posts on this:
Hi Sven, I have not read the main articles, but based on your description of their theory, I think the authors may have ignored neuroscientific evidence when saying " As the size of X increases (more possible perceptions), the likelihood that fitness strictly dominates truth goes to 1 ".
In reality and in healthy people, the size of X never increases beyond a limit. It is strictly limited by two interwoven mechanisms: attention and working memory. So regardless of how many objects are in the room, we can attend to a small (and rather fixed) number of them at any time, and keep even fewer of them in mind (up to merely 4±2 chunks).
So one might be able to say that at the center of active attention, truth beats fitness. However, out of that rather small focal domain fitness beats truth.
[Of course, fitness still plays an important role in the attentional span too; and truth is still a strong element of what our brain creates (or imagines) outside the attentional span --but in the center of attention truth is more important, and vice versa].
And the above was for healthy people. In schizophrenia, fitness may beat truth even at the center of attention, because they don't have fully functional and healthy attentional and working memory systems, and instead they have quite strong systems for imagination.
My two cent, sorry in advance if I didn't get the gist of what you said lol. :)
============
I see now. Of course, evolutionary fitness might increase when brain power increases, and that is something totally different from what I had in mind. I said might, because what increases fitness is not quite predictable. It depends on numerous out-of-control factors. Right now, I think, the evolutionary fitness of the human brain is lower than the evolutionary fitness of a cockroach brain! At least cockroaches don't cause fatal climate change with the products of their brains, and hence they seem to be more fit, evolutionarily speaking! (kidding)
=============
Dear Sven (and Dear Joachim)
Thanks for elaborating on the evolutionary game and the other points. I have read your and Joachim's comments a few times and should re-read them to digest them.
I think I too would reach a deduction similar to Joachim's: that in reality, the fittest strategy would be to have accurate (veridical?) perceptions. So fitness seems to overlap almost completely with perceptual veridicality in reality. The thing about evolutionary games or theories is that (1) they are stated post hoc, i.e., once one reaches the desired justification, (2) they have a long-term perspective, and (3) they consider very large populations as one single thing.
It is possible that the same evolutionary game could be run with slightly altered parameters, and the result would change in favor of perceptual veridicality (or whatever parameter is being called veridicality by those authors).
It is also possible, for example, that 10,000 billion microbes (incorrectly and unknowingly) come into contact with an antibiotic. Only 1 bacterium will survive out of those 10,000 billion, but that 1 bacterium will multiply into 10,000 bacteria that are now resistant to that particular antibiotic. Now, (1) if we choose a prospective view, we would say that what those bacteria are about to do is suicidal. (2) If we choose a short-term retrospective span of assessment, we would say that what those bacteria did was not fit (because it was lethal for almost all of them except 1) and also not accurate (veridical?) (because those bacteria could not correctly detect the dangerous and bad taste of the toxin).
However, (3) if we choose a long-term evolutionary retrospective view, the same suicidal act of the bacteria would be "fittest", because it gave rise to a new sub-type of bacteria that can resist that antibiotic and therefore survive much more easily.
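The bacterial example above can be put into a few lines of arithmetic. The numbers (10,000 billion microbes, a single resistant survivor, a 10,000-fold regrowth) are the hypothetical ones from the comment, not empirical data:

```python
# Illustrative sketch of the antibiotic scenario above. All numbers are
# the hypothetical ones from the discussion, not empirical data.

def exposure(population, resistant_survivors=1):
    """Short-term view: the antibiotic is lethal for all but the resistant few."""
    return min(population, resistant_survivors)

def regrow(survivors, factor=10_000):
    """Long-term view: the resistant survivors multiply into a resistant strain."""
    return survivors * factor

initial = 10_000 * 10**9                  # 10,000 billion microbes
after_exposure = exposure(initial)        # 1 survivor: looks maximally "unfit"
after_regrowth = regrow(after_exposure)   # 10,000 resistant bacteria

# Judged prospectively or short-term, fitness is essentially zero ...
short_term_fitness = after_exposure / initial
# ... yet retrospectively, the lineage that took the "suicidal" path is
# the only one represented in the future (antibiotic-exposed) population.
```

The point of the sketch is just that the fitness verdict flips depending on the span of assessment, which is why the choice between views (1), (2), and (3) matters so much.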
==============
You are right, Larry. I too was saying that this retrospective long-term evolutionary view is not a very valid one (without saying anything about it being or not being selfish). I think Joachim and Sven also agreed with me. It was in response to a short discussion between the three of us here, some 30-60 posts earlier. I suggest you read the whole discussion first (started by Sven), if you are interested, so that you know the whole context. I guess it might be of interest to you.
Dear Sven,
there is A LOT to unpack in your question (it's probably the longest introductory text to any question I've ever seen on RG). The way you are framing it is also quite different from what we discussed on the "What is consciousness ... its physical processes?" thread. Back there, we only discussed Prakash et al.'s claims regarding veridicality and fitness, and I never even considered that it could lead to the question whether semantics is in fact syntax. Now, I'm sure much meatier stuff could be derived from Chomsky about the relationship(s) between syntax and semantics, but that would require starting from scratch by evaluating Chomsky's arguments (and whether those are valid is a completely different question - one that I assume you have a personal interest in). The basic question, then, is how to connect this new discussion to the one we've been leading:
If semantics depends on the truth-capability of the entities that have semantics (not all representations have them [e.g. most art is representational, but has no truth-conditions], but e.g. statements and claims do, so we should keep that in mind as potential examples), then indeed the fact that our perception lacks truth (in some qualified sense) seems to undermine its having semantics.
However: Your initial question "Is Semantics in Fact Syntax?" is prima facie not a question about whether there is semantics, but whether semantics is reducible to syntax. Hence, it seems the argument that perception is not veridical does not connect well to the question whether semantics is syntax (Prakash et al.'s argument, if it is correct, would not lead to a reduction of semantics to syntax, but to doing away with the first and perhaps even the second).
At worst, under some substantial assumption (said connection between semantics and veridicality), there is no perceptual semantics if Prakash et al. are right (which I have not yet seen a reason to believe).
So, I'd rather we keep these separate until we know more: We either debate whether the argument by Prakash et al. even makes sense (as we did on the "What is consciousness ... its physical processes?" thread), or we discuss whether and potentially how semantics could be reduced to syntax. Or do you have a more pressing reason for discussing both at once?
Best,
Joachim
Dear Joachim, you are right, and I was just about to post a reply addressing some of your points. I think it is best to post it now as it is and address your comment later. As ever, thank you for your contributions!
Best,
Sven
Dear Vahid Rakhshan , Joachim Lipski and readers,
Before I continue with our discussion, let me first say, for people who are not familiar with it, that this is a continuation of a different thread (I link it below, but I fear that if you are interested, you will have to scroll back quite far). The debate was about one aspect of the question here, i.e., the argument from evolutionary biology against the veridicality of perception.
I also apologize to Vahid and Joachim if the way I have set this thread up is not what they may have expected. My apologies! The reason why I was interested in the argument against the veridicality of perception in the first place is that it is analogous to the word-world relation. As I tried to explain in the exposition, whether or not this is really so is a crucial step. So, I think we can continue with the discussion of perception (and since Joachim brought in the notion of truth-evaluable content, I briefly discuss it at the end). Thankfully, Vahid has posted his contributions. I include (most of) his and Joachim's latest contributions in this reply.
Second, let me briefly restate the problem as Prakash et al. present it, so we are on the same page.
Prakash et al. look at types of organisms as they evolve over generations, not at changes in individual organisms. What they target is the claim that a particular organism perceives the objective world as it is. Note that this is not a metaphysical view, but a naturalistic one. There is an external world and we, as organisms, are part of it. We take the objective world to be what our best theories tell us it is.
It is here where I think Prakash et al. went wrong: we do not need to engage in any further considerations about questions of realism with regard to the objective world. We can simply stop at the naturalistic view and take the results of science for granted. Prakash et al. accept the latter, but seemingly reject the former. The advantage of the naturalistic view is that no "God's-eye" perspective is required; we just look at organisms in their environment with the epistemic apparatus that the natural sciences give us. We just happen to be one of the organisms thus looked at.
Vahid wrote:
“Regarding your question: "consider two organisms that receive the same stimulus, if it is possible that they have two representational states and if it is possible that the states are incompatible, which organism would perceive the world accurately? If this cannot be decided, we cannot be certain that we perceive the world accurately."
Let's say that both of them look at a cube and one perceives a cube but the other perceives a cuboid. Let's say that the one that perceives a cuboid does so because of geometrical distortions somewhere along the way (in the eye, in the brain, or between the eye and cortex). If these geometrical distortions are constant and predictable for all things and all times, the organism should see everything in the same distorted fashion.
So in this case, both organisms would IMO perceive the world accurately.
Now let's say the second organism looks at the cube but sees a cuboid that changes into a circle and after some time changes into a triangle and then changes into a cube and then to a star etc.... In such a case, the second organism cannot have an accurate perception.
In any case, neither of these organisms has any way (IMO) to subjectively verify the accuracy of its perception, unless (1) it has more than one sense and (2) only some of its perceptions are distorted while others are not (or the others are distorted in a different way). In this case, the organism can verify its different perceptions against each other. For example, it perceives a cuboid instead of a cube; it can use its sense of touch to verify whether it is a cube or a sphere or a cuboid, if its tactile sense is intact or at least not affected by the same distortion that has affected its eyes.
Now, even if the organism has more than one sense, it still cannot be so sure which sense or which group of senses is accurate. But in the case of inconsistency between senses, it can understand that at least something is wrong and some of its senses are not perceiving the world accurately.”
As for the first case, consider it in terms of truth-evaluable content (essentially another way to spell out the word-world relation). If x is a cuboid, not a cube, for organism A, and x is a cube, not a cuboid, for organism B, then we have a contradiction. This is essentially Prakash's and Hoffman's thesis. So, accuracy really means accurate-for-a-particular-organism, which seems to be consistent with the second case.
Now, where I'm a bit puzzled is what you mean by 'verification of the accuracy'. Given the way you used it, verification doesn't make much sense in the normal case, since we allow the world to be contradictory anyway. I would agree with you that in the special cases (something changes every time I look at it, or I see a sphere but feel a cube) verification makes sense, but not really in the normal case.
Now, we can make sense of the notion of 'verification' if we use our epistemic capacities: we look at two types of organisms, say us and some bat species, and some physically identifiable aspect of the world. We are then in a position to formulate the argument from evolutionary biology, with the result that both types can have inconsistent interpretations, and it can be that neither interpretation is consistent with the physically identifiable aspect of the world, in the sense that A sees a cube, B a cuboid, and the state of the world is a pyramid.
This also seems to apply to Joachim’s take on your points.
“Regarding your recent exchange, I must admit I am completely on Vahid‘s side, except I would go even further: if an external object consistently triggers in us a representation, then I do not even see what possible criteria we could have for judging the representation to be distorted (etc.). Of course, the projection on our retina may be upside down, distorted, etc.; but we do not „see“ it as such - we do not „see“ the projection in the same way we do not „see“ the neural processing underlying our visual perception. It is just as Vahid says: when external object X impinges on our senses, and Y is the result, then we see Y as X, no matter the „visual“ properties of the physical representations „in our head“. This is why I judge it to be extremely difficult for perceptions to not be veridical - the best chance, again as Vahid says, would be if they were inconsistent. Accuracy would to a large degree be a matter of intersubjective exchange.”
That’s exactly the point. The possibility of inconsistency, both between the perceptions of two types of organisms and with the physically identifiable aspects of the world, makes the notion of ‘accuracy’ or ‘veridicality’ a much weaker notion than intended. It seems we can have inconsistencies that are undetectable given only our senses (and the fact that we can’t talk to other species), but that are in principle detectable with our epistemic apparatus. After all, science tells us that the world is very different from what we perceive. This gives credence to the not-so-trivial claim that most of what we perceive is actually due to our internal systems, not the physical world.
As for your take on truth evaluability and veridicality:
“(if I remember correctly, there is a passage early in Quine’s Word and Object, where he talks about a rectangular table causing entirely different projections on the retinas of each person standing around it, and only through their intersubjectively treating their perceptions as being of the same table the „objective image“ of the table as a rectangle is created). The representations could of course, as in the case of said frogs, just be triggered by such a wide variety of external objects that their content would be highly dubitable - but that would impede their truth-evaluability, which is itself a prerequisite for judging veridicality, so we wouldn‘t even arrive at judgments about veridicality in the first place.”
It seems to me that truth-evaluability suffers from the same problem, as I tried to indicate above. Truth-evaluability may precede judgments about veridicality, but truth-evaluability itself may be the result of (mostly) internal processes, in the sense that the answer to my question here would be: yes, semantics is in fact syntax. Since I don’t suppose that this is what you want to say, one would need a way to stop the argument against veridicality, either entirely or at least in its applicability to truth evaluation.
Since I need to think about your further points some more, I stop here for now.
Best,
Sven
https://www.researchgate.net/post/What_is_consciousness_its_physical_processes
Dear Sven,
thank you for your explanation, and for agreeing to break the discussion down a bit the way I suggested.
So, in order to make sense of Prakash et al., and to take a first step in evaluating their claim in a more detailed way, I would like to pose the question: How do they conceive of "truth" naturalistically?
For example, as in your statement "I see a sphere, but feel a cube", both the terms "sphere" and "cube" each mean an object of a representation, so the statement is only a naturalistic one if you supply a naturalistic rephrasing of the entire statement. One naive first attempt could be: When external object X impinges upon my retina, my visual system is excited in the way it would if X were a sphere, whereas when I touch X, my somatosensory cortex is excited in the way it would be if X were a cube.
What is your take on this? What is Prakash et al.'s take?
Best,
Joachim
Dear Joachim and readers,
Before I try to give an answer, let me briefly address your point on the role of formal proofs. You wrote: “I appreciate and value formal proofs, but I do not specialize in them, and have little interest in carrying them out myself. What I care about is the interpretation of (a) the axioms going into the math and (b) the results coming out of it.” I completely agree with you, and I would add (probably superfluously) that in an empirical context, the math alone is never sufficient. One needs evidence. Having said that, I basically follow Chomsky’s dictum that one can formalize things to the degree that one understands them.
Now, as a second preliminary point, let me address the notion ‘naturalistic’, so we are on the same page (the term is not exactly a well-defined one, after all). What I had in mind is the way neuroscientists such as Randy Gallistel and Adam King study, say, insect navigation. They take the brain to be a representing system (or a collection of such systems) and the state of the world to be a represented system. In other words, they assume that the brain contains symbols and carries out manipulations on these symbols. The two systems, together with the mapping between them, are said to constitute a representation iff i) the mapping is causal, ii) the mapping is structure-preserving (the systems “mirror” each other, i.e. the mapping is homomorphic), and iii) the symbolic operations in the representing system are at least sometimes behaviorally efficacious.
This seems to fit with your example:
“For example, as in your statement "I see a sphere, but feel a cube", both the terms "sphere" and "cube" each mean an object of a representation, so the statement is only a naturalistic one if you supply a naturalistic rephrasing of the entire statement. One naive first attempt could be: When external object X impinges upon my retina, my visual system is excited in the way it would if X were a sphere, whereas when I touch X, my somatosensory cortex is excited in the way it would be if X were a cube.”
So, we have two representing systems, the visual system and the somatosensory system. There is a causal connection, i.e. the signals from X that excite both systems. Now, let us assume that the homomorphism holds for either system; then the representing systems are inconsistent with one another. What this means in behavioral terms, I don’t know (probably some sort of cognitive dissonance, but this is just speculation on my part).
It seems to me that this setup satisfies your condition for a naturalistic statement. If not, please correct me.
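To make the three conditions a bit more concrete, here is a minimal Python sketch of condition ii) alone. It is my own toy construction, not Gallistel and King's formalism: all names (`world_op`, `encode`, `symbol_op`, `is_homomorphism`) are invented for illustration, and the "world" is just modular addition on a handful of states.

```python
# A toy "represented system" (world states with a combination rule) and
# "representing system" (symbols in the organism with its own rule).
# All names here are invented for illustration.

world_states = [0, 1, 2, 3, 4, 5, 6]

def world_op(a, b):
    """Combination rule on the world side (toy modular addition)."""
    return (a + b) % 7

def encode(w):
    """Condition i): the causal mapping from world states to symbols."""
    return "s%d" % w

def symbol_op(x, y):
    """The organism's own combination rule, operating on symbols."""
    return "s%d" % ((int(x[1:]) + int(y[1:])) % 7)

def is_homomorphism(states, f, op_w, op_r):
    """Condition ii): f(op_w(a, b)) == op_r(f(a), f(b)) for all pairs."""
    return all(f(op_w(a, b)) == op_r(f(a), f(b))
               for a in states for b in states)

print(is_homomorphism(world_states, encode, world_op, symbol_op))  # True
```

The check makes visible that structure preservation is a property of the mapping together with both combination rules: change either rule and the homomorphism can fail even though the causal mapping of condition i) stays intact.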
As for Prakash’s and Hoffman’s take on the notion of ‘truth’ within their framework, it is not too clear (or at least not to me). After all, the paper is primarily a mathematics paper. So, I looked at Donald Hoffman’s paper The Interface Theory of Perception. Unfortunately, the theme of ambiguity in the informal exposition continues. Sometimes, he seems to take truth just to mean “true”, i.e. veridical perception. Sometimes, he seems to talk about other mental states. For instance, when he quotes Steven Pinker, who says that “Members of our species commonly believe, among other things, that objects are naturally at rest unless pushed, that a severed tetherball will fly off in a spiral trajectory, that a bright young activist is more likely to be a feminist bank teller than a bank teller […]. The idea that our minds are designed for truth does not sit well with such facts.”
What is the case is that his formal proposal for what he calls a “conscious agent” (CA) does seem to allow for a wider range of mental states than only perceptual states. For Hoffman, a CA is an entity that has perceptions, can make decisions, and acts. The way he spells this out is essentially equivalent to universal Turing machines, and since Turing machines can represent all currently known physical processes, the mental states that a CA can represent can include whatever states correspond to truth-evaluable content. (Unless, of course, Penrose is right and what is going on in the mind-brain involves currently unknown laws of nature.)
As for my own take on the notion of “truth” in a naturalistic context, I think it is an unsolved problem. If we take Chomsky’s current theory, then words are symbols in the mind-brain that are combined into expressions which in turn are interpreted at two interfaces. One interface is responsible for internal semantic interpretation and the other is responsible for externalization by interactions with sensory-motor systems.
So, if we take the semantic interpretation to constitute a mental object that is handed over to other cognitive systems, then it seems that Chomsky’s theory cannot tell us what exactly distinguishes a mental object that has the property of being true from one that has the property of being false. (Let alone what this means in behavioral terms.)
If we look at the philosophical side of things, then reference is given by condition ii): a homomorphism between the manipulations on the mental object and the way objects behave in the physical world (or an isomorphism in the case of grounding, since the relation is usually taken to be asymmetric).
What Chomsky criticizes is that this is merely a stipulation. What is needed is an account of how the internal semantic interface connects to the physical world.
What Prakash and Hoffman seem to say is that condition ii) does not hold, and their argument seems to be general enough to give independent support to Chomsky’s claim. But I think I’m getting ahead of myself. So, I stop here.
Last but not least, let me say that I’m extremely grateful for your willingness to discuss these issues. You are right, I do have personal interest in this and I have been thinking about these issues for quite some time. So, please stop me at any point, I need resistance.
Best,
Sven
Gallistel, Charles R., and Adam Philip King. Memory and the computational brain: Why cognitive science will transform neuroscience. Vol. 6. John Wiley & Sons, 2011.
https://www.researchgate.net/publication/324271736_The_Interface_Theory_of_Perception
Dear Sven,
thank you very much for explaining. Here are my thoughts on the points you raise:
In any case, I'm happy to discuss these issues with you. I am not exceedingly interested in Chomsky's cognitive theories (I appreciate him more as a political figure ;p), but that's mainly because of the "problem" I indicated above.
Best,
Joachim
Dear Sven Beecken,
I think your question "Is Semantics in Fact Syntax?" is particularly interesting for linguists.
Lexemes in themselves are just meant to refer to notions. Some of them can be used as such to refer to notions, such as 'courage', 'fear', 'love', 'anger' when 'courage' or 'fear' or 'anger' or 'love' are the topics being discussed. Lexemes are said to be arbitrary in so far as 'love' in English refers to the same notion as 'amour' in French and obviously they have nothing in common as lexical units, except their acquired ability to refer to the same phenomenon, /L.O.V.E/. In this case, you could say that semantics is not syntax.
But whenever the referent requires a determiner ('my father', 'this issue'), or when reference (1/ a state: 'my neighbour is a wonderful woman'; 2/ an event: 'she had an accident this morning') requires some form of determination to acquire its referential power, you can say that meaning builds up through the use of morpho-syntactic features.
The notion referred to by the lexical unit 'neighbour' can be defined and understood out of context.
But in order to represent a referent ('my neighbour'), you definitely need to add determination (cf. 'my'). In itself, 'my' as a determiner refers to the relation that must be expressed whenever the notion 'neighbour' is used to represent a referent (a neighbour is 'somebody's' neighbour, i.e. a neighbour acquires referential value only if the relation is specified). The relation between me, expressed by 'my', and 'neighbour' follows syntactic rules. And you can say that in this case the noun phrase 'my neighbour' acquires meaning through syntax.
The same can be said about the predicative and enunciative relation used to refer to the state 'my neighbour is a wonderful woman', or to the event 'she had an accident this morning'. These acquire referential value, as reference to a state of things or to an event, through the use of verbs (which have nodal, modal and predicative functions) and through verbal determination (tense, aspect, modality and complementation), which serves both as verbal determination and as the determination of utterances (given the central function of verbs in utterances).
Here again, you can remark that meaning is built up through the syntactic relations contained in these utterances.
In conclusion, I would say that syntax is always a contribution to meaning, and that meaning depends most of the time on the syntax of a context.
Reference to notions (strict semantics), however, can be made through the use of lexemes.
And there are a number of utterances whose syntax is minimalist, such as interjections or apostrophes ('hey!' / 'Peter!') which are immediately related to the enunciative situation.
Best regards,
Jean-Marie Merle
Dear Joachim,
Thank you for your thoughts. Just to clarify, I referred to Chomsky's theory as part of an answer to what I think the problem is. For the purpose of our discussion, which at this point I take to be a discussion about the applicability of the argument from evolutionary biology to semantics in a naturalistic setting, I’m primarily interested in the applicability to externalist views on semantics. So, for what follows, I shall not presuppose any internalist assumptions unless otherwise stated. (I suppose this makes it a bit more interesting for you?)
With regard to the homomorphism, I may have been a bit sloppy in my formulation. Basically, what Gallistel and King capture is the dynamics of the situation, while it seems to me that the definition of veridicality you proposed is static, in the sense that it doesn’t tell us what happens if the world states change. Consider again the definition:
“A perceptual state w1 in an organism o is veridical iff
i) w1 is a (biologically reliable) causal effect of a set of world states [X] &
ii) the obtaining of w1 in o makes organismically advantageous behavior more likely or organismically detrimental behavior more unlikely compared to other ws [this is my elaboration of the "behavioral constraints" clause] &
iii) the differences between members of the set [X] are negligible with regards to o's evolutionary fitness [this is my elaboration of the "interests or requirements" clause].”
i) constitutes the mapping. ii) allows us to compare at least two perceptual states w with regard to a collection of world states [X]. What happens if there is a transition from one world state to another in a way that violates iii)? There must be some change in perception if the perception of the new world state is veridical. After all, it is a natural thought that if I perceive an object in motion, then there is an object moving. How else would you model the dynamics of the situation other than with a structure-preserving relation?
Prakash et al. allow for radically different world states to correspond to the same mental state. This entails that there cannot be a structure-preserving relation in the first place. So, in that sense I would agree that you don’t need the condition, but that doesn’t mean you avoid their argument, just the question of dynamics.
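The point that a many-to-one perceptual mapping rules out structure preservation can be shown with a toy sketch. This is my own construction, not Prakash et al.'s formal model; the function names and the "resource" scenario are invented for illustration. If fitness only requires the organism to encode "enough" versus "too little" of a resource, distinct world states collapse into one percept, and no relation on the percepts can mirror the world-side ordering.

```python
# Toy model: world states are amounts of some resource; perception
# encodes only what matters for fitness ("enough" vs. "too little").
# All names are invented for illustration.

world_states = list(range(10))

def perceive(w):
    """Fitness-tuned perception: collapses many world states into two icons."""
    return "enough" if w >= 3 else "too little"

# Distinct world states the organism cannot tell apart:
collapsed = sorted(w for w in world_states if perceive(w) == "enough")
print(collapsed)  # [3, 4, 5, 6, 7, 8, 9]

# World-side structure (4 < 9) has no image on the perceptual side:
# the percepts are identical, so no relation defined on percepts can
# mirror the ordering of the originals. The mapping is not injective,
# hence not structure-preserving with respect to that ordering.
assert perceive(4) == perceive(9)
```

Since the mapping is not injective, there is no candidate relation on the perceptual side that could satisfy the homomorphism condition for the world-side ordering, which is the sense in which the argument targets condition ii) directly.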
The consideration of the dynamics of the situation is also important for Chomsky's test. It seems to me that the test is best understood in dynamical terms. If we only look at a static situation, the problem becomes hard to see (it might not be impossible, similar to the case of veridicality, where the argument applies to the static definition, but it is certainly much harder). Just to be certain, Chomsky's test works without any commitment to a particular theory. All that it relies on are the judgments of individuals. I do, however, begin to think that the test itself needs some serious rephrasing. For now, I focus on the argument from evolution against externalism.
So, to give you an idea of what I have in mind, consider the following toy argument. Since we already talked about content, let us look at the familiar broad/narrow content distinction. Just to start, consider what I call the naive externalist view: Narrow content is some collection of mental states which in turn are restricted causally by some collection of world states. In this sense, narrow content is included in broad content, such that the semantic evaluation depends on the state of the world.
Before I continue, let me address a further point. It is indeed unfortunate that Prakash and Hoffman are so vague in their informal exposition of truth, but in their defense, they are primarily interested in perception, and they are precise in their formal expositions. (As a side note, I don’t recall ever encountering a discussion of the notion of ‘truth’ in a naturalistic setting. I think I read somewhere that there are discussions about the logical structure of the LOT. Unfortunately, I’m not too familiar with that debate; if you know any literature on the subject, I would be grateful if you could point me to it.) To continue, let us see what Prakash and Hoffman could say to the naive externalist.
There are two possible routes that I can see. One is to point to the fact that any causal influence on narrow content is mediated by perceptions. Thus, if the perceptions are no good, what makes the naive externalist think that mental states built on perceptual states are any better? The other is to modify Prakash’s argument. All that I need are two models, one for mental states and one for world states. Since nothing in their argument depends on what kind of mental states we are talking about, one can make the argument that an organism that can tell the truth about the world is likely to be on the losing side of evolution.
Naturally, this is a simplistic exposition, and I certainly do not think that anyone who believes in externalism would be convinced. It is my hope, however, that it suffices to see the possible strategies (as far as I can tell) that Prakash et al. can use against externalist views on semantics.
As an interesting factoid, the SEP article on Externalism About Mental Content cites a poll taken on PhilPapers in 2009 which suggests that “these days, only a thin majority (51.1%) of analytic philosophers “accept” or “lean toward” externalism (20% endorsed internalism while 29% gave one of the "other" responses).” - Who knew? Not me.
Best,
Sven
Dear Sven,
thanks for your further reply.
I must admit I am a bit skeptical regarding the very idea of homomorphism. I guess ideally the idea is that the world is structured, and our mind incorporates models which are models of the world because they are structured like the world.
I simply think it is probably naive to think that mental representation is generally of this sort. I guess it works for orientation, or perhaps for one's field of vision, or things like that. So, the first question would be: If Prakash et al.'s claim extends to this picture, how does fitness beat veridicality?
Now, when I think of mental representation, I do not just think of orientation and the like. As a philosopher, I think of propositions. And I think doing this is well justified, because theories of truth and semantics are firmer in the area of propositions than in the area of, say, models. Models approximate the state of the world, they do not necessarily have to truthfully represent it. Whereas propositions, when appearing in claims or statements, are true or false. Hence, when we talk about veridicality, I think of propositions. But it might just be that veridicality can be thought of as some Bayesian property of internal models. This matters because depending on what we think of here, the way semantics are assigned will likely differ. Internal models can be assigned meanings depending on recent (evolutionary) history and some notion of structural match (and then the question is if such an idea of "structure" is neurally plausible - I will go into this further below); whereas, in the case of propositions, we are talking about the semantics of concepts, which are typically externalist and socially conditioned (perhaps the 48.9% of philosophers you mention weren't specifically queried about propositions - and mid-20th century analytic philosophy is not so fashionable right now).
That is: No matter what Prakash et al. prove - if their proof is valid, then semantic externalism should just follow suit and assign truth-values in such a way that the "fit" representations are also the most likely to be true. I see no other way.
Further, "homomorphism" can mean different things, depending on whether we think of good old propositional mental representation, or of implementation in the brain. Must the structure of concepts correspond to the world? If so, what does that even mean? We can think of the early Wittgenstein's theory, on which the relations between (linguistic) concepts (do or must) mirror the structure of the world. For anyone who believes that mental representations are akin to linguistic concepts, this might be the theory of cognitive representational homomorphism. But we can also think about the way our neural representations must be "structured" in order to provide the information necessary for organismic control and behaviour. To this end, must the structure of neural networks, or of the electrical charges and patterns in them, correspond to the world? And, again, what does that mean?
Let's take a look at a classical proposition, such as "the cat is on the mat". It expresses a real-world structure, namely the object cat being on another object mat. The expression does not have that same relation: the term "the cat" is not on the term "the mat". Rather, the two are concatenated by the term "is on" which refers to the relation being on. In this sense, at the very least, concepts do not mirror the structure of the world - they merely represent that structure. Importantly, in order to understand its truth conditions, we must know that the sentence structure does not (need to) mirror the real-world structure.
However, you might also interject that there must be a (perhaps holistic) network between concepts such as "object[cat]", "relation[is on]" or "object[mat]". That structure can be represented as something like a "mind map", and then we have an actual structure to talk about - and that actual structure might occasionally correspond to the structures in the world, but it will likely also often correspond to idiosyncratic concept-learning-effects.
Now, if the instantiation of mental representations can in any way be ascribed propositionally, then we can start to ask whether the neural implementation of the concepts which we use to build propositions, and the neural implementation of the conceptual concatenations, have any structure which is anything like the real world. Here, we must address questions of the nature of neural networks - what structure must or do they have to implement such informational properties? We must also address the question whether concepts are actually implemented in a kind of "LoT" or "just" in a connectionist way. And so on and so forth.
I cannot claim to have a fixed, ultimate theory of everything regarding neural implementation of truth-capable representations. But I can tell you that the work I've done, and the way the mind boggles when considering all of the above in detail, suggests that "structure", both in concepts and in the brain, is not immediately like the kind of structure we think of as being "in the world". Rather, we assign that structure in a roundabout, pragmatic way. That is, seeing the structure of the world in the world's representations just means mapping the alleged representations to the world in such a way that it makes the most sense.
I think the only way we can directly make sense of "structure" in mental representation is by looking for informational content in neural structures. Everything else is just a manner of speaking. And here, two urgent caveats present themselves. First: most people do not straightforwardly mean neural structure when they speak of the structure of representations; they mean the roundabout thing. And second: We know very little about the representational structure of neural networks. Currently, science is typically content with Bayesian representations - modeling neurons in such a way that they usually and vaguely do what we want them to do, typically without any profound understanding of why they do this.
So much for my rant on homomorphism for now. I will try to address some of your further points later on.
Best,
Joachim
Sven,
I'm visiting this thread late in your discussion. After stipulating what will surely be points of agreement, let me ask you a question.
I assume you would agree that there are an indefinite number of syntactical constructions by which the same semantic content can be expressed. I also assume you would agree that the converse is not true. Finally, I assume you would agree that semantic content can be expressed in non-syntactical ways (via gestures, for one).
How do you see the above stipulations, which I believe are not controversial, applying to the original question you posed in this thread, "Is Semantics in Fact Syntax?"
Dear Jean-Marie Merle,
Thank you very much for your interesting answer. If I understand you correctly, you are discussing the semantic features of an expression, often called the logical form. In turn, these features can be given an interpretation in some suitable referential or model-theoretic semantics (for instance along the lines laid out in Angelika Kratzer's and Irene Heim's textbook Semantics in Generative Grammar).
The so-called "syntax/semantics interface" has puzzled me for quite some time, and I would like to ask you for your thoughts on an issue I shall briefly try to explain.
The idea that one can apply formal semantics to the analysis of truth conditions for linguistic expressions is heavily influenced by the work of Richard Montague and David Lewis. I’m particularly interested in David Lewis who, according to Barbara Partee, is not only a very influential analytic philosopher, but also rather influential among linguists.[1]
To make a long story short, in Languages and Language Lewis proposes a way in which abstract semantic systems connect to a language spoken in some speech community. The language is sustained by a mutual interest in communication; in other words, language is a matter of convention. As Chomsky has pointed out in Rules and Representations, Lewis's account fails to explain how such a language could be unbounded and how the speakers of such a language can understand new expressions without trouble. Hence, according to Chomsky, Lewis's account is "deeply flawed" (of course, Lewis says something similar about Chomsky's account).[2,3]
Now, Lewis suggests separating the questions about the abstract semantic systems from the sociological and psychological facts (and probably the biological facts as well). The situation is similar to the issue Joachim brought up: the relation between propositions as abstract objects and mental states (spelled out in terms of the informational content of neuronal structures). Chomsky cares about the latter; on his view, the recursive procedure and the interfaces are physically realized procedures in the mind-brain of an organism. This is the syntax side of the syntax/semantics interface. The semantics, on the other hand, is abstract, i.e. no one seems to think that semantics represents physically realized procedures (according to Partee). Now, it is certainly true, as Lewis points out in General Semantics, that one can discuss abstract systems in complete abstraction from the facts of the world, just as one can do mathematics in complete abstraction from physics. But just as in physics, the mathematics is only interesting insofar as it represents physical processes. It seems, then, that there is a considerable gap in the explanation of why formal semantics has such robust results (a fact that Chomsky accepts). [4]
Any thoughts on this?
Best,
Sven Beecken
Dear Joachim,
Thank you very much for your rich and interesting comment. Let me first address the notion ‘homomorphism’. You wrote:
“I must admit I am a bit skeptical regarding the very idea of homomorphism. I guess ideally the idea is that the world is structured, and our mind incorporates models which are models of the world because they are structured like the world.”
This would be the dream, but I share your assessment that:
“We know very little about the representational structure of neural networks. Currently, science is typically content with Bayesian representations - modeling neurons in such a way that they usually and vaguely do what we want them to do, typically without any profound understanding of why they do this.”
Before I go on, let me say that I did not mean to offend you by choosing the label 'naive externalist'. What I had in mind was something along the lines of Kit Fine's "Naive Metaphysics": a very simple idea that, if spelled out right, can become extremely powerful. I shall return to this.
With regard to the very notion of a homomorphism: I do share some of your misgivings, in particular about the very notion of 'structure' in an empirical context. Furthermore, I think you are right to point out that the relation between propositions and mental representations is highly problematic. On the other hand, I think some of your worries can be overcome by using abstract but well-defined concepts.
Gallistel and King use the term 'homomorphism' in a rigorously defined way. For instance, the term can be defined in the context of algebra as a structure-preserving map that holds between two algebras of the same kind. In turn, the concept of an algebra is extremely versatile. Basically, all you need is some set of objects (numbers, states, propositions, etc.) and some suitably chosen system of rules for manipulating the symbols.
On this abstract conception, we can simply choose a suitable ontology. Gallistel and King choose an ontology that consists of symbols, i.e. physically realized objects in the brain of an organism. They are trying to answer the question: what are the rules that describe how these objects are manipulated, and at what level are they implemented (neuronal networks, synapses, or perhaps the molecular level)?
On the side of the world, we can choose whatever algebra captures whatever physical aspects of the world we are interested in. If there is a homomorphism, we know that there is a set of rules describing how each algebra works, and we know in which way the rules are related.
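To make the abstract definition concrete, here is a minimal sketch (my own toy illustration, not an example from Gallistel and King) of a homomorphism in this algebraic sense: a map h between two algebras, here the integers under addition and the parities under addition mod 2, such that mapping after combining equals combining after mapping.

```python
# A toy homomorphism: a structure-preserving map h between two algebras
# of the same kind, satisfying h(x * y) = h(x) o h(y).
# The algebras here (integers under addition, parities under addition
# mod 2) are illustrative stand-ins, not a model of any neural process.

def h(n: int) -> int:
    """Map an integer to its parity (0 = even, 1 = odd)."""
    return n % 2

def op_source(a: int, b: int) -> int:
    """Operation in the source algebra: integer addition."""
    return a + b

def op_target(a: int, b: int) -> int:
    """Operation in the target algebra: addition mod 2."""
    return (a + b) % 2

# The homomorphism condition: mapping the combined elements gives the
# same result as combining the mapped elements.
for a in range(-5, 6):
    for b in range(-5, 6):
        assert h(op_source(a, b)) == op_target(h(a), h(b))
print("h is a homomorphism from (Z, +) to (Z/2, +)")
```

The point is only that "structure-preserving" has a precise meaning here: the parity algebra forgets almost everything about the integers, yet the additive structure survives the mapping.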
On this abstract level, we can deal with a wide variety of concepts, provided that they are all related by the same underlying mathematical structure. In practice, this seems to work for the study of organisms. Furthermore, the very fact that the concept of an algebra is so general allows for very different applications that may look radically different on the face of it.
For instance, the development of truthmaker semantics, a semantics that is intended to overcome certain limitations of modal notions, is closely related to Boolean algebras, which in turn appear to have interesting applications in quantum mechanics.
To be sure, I do not claim that there is a relation. All I want to illustrate is how radically different the applications of the abstract notions can be. Furthermore, it shouldn't be too hard to give a suitable connectionist model in these terms.
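To show the same abstract notion reappearing in a different guise, here is another minimal sketch (for illustration only; nothing in it is specific to truthmaker semantics or quantum mechanics). The powerset of a two-element set forms a Boolean algebra under union, intersection, and complement, and the evaluation map "does the set contain 1?" is a Boolean homomorphism onto the two-element algebra {False, True}.

```python
# The powerset of {1, 2} as a Boolean algebra, and an evaluation map h
# onto the two-element Boolean algebra {False, True}.
from itertools import combinations

UNIVERSE = frozenset({1, 2})

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def h(x: frozenset) -> bool:
    """Evaluation map: truth of 'the set contains 1'."""
    return 1 in x

# Check that h preserves the Boolean operations.
for a in powerset(UNIVERSE):
    for b in powerset(UNIVERSE):
        assert h(a | b) == (h(a) or h(b))    # preserves join (union)
        assert h(a & b) == (h(a) and h(b))   # preserves meet (intersection)
    assert h(UNIVERSE - a) == (not h(a))     # preserves complement
print("h is a Boolean homomorphism onto {False, True}")
```

Again, the underlying mathematical structure is the same as in the parity case; only the choice of algebras differs.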
Since this notion of a representation is so abstract, it doesn't commit us to much beforehand. What is problematic is the commitment that is carried by a particular choice. Consider, for instance, the relation between mental states and propositions.
You are, of course, right that the very notion of truth is deeply related to propositions. Propositions are usually taken to be abstract objects, and abstract objects are in turn taken by most to be causally inefficacious. This makes the whole business a bit puzzling, to say the least. (See also my reply to Jean-Marie Merle.)
All I wanted to illustrate is that by choosing a highly abstract framework, one can think about certain situations without being drawn into what I take to be problems of metaphysics (i.e. questions about the structure of the world, whatever one thinks that means). As in the case of Prakash and Hoffman, my feeling is that one ought to be clear about the distinction between empirical theories and questions about the status of these theories, as far as that is possible.
With these admittedly somewhat abstract remarks on the background in place, let me address some of your concrete points. In particular, I will work my way up to an attempt at an answer to your question:
“If Prakash et al.'s claim extends to [semantics], how does fitness beat veridicality?”
First, a quick point. You wrote:
“[In] the case of propositions, we are talking about the semantics of concepts, which are typically externalist and socially conditioned (perhaps the 48.9% of philosophers you mention weren't specifically queried about propositions - and mid-20th century analytic philosophy is not so fashionable right now).”
I agree with you that the dominant view on semantics is externalist in nature, at least in the areas of philosophy I'm familiar with (where the mid-20th-century analytic philosophers are very much alive and kicking; for instance, one can think of truthmaking as the modal approach on steroids).
What I’m a bit concerned about is your following claim:
“No matter what Prakash et al. prove - if their proof is valid, then semantic externalism should just follow suit and assign truth-values in such a way that the "fit" representations are also the most likely to be true. I see no other way.”
I’m not entirely certain that I understand you here. It sounds like some sort of pragmatist approach to truth. If so, this would suggest that you are willing to follow Chomsky's claim that semantics is syntax, i.e. that whatever happens in the brain is related to the external world via actions. Since this goes directly against much of the mainstream view on truth, I think I'm misunderstanding you. Let me elaborate a bit on my problem; perhaps you can tell me where I get things wrong.
In a nutshell, my problem is this: if Prakash et al. were to succeed, then one would assign truth values independently of the state of the world. This is exactly the opposite of what I take externalism to be.
Before I continue, let me briefly address the following point:
“Further, "homomorphism" can mean different things, depending on whether we think of good old propositional mental representation, or implementation in the brain. Must the structure of concepts correspond to the world? If so, what does that even mean?”
It is my hope that the discussion of the abstract concept helps to see how one can rephrase these issues in an abstract setting. If I phrase it in these abstract terms, then your claim that “"structure", both in concepts and in the brain, is not immediately like the kind of structure we think of as being "in the world". Rather, we assign that structure in a roundabout, pragmatic way. That is, seeing the structure of the world in the world's representations just means mapping the alleged representations to the world in such a way that it makes the most sense” seems to have an unfortunate consequence.
Consider again the basic idea behind truthmaking, the correspondence theory of truth, expressed by the following biconditional: the proposition that p is true if and only if the fact that p obtains.
If you spell out the left-hand side in terms of “informational content in neural structures” and the right-hand side in terms of physically identifiable properties of the world, then we have the situation that Chomsky explicitly, and Prakash et al. implicitly, challenge. If we assume the right-hand side to be the world as science tells us it is (the “real” world, if you like), then what they both say is that we should not expect the informational content of neuronal structures to be related to the physical world in such a way that the informational content changes in accordance with the world in any predictable way. So, unless there is a reason why fitness should lead to truth (which is exactly the assumption Prakash et al. challenge), truth seems to boil down to true-for-us.
This seems consistent with your willingness to say that we simply assign an interpretation on the world, but it is deeply against the very ideas behind externalism, as I understand them.
Since it is very likely that I simply misunderstand you, I will stop here.
Best,
Sven