What were the main weaknesses of generative semantics adherents' claim that "a grammar starts with a description of the meaning of the sentence and then generates the syntactic structure through the introduction of syntactic rules and lexical rules"?
A good question. I think the best book to read about this is The Linguistics Wars by Randy Allen Harris (1993). He gives an enormous amount of detail about the arguments for and against generative semantics (GS) and comes to the conclusion on p. 241 that GS promised too much and failed to deliver: it claimed not just to handle semantics and syntax, but also pragmatics, fuzziness, logic, ... . As a result, representations were becoming more and more unwieldy and the main practitioners just seemed to abandon the ideas they had put forward, very often turning their back on generative grammar and founding cognitive grammar/linguistics, various functional approaches, etc. I don't think that their conception of syntax/semantics was ever really shown to be wrong.
What are your arguments for the claim that it went wrong? That an idea doesn't catch on in the history of science doesn't mean it was faulty. Independently of whether Generative Semantics was faulty for empirical or theoretical reasons, one factor that contributed to its death was that it didn't have Chomsky's sheer authority on its side. That's a question of the sociology of science, not one of the plausibility of Generative Semantics. By the way, there are people saying that Minimalism is Generative Semantics in disguise.
Interesting topic. In my opinion, GS's main problem was conceiving of meaning as encapsulated in the sense put forward by Fodor, this being a building block of the theory. Empirical data, especially data gathered in the last decade in neuroscience, simply do not fit an encapsulated theory of meaning; there are many works pointing to this. Of course more theoretical points could be made, but unlike GG, GS was strongly linked to empirical predictions about language processing. Not being a GS follower at all, my guess is that quite a few researchers started to be put off when they started to deal with non-theoretical problems. LSA and PDP might have strongly contributed to this, because of their power to explain and predict facts that can hardly be dealt with from a rule-based approach.
Perhaps to be truly relevant one needs an abstract structure (e.g. syntax, a number series) that is also intuitive and that leans more on nature than nurture.
https://www.ut.ee/SOSE/tartu/suveseminar_08/SemioticsUG_Deacon%5B1%5D.pdf
This is not a problem with Generative Semantics alone. There are so many theories of language, and each theory is generated because prior theories are considered insufficient. The basic problem, I think, is that somehow we are seeing the trees and failing to see the forest. Another is that English may not be a suitable language for studying linguistics; it is used only because it is an international language. We are not getting the right answers because we are not asking the right questions. We seem to know more and more about less and less. Forgive the comments of a novice Biologist-turned-Linguist.
Narayanan Bhattathiri
I still much like the Paul Postal version that I learned in 1964. It seemed very nicely to cut nature at its joints. Regarding what came after, I think that Narayanan Bhattathiri has a point.
It strikes me that a big problem for generative semantics (which assumed that syntax starts in semantics) is those areas where syntactic elements are required, but obviously for no semantic reason. (This goes against the assumption that all of syntax starts in the semantics.) One example is "do"-insertion in English; another example is provided by expletive pronouns that are not the subject of weather verbs ('ambient it'), as e.g. in the German construction "Es gibt hier X." (X is located here.): Ich glaube, dass *(es) hier Ratten gibt. (I believe that there are rats around here.) There is also some discussion of the non-semantic roots of syntactic phenomena in Gazdar, Klein, Pullum and Sag (1985:32).
The problem of purely syntactic elements also applies to other areas of linguistics, such as "Fugenmorpheme" (expletive interfixation) in compounds, so that you get Maus*e*falle (mouse trap) and not *Mausfalle.
Another big problem clearly was that Noam Chomsky didn't like it.
Tibor Kiss has a good point: syntax has peculiarities that do not necessarily make sense semantically. But it is still possible to reconsider James McCawley's idea that what syntax does is to 'generate' or build up semantic units. In a cognitive perspective, this does not mean that semantics is just what syntax 'generates'; semantics has to do with the conceptualizations that we want to express! So what syntax builds can be considered as semantic simulations of the conceptualizations that we want to express. Therefore, syntax has peculiarities that do not necessarily make sense; our conceptualizations do not depend on those peculiarities.
Syntax maybe has the job, in this context, of making semantics processable. Perhaps by flattening a deep binary structure into a shallow multi-branching one, by providing markers or signposts that enable left-to-right (or right-to-left) processing of chunks at a time instead of direct top-down processing of the whole, etc. This functionality is maybe developed as part of the process of developing, learning, and changing language, while the specific entities that accomplish it can be accidents of history, including dead idioms, passé locutions, or whatever. That is, by this conception syntax requires some words (and ... ?) that do not carry any direct semantic load.
Just a thought...
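To make that speculation a bit more concrete, here is a minimal sketch of the 'flattening' idea (purely my own illustration, not anyone's published proposal): a deeply nested binary structure is turned into a flat, left-to-right sequence of words that can be consumed a chunk at a time.

    # Toy illustration only: flatten a deeply nested binary structure
    # into a shallow left-to-right sequence of words.
    deep = ("Mary", ("said", ("that", ("John", "left"))))

    def flatten(node, out=None):
        if out is None:
            out = []
        if isinstance(node, tuple):
            left, right = node
            flatten(left, out)
            flatten(right, out)
        else:
            out.append(node)
        return out

    print(flatten(deep))  # ['Mary', 'said', 'that', 'John', 'left']

In this little example the complementizer "that" plays the signpost role mentioned above: it carries no content of its own, but marks where the next chunk begins.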
I wanted to add something different.
The shape and function of any structure that is made up of units is affected not only by the shape and function of these units but also by their arrangement. A different arrangement may result in a structure that doesn't function or that does the opposite (imagine what would happen if the blades of your fan revolved in the opposite direction!). A sentence is a structure whose function is to convey an intended meaning to a listener: the speaker in effect recites a 'list' of words from which, at some point during the 'listing', the 'listener' grasps the meaning. (By the way, is there any relation between list and listen?) The units of a sentence are words, which have definite functions. If the listing is not in order (i.e. if the syntax is not correct) or units with the wrong functions are used, the meaning (i.e. the semantics) of the sentence is altered, sometimes drastically. Now the units themselves consist of still smaller units called morphemes, with their own semantics; and many words contain morphemes arranged in a particular way (i.e. word syntax) to get their assigned meaning. The morphemes/words consist of still smaller units, letters, arranged in a particular way to get their semantics. Altering this will affect their meaning, but this is apparent in writing only and not in speech. This may be called Level 3 word syntax, Level 1 being that of words in the sentence and Level 2 being that of the syntax of morphemes, i.e. how morphemes are bound together (if no other term exists for this). This stratification applies to semantics too, i.e. sentence-, word- and morpheme-wise. Of course there is Level 0, i.e. the paragraph, the arrangement and expression of ideas, etc. This may not be the field of linguists.
Even in written language, in English, the order of letters may not be very important; the eye mostly glosses over the mal-positions if the first and last letters are okay: anhtprology, samentics, sytnax, etc. will mostly be read appropriately, and only editors, teachers and the like will be really upset. This is because in English the meaning of the individual letter is non-existent. But in Sanskrit, an Indo-European language, each letter (it is an abugida), called a varNa/akshara, has its own meaning, and the meaning of the letter may affect the meaning of the word/morpheme, i.e. there is a Level 4 semantics and syntax. Malayalam, which has even been called a creole of Sanskrit and a Dravidian language, also has this phenomenon. One can wonder how Sanskrit, an Indo-European language, got this when it is not there in other Indo-European languages. But it is possible that they too may have it; at least some word parts, called quasi-morphemes, exhibit a semblance of indicating the meaning of the word, examples being fl-, gl-, etc. This area has not been researched as to the whys and hows (see my slides uploaded on ResearchGate).
Since there are so many levels of semantics and syntax, there is no point in saying that one is more important than the other, or in the labels 'generative semantics' or 'generative syntax'.
Somehow, we are failing to grasp the basic principle on which languages were formed. We assume that they developed de novo, randomly, etc. I believe many languages had some principles based on which they were 'generated'. Unless we are able to understand this, Linguistics will remain something like Radiobiology, a science no one gives much credence to except Radiobiologists, not even radiotherapists. Most of the important observations of Radiobiology developed from clinical observations rather than from Radiobiology research per se.
These are the observations of a novice Radiobiologist-cum-Clinical-Oncologist turned Linguist. Forgive me if they appear wrong.
Narayanan
The debates of the time were interesting, too. Model-theoretic semantics (Montague Grammar), while sharing Generative Semantics' (GS) insistence that semantics, too, deserved serious linguistic attention, rejected the GS view that a form-based calculus of the generative-grammar sort was an appropriate framework for semantics. Instead, they argued, meaning was more insightfully characterized not by a calculus of forms, but rather by more abstract perspectives. So GS had noted that durative adverbials are often interpreted as saying something about the length of an activity (she jogged for 30 min.) but that in other combinations they are interpreted as saying something about the state resulting from an activity (she turned it off for 30 min.). They argued that this reflected a difference in underlying form (in which 'switch off' reduced to something like 'CAUSE to BE off', opening up room for a second point at which a durative might attach). But it was difficult to pin down all of what might be in the underlying forms, and the model theoreticians pointed out that one needed to accept that some consequences (of propositions containing these predicates) had to be accounted for in a non-structural way anyway -- using so-called 'meaning postulates'. The debate is surveyed in David Dowty's Word Meaning and Montague Grammar.
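To illustrate the two attachment sites (a schematic rendering of my own, not Dowty's or the GS practitioners' exact notation), the two readings of "she turned it off for 30 min." can be sketched as:

    FOR 30 MIN [ she CAUSE [ it BE off ] ]    (the whole turning-off event is said to last 30 minutes)
    she CAUSE [ FOR 30 MIN [ it BE off ] ]    (only the resulting off-state is said to last 30 minutes)

With 'switch off' decomposed into 'CAUSE to BE off', the durative can attach either above CAUSE or just to the embedded 'BE off', which is the extra slot the GS account exploited.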
Like some other generative theories of language, generative semantics presumes that language production works like a computer. I think that's the fundamental problem.
I don't think GS did that. But the problem is to come up with some human way of producing the kind of extreme systematic regularity that we find in the grammar of a language--and, for that matter, in parts of culture as well. And the system has to be distributed and shared across a given population.
My wife read the notes that one of my linguistics professors wrote on a grad school research paper and relayed to me that the professor said I sounded like a "degenerate masochist." I explained that the criticism was much deeper than that. The professor actually said I wrote like a "Generative Semanticist."
I had some second thoughts on this, so I'll try with a second answer.
I think that there are two reasons why GS was not successful.
1. Its proponents did not build a school, but decided, after initially opposing Aspects-style generative syntax, either to pursue syntax in isolation (Ross, McCawley) or to move out of syntax altogether (Lakoff). In the initial antagonism, success could only have meant "replacement of Aspects-style GG by GS".
2. More important was that GS tried to replace the already baroque formalism of ordered Aspects-style transformations with even more baroque transformations from an ill-understood semantic base to syntactic structures. Chomsky contrasted this with a programme of restriction, where transformations were reduced (eventually, in Lectures on Government and Binding, to a single one, Move α) and had to obey constraints (such as a prohibition on sideward movement). There were side effects on non-transformational components as well. Endocentricity did not play a role in either Aspects or GS; all kinds of base rules were allowed. With the X-bar schema in place, Chomsky restricted syntactic projections, and also made them uniform across categories. Eventually, Chomsky's model of syntax had a representational interpretation and was in any case much clearer than GS-style analyses.
So, Chomsky also ended up replacing the Aspects style of syntax, but provided a model that was more lucid than what GS proposed.
The gap between deep structure and surface structure is too big. Thus, too many transformational rules are needed, and most of them are ad hoc. The analysis is not convincing.
I agree with Mackenzie that the Harris book provides an excellent discussion of the difficulties associated with GS, arguably the most significant being Chomsky's forceful attacks on it and its advocates. Whether it "went wrong" is open to debate, I think. Perhaps it was, in a modified form, folded into cognitive grammar.
Nice observation by Tibor K., and original as far as I know, i.e. that there was an effort toward progress in the opposing camp. I got the impression that GS got stuck, too.
I see the affinity between GS and cognitive grammar that several have pointed to (in agreeing on the importance of semantics, and in some of the people involved), but from another perspective -- the grammar, the kind of theory aimed for -- they were a long way apart. GS was always wedded to discrete, rule-based systems, and cognitive grammar emphatically gives that up.
As Simon and Tibor have pointed out, the main reason why Generative Semantics failed was simply that it did not get Chomsky's stamp of approval. As so often before in history, the tremendous authority of a particular individual (or organization) prevented what might have been a superior theory (or technology) from succeeding.
As for why the GS conception of language is faulty, I think Klaus hits the nail on the head: the reason is that GS - like Generative Grammar - assumed an essentially computational model of language. Or, more generally, it attempted to construct a logic-based model of language. I guess the world was simply too enamoured of computers at the time for most people to think differently...
Today, instead of a computational view of language, what the world really needs, in my opinion, is a linguistically-informed view of computing. I believe that Cognitive Linguistics (and especially Cognitive Grammar) has a lot to offer in that respect.
Well, if you want an expedient answer, I would recommend Viv Evans' very recent book THE LANGUAGE MYTH: WHY LANGUAGE IS NOT AN INSTINCT.
I tend to like his work, but this time I was disappointed, as his rebuttal of generative linguistics felt to me like a sort of "vendetta". I mean, the literature is there, the arguments too, but not the attitude (it smacks of --childish?-- aggression, and, after all, semantics has little to do with generative linguistics, so why try to convince those already convinced while failing to make others think otherwise?)
A pity, since a much-needed work in the field (for a variety of audiences) smacks of aggression and over-simplification. Maybe in a second edition. Meanwhile, for a "harder" read (but certainly worth it!), please refer to Dirk Geeraerts' THEORIES OF LEXICAL SEMANTICS (a must for every linguist, whether a semanticist or not).
Best regards ([email protected])
I'm sorry, Clara, but I don't see how your post is relevant to the question. Moreover, I have serious problems in viewing the Evans book as legitimate linguistics.
There are more problems with the book than I can relate in this limited space, so I will mention just a few that I found striking. First, criticizing Chomsky is pretty easy to do, given that most of what he has written about language and syntax, such as the poverty-of-input hypothesis, is fairly widely recognized as wrong.
More problematic is that much of what Evans wrote is, for a linguist, embarrassingly simplistic and in many instances just plain wrong. When she notes, for example, that the idea of syntactic universals is refuted by the fact that all six possible word orders are attested, she has drawn an incorrect conclusion from an established fact, failing to point out that approximately 95% of the world's 5-6 thousand languages follow either SVO or SOV patterns. Those that do not, such as VOS languages, tend to be spoken in very isolated regions of the world by very small numbers of people, in some cases just a few hundred.
When Evans uses Everett's (2005) Amazonian research to challenge the notion of linguistic recursion, she fails to mention the rather important fact that Everett's claims have not held up well to scrutiny (e.g., Nevins et al., 2009). Nor does she consider that Chomsky, when writing about linguistic universals, might have been better served if he had taken a more general approach to what the term entails. For example, nearly all children develop language at about the same time, and they follow a similar pattern, with recognition of prosodic patterns emerging first, followed by nouns, then verbs, then motherese, with its rudimentary grammar. She also ignores how children born in a pidgin environment regularize the language (commonly along the lines of SOV/SVO) and how this phenomenon implies some type of universal at work. (I have suggested in some of my work that the universalization is related to neural architecture.) Evans may well be correct in criticizing Pinker's use of the term "instinct" to describe such phenomena, but she doesn't really offer an alternative.
In sum, the problems I see in the Evans book are numerous. It reads more like a piece of journalism than a serious linguistic text.
Vyvyan Evans is a man. And of course it reads like a piece of journalism, because it is meant to be a response to the equally journalistic and unserious popular accounts by Pinker, McWhorter and other latter-day Chomskyans.
Also, the claim that "Everett's research has not held up to scrutiny" is itself problematic, because the studies that claim to refute Everett's findings have themselves been criticized. The linguistics wars are still ongoing.
That's part of my point (maybe you couldn't see my latest post; sorry, I've been having trouble with my internet connection today). I do not think Evans' book stands up to the standard, but, given Pinker's success with the "general public", I do believe a sound response is long overdue.
A lot of us will agree that (some) linguists should try and "spread the word" about other (ANY OTHER) theories than those which end up adding yet "another little box at the end of the tree" whenever something does not fit. And, if at all possible, in "understandable" terms.
If there is one thing I do admire, no kidding, about TG, GB, PP, MP... advocates, it is how very well trained they are. I mean, they do know about language, even if in a way I do not get at all, and even if "the" model keeps on changing kinda radically every so often.
What I'd like to find is "readable" reading alternatives, so we could discuss real language ("incongruences et al.") in a way that helped all of us (not only linguists) understand what "this" is all about (to my mind, nothing to do with "modularity").
@James Williams
A factual correction. In Dravidian languages any word order (SOV, SVO, VOS, VSO, etc.) can be freely used, and they are spoken by millions. Moreover, Malayalam is considered by some to be a creole. I wonder which linguistic theory can explain these!
Narayanan
The existence of languages with relatively free word order is well known, Narayanan, but some linguists argue that they can all be said to have an unmarked basic word order, and that deviations from that word order tend to be used pragmatically to convey additional information.
Apart from that, I fail to see how the fact that languages with uncommon word orders are spoken by few people has any relevance to the fact that they contradict the assumption that word order is innate. If some languages don't have trait X, then by definition it is not innate and not universal. The fact that some traits are very common and others are not says very little about universality in the sense postulated by generativists. At best it suggests that certain syntactic properties are easier to learn or invent than others, which is not necessarily due to innate dispositions at all, but can simply be due to general semiotic and cognitive principles.
If you want to read a "proper linguistic" paper that demonstrates the futility of the idea of innate universals, I recommend Evans and Levinson's "The Myth of Language Universals".
In Malayalam (a Dravidian language spoken by more than 35 million people) I think there is basically no rule on word order; grammar books do not even speak about this. Effort is taken to teach the correct use of the case marker (which may be just a vowel). But in a way Magnus is right. The word order can be influenced by the question to which a particular sentence may be the answer. Word order in Malayalam may be different in answers to the two questions 'Who killed A?' and 'Whom did A kill?'.
This is only for information; I do not propose that it should influence the arguments in the present discussion; I really do not know.
Thanks, Cora and Magnus. I would certainly obtain and read the recommended books.
Narayanan
Word order can be more or less strict, and will sometimes depend on stylistics, because the underlying syntax only needs to imply lexical proximity (syntactically close words only need to appear in the same sentence!). Linearity obtains by rule-governed projection from a semantic tree; this is the interesting suggestion made by McCawley. It explains why linearity offers such plasticity. The next question is of course how to model the semantic tree, and this is where generative semantics 'ended' its short career. I think that its basic intuition is still valid; it is just extremely challenging.
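To make the projection idea tangible, here is a minimal toy sketch (my own illustration, not McCawley's formalism): one and the same little 'semantic tree' is linearized by a projection rule, and swapping the rule yields different surface orders without touching the semantics.

    # Toy illustration: one semantic representation, several rule-governed linearizations.
    semantic_tree = ("give", "Maria", "the book")   # PRED(agent, patient)

    def linearize(tree, order="SVO"):
        pred, agent, patient = tree
        slots = {"S": agent, "V": pred, "O": patient}
        return " ".join(slots[symbol] for symbol in order)

    print(linearize(semantic_tree, "SVO"))  # Maria give the book
    print(linearize(semantic_tree, "SOV"))  # Maria the book give
    print(linearize(semantic_tree, "VSO"))  # give Maria the book

The plasticity comes from the fact that only the projection rule changes; the underlying semantic structure stays constant.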
@Narayanan Bhattathiri
Hello. You mentioned Dravidian languages in your post. Given my elementary knowledge of syntax, if the word order of a language tends to be very flexible, then there is a large probability that that language is rich in grammatical markers of all sorts, such as case markers, etc. For example, Japanese and Korean allow free word order, or scrambling, but Chinese does not. The reason is that Chinese lacks the crucial case markers found in Japanese and Korean, so basically speaking, a scrambled Chinese sentence does not make much sense because you cannot figure out the who-does-what-to-whom kind of thing. So if all (or at least most) free word order languages exhibit such a tendency, then the tendency itself can be regarded as a universal principle.
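As a toy sketch of why case markers license scrambling (my own illustration; the hyphenated glossing and the tiny lexicon are just for demonstration, though -ga and -o are the actual Japanese nominative and accusative particles): the roles are read off the markers, so the parse does not change when the order does.

    # Toy illustration: with case markers, "who does what to whom" survives scrambling.
    CASE_MARKERS = {"ga": "subject", "o": "object"}

    def parse(words):
        roles = {}
        for word in words:
            stem, _, marker = word.rpartition("-")
            if marker in CASE_MARKERS:
                roles[CASE_MARKERS[marker]] = stem
            else:
                roles["verb"] = word
        return roles

    print(parse(["Taroo-ga", "hon-o", "yonda"]))  # {'subject': 'Taroo', 'object': 'hon', 'verb': 'yonda'}
    print(parse(["hon-o", "Taroo-ga", "yonda"]))  # same role assignments despite the scrambled order

Strip the markers, as in Chinese, and the same function would have to fall back on position to recover the roles.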
Thanks, Matthew. That corresponds exactly to what I am saying: the underlying syntax, which is indifferent to the way it is expressed, because it is a semantic 'order', as McCawley saw it, can be manifested both by 'hard' word order and by morphemes on words. The 'harder' the word order, the more the inventory of morphemes can be reduced. The underlying syntax is a semantic syntax, not a meaningless mechanical set of blind rules. This is why language can express thought at all.
Nothing went wrong; its crime was to come about at the wrong moment, when Chomsky and his disciples were adopting the view that all there was to semantics was interpretive. Most of the ideas put forward in generative semantics were later adopted by other, similar approaches, including conceptual semantics and lexical semantics.
Language can express thought because sentence and phrase syntax carry schematic semantic meaning which is mapped back and forth to and from the inherent structures of chunks of thought. This is in fact better than what construction grammar offers, with its 'form- meaning pairings'.
Keep in mind that semantics (if not metaphorically used) deals with the meaning of what is or can be spoken or written.
@ Klaus: "Colourless green ideas sleep furiously" (Chomsky) is a sentence that has poetic meaning and also an interestingly problematic relation to meaning in thought. We might call it the 'absurdity effect' – which stresses the dimension of semantics, spanning from expressed meaning to cognitive meaning.
"Colourless green ideas sleep furiously"
In my eyes the sentence is a good example of meaning which is impossible without language. Imagine a human being without language who creates the meaning of this sentence - impossible.
"Cognitive meaning" has no sense. Where is the something which bears the meaning? Especially the meaning ' Colourless green ideas sleep furiously"'?
The meaning is whatever the speaker uttering the sentence wants to give it. You need to ask them and then interpret it. Call it "cognitive" meaning or "whatever" meaning you like. There is no limit to meaning coinage, but we are restricted by the stock of words in the language. Meaning does not have a limit, words do.
@Klaus: Let me explain. If a sentence can be judged 'absurd' or more or less meaningful, there must be an instance that compares its language-borne meaning to meaning that is not language-borne. I called this 'cognitive meaning' in my comment, referring to cognitive semantics, a branch of cognitive science.
@ Per
O.k., but cognitive semantics, whether it is a branch of cognitive science or not, is in my eyes nonsense. It would be a system parallel to semantics. Why?
@ Karim
We are not restricted by the stock of words in the language, because the combinability of words is unlimited. And there are possibilities, like the oxymoron, to say "more than words can say". But even the oxymoron is dependent on words.
@Klaus: I did not know that you thought that cognitive semantics is nonsense. In this thread, we are trying to discuss generative semantics, which can be regarded as an early version of cognitive semantics. So why is it nonsense? If it is 'absurd', you can go back to my 'absurdity effect', namely that it expresses something that is contrary to your cognition (about semantics).
Per,
I THINK Klaus is trying to say that semantics is the study of meaning. We don't need another layer of theory to evaluate/parallel/map onto what semantics does. "Language-borne" meaning, then, IS meaning.
Karim,
I might reword your statement to say that THOUGHT has no limit. However, once the communicative effort is made there are limited ways to convey the meaning of that thought into language (UG) and any particular language (Grammar, in the broadest sense).
That might just put the problem off one step, or it might help draw a clear line where linguistics starts.
Interlingual translation is not only possible, but is an endeavor that is of course essential to world civilization. As all translators know, what is translated is sentences, and sequences of those. The guide of this process is the thought that allows us to hold the meaning of what we read and then reword it. There seem to be reasons to examine this 'holding' of meaning between source and target language, maybe in terms of a sort of Language of Thought, as Fodor proposed to call it (LoT). So if translation is a subject in linguistics, it may have to address the LoT issue. There may even be a connection between LoT and the models suggested by McCawley's GS.
@Klaus
The sentence "more than words can say" says exactly the contrary of what you mean. It is used for lack of words...