There are several illuminating papers by language acquisition researchers. For instance, there is a series of papers by the group around Daniel Freudenthal in Cognitive Science and in the journal Language Learning and Development. They model language acquisition as a process that starts from the edges of utterances and compare German, Dutch, English, and Spanish. Their results are much better than those of any UG-based work (as is discussed in these papers). The differences between the languages (and their acquisition stages) can be explained by the properties of the input, that is, in an input-based acquisition model.
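To give a flavour of what "learning from the edges of utterances" means, here is a toy sketch in Python (my own illustration, not Freudenthal et al.'s actual MOSAIC model): each pass over the input extends the longest already-known utterance-final chunk by one word, so knowledge grows from the right edge inwards.

def edge_learn(utterances, passes=3):
    # Toy edge-first learner: NOT the MOSAIC model, just the general idea.
    # Stored knowledge is a set of utterance-final word chunks.
    known = set()
    for _ in range(passes):
        for utt in utterances:
            words = tuple(utt.split())
            # length of the longest utterance-final chunk we already know
            k = max((n for n in range(1, len(words) + 1)
                     if words[-n:] in known), default=0)
            if k < len(words):
                known.add(words[-(k + 1):])  # grow the edge by one word
    return known

corpus = ["the dog chased the cat", "the cat sees the dog"]
print(sorted(edge_learn(corpus, passes=1)))  # only the final words
print(sorted(edge_learn(corpus, passes=3)))  # chunks of up to three words

After one pass only the utterance-final words are stored; repeated exposure yields longer and longer edge-anchored chunks. In a model of this general kind, cross-linguistic differences fall out of differences in the input.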
For me, this paper by Rens Bod (2009) was an eye-opener: http://staff.science.uva.nl/~rens/analogy.pdf
Working with data from child-directed speech (the CHILDES corpus), he shows that a very simple procedure (Data-Oriented Parsing) can learn English auxiliary inversion from the distributional information in very few examples. The procedure also gets the structure of auxiliary inversion right when the subject contains a relative clause. Auxiliary inversion is the classic Poverty of the Stimulus argument that Chomsky put forward in this form in the 1970s and that still plays a role in his most recent paper in Lingua (2013).
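The core of Bod's procedure is a preference for the derivation that uses the fewest (and hence largest) fragments of previously heard material. Here is a drastically simplified sketch of that idea (my own code; it operates on flat word strings rather than the tree fragments Bod actually uses):

def shortest_derivation(sentence, corpus):
    # Toy "fewest fragments" cover: NOT Bod's actual tree-based procedure.
    # Collect every contiguous word sequence attested in the corpus ...
    seen = set()
    for utt in corpus:
        ws = utt.split()
        seen.update(tuple(ws[i:j]) for i in range(len(ws))
                    for j in range(i + 1, len(ws) + 1))
    # ... and cover the new sentence with as few of them as possible.
    words = sentence.split()
    best = {0: []}  # prefix length -> cheapest fragment cover found so far
    for j in range(1, len(words) + 1):
        covers = [best[i] + [tuple(words[i:j])]
                  for i in range(j)
                  if i in best and tuple(words[i:j]) in seen]
        if covers:
            best[j] = min(covers, key=len)
    return best.get(len(words))

corpus = ["is the boy hungry", "the boy who is sleeping"]
print(shortest_derivation("is the boy who is sleeping hungry", corpus))
# -> [('is',), ('the', 'boy', 'who', 'is', 'sleeping'), ('hungry',)]

In this toy example the correctly inverted question is covered by three attested chunks, while incorrect inversions (e.g. fronting the auxiliary from the relative clause) can only be assembled from many more small fragments, so the shortest derivation favours the grammatical form without any innate constraint.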
There is also a target paper by Adele Goldberg in Cognitive Linguistics (http://dx.doi.org/10.1515/COGL.2009.005) in which she summarizes her 2006 book. It is interesting to read the replies and her answer to them.
My conclusion from all this is that mainstream generative linguistics has to change. One cannot motivate empty elements in a grammar of German by pointing to overt elements in Basque, or empty topic morphemes in a lot of languages on the basis of Japanese. Instead, one has to motivate grammars language-internally: a German grammar should be motivated by German utterances, and so on. It is then viable (and necessary) to capture cross-linguistic generalizations by comparing the grammars one came up with on the basis of the data of the respective languages. If there are several ways to describe a language, one should choose the one that is most compatible with the findings for other languages. This is a bottom-up approach and more compatible with what we now know about language acquisition.
I wrote about this here: https://www.researchgate.net/publication/258338696_The_CoreGram_Project_Theoretical_Linguistics_Theory_Development_and_Verification
The paper is a draft and comments are most welcome.
If you read German, you may also consult my Grammar Theory book, which has 40-50 pages on Universal Grammar, universals, language acquisition, recursion, and so on. It can be downloaded via the link at the end of this post.
If you do not read German, you can still extract pointers to the literature from the book and read the original sources.
As for recursion (the Pirahã issue) and the non-finiteness of languages: Chomsky assumes two basic operations, Internal Merge and External Merge. External Merge basically combines two linguistic objects, for instance "John" and "laughs". The combination rules are very abstract: they just say "combine X and Y". In Categorial Grammar this is written down like this:
X/Y * Y = X (an X looking for a Y combined with a Y results in X)
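In case a concrete rendering helps, here is a minimal Python sketch of these two application rules (my own toy code; categories are either atomic strings or (slash, result, argument) triples):

def combine(left, right):
    # Forward application:  X/Y + Y  => X
    # Backward application: Y + X\Y  => X
    # ("/", X, Y) encodes X/Y, ("\\", X, Y) encodes X\Y.
    if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
        return left[1]
    if isinstance(right, tuple) and right[0] == "\\" and right[2] == left:
        return right[1]
    return None  # the two categories do not combine

S_NP = ("\\", "s", "np")       # the category s\np, e.g. of "laughs"
print(combine("np", S_NP))     # "John" + "laughs" -> s
print(combine(S_NP, "np"))     # wrong order -> None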
If one believes that such abstract rules are part of our linguistic knowledge (as those working in Chomskyan linguistics do, and as most researchers working in Construction Grammar and Simpler Syntax do not), then such simple combinatorial rules would be something that is universal. Personally, I think that we need such rules; the reasons are described in the "Unifying Everything" paper linked at the end of this post.
However, this does not mean that the combinatorial rules are part of a UG in the Chomskyan sense. People like Jackendoff question whether the capability of combining two things into a larger object is language-specific. Jackendoff (2011, "What is the human language faculty? Two views") points out that we build complex structures in vision, in planning, and in music, so it is a domain-general capability. We clearly differ from rocks and kittens, but in a more general way: it is not just language that is special; rather, language profits from this general capability.
As for the infinitude of languages and recursion: you can have combinatory schemata like the one above without having recursion in your grammars. With lexical items like the following, you can build structures for simple sentences but not for ones with self-embedding:
Lexicon:
John: np
Mary: np
laughs: s\np
sings: s\np

Derivable sentences:
John laughs
Mary laughs
John sings
Mary sings
So, in such systems the lexicon decides whether there is recursion in a particular grammar or not.
This is discussed on page 304 of the Grammar Theory text book.
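To make this concrete, here is a toy generator (again my own illustration, reusing the combine function sketched above) that enumerates everything of category s that a lexicon licenses:

from itertools import product

def combine(left, right):
    # forward and backward application, as in the sketch further up
    if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
        return left[1]
    if isinstance(right, tuple) and right[0] == "\\" and right[2] == left:
        return right[1]
    return None

def sentences(lexicon, max_words):
    # All word strings of category s derivable with at most max_words words.
    items = {((w,), c) for w, c in lexicon}
    grown = True
    while grown:
        grown = False
        for (w1, c1), (w2, c2) in list(product(items, repeat=2)):
            c = combine(c1, c2)
            new = (w1 + w2, c)
            if c is not None and len(w1 + w2) <= max_words and new not in items:
                items.add(new)
                grown = True
    return sorted(" ".join(w) for w, c in items if c == "s")

lex = [("John", "np"), ("Mary", "np"),
       ("laughs", ("\\", "s", "np")), ("sings", ("\\", "s", "np"))]
print(sentences(lex, 10))  # exactly the four sentences, however high the bound

lex.append(("thinks", ("/", ("\\", "s", "np"), "s")))  # (s\np)/s
print(sentences(lex, 4))   # now also "John thinks Mary laughs" etc.

With the original four-item lexicon the procedure saturates after the four sentences; adding a single sentence-embedding verb makes the set of derivable sentences unbounded, so raising max_words keeps producing longer sentences. The combinatory schema never changed; only the lexicon did.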
Best wishes
Stefan
Article: The CoreGram Project: Theoretical Linguistics, Theory Development and Verification
Book: Grammatiktheorie
Article: Unifying Everything: Some Remarks on Simpler Syntax, Construction Grammar, Minimalism and HPSG
Hello Cesar, personally, I don't think so. However, a first problem, in my opinion, is already the definition of the notion of "universal grammar": in a purely linguistic sense ("à la Chomsky") or in a more general, semiotic or formal, sense ("à la Husserl", as a sort of mathesis universalis).
If your question is related to the problems posed by the peculiarities of Pirahã, this paper may shed some light on this issue: http://web.mit.edu/linguistics/people/faculty/pesetsky/Nevins_Pesetsky_Rodrigues_Piraha_Exceptionality_a_Reassessment.pdf
One more thing to add: there was a target paper by Geoffrey Pullum and Barbara Scholz on nativism: http://dx.doi.org/10.1515/tlir.19.1-2.9 (also available from Pullum's web page). The authors do not say that nativism is wrong, but they point out that after 50 years of research it has not been shown to be right. Pullum and Scholz are the first to lay out a full argument with all the logical premises that are needed to support the conclusion that there is innate linguistic knowledge.
For instance, it is often claimed that the necessary evidence is not available, or not sufficiently available, in the input. Pullum & Scholz show that such claims are often simply false (when the claim is that certain structures do not appear in the input at all), or that it is unclear how much repetition would be needed to render a certain construction learnable from input alone. Legate & Yang try to address these issues in their answer, but they fail, as Scholz & Pullum point out in their reply. It is fun to read these papers. It is complicated stuff, but the counterargument by Scholz & Pullum is really simple and demonstrates to every student and researcher that it is good to have logic classes in the curriculum.