Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations, or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages, so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages, or rather in a kind of internal ‘language’ (code)? In either case, is it possible to model the processing of natural language information using formal languages, or is such modelling useless, so that we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions therefore concern the following, possibly triangular, relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language; (2) for some logicians, natural languages are, in fact, “as formal languages”; (3) for some neurologists, there should exist a “code” in the human brain, but we do not yet know what its nature is.
Please fill in the questionnaire. There are only 3 questions. It takes 30 seconds.
https://drive.google.com/file/d/1bS0-YsCl_PnJSnI72mC7jVgZYv-_264v/view?usp=sharing
Dear Berislav,
I liked your triple [actor, sentence, communication type]. I would only add that in communication there is more than one [actor]: sender = [information source] and receiver = [information target].
It may happen that a natural language expression is recognised as valid in the same way as a formal one, but this is far from being the case in general. For the time being, unfortunately, no formalism with such expressive power exists. The reason is that neither the structure nor the mechanisms (underlying functions) of natural languages have been well understood. Today, it happens more often that logical statements can be more easily translated into natural language expressions than vice versa.
Obviously, in order to artificially create such a powerful device (one having the expressivity of natural language), we will need to take many cognitive and psychological aspects into consideration. I believe that this task, though very complex, is nevertheless feasible.
Hi, Sherif Yisrael Mikhail,
I totally agree with your conclusion about the need for further investigation in Linguistics and Neuroscience and, of course, I am also aware that our knowledge about automata (and their languages) is not yet ready to deal with natural language expressions, which are built out of heterogeneous elements and need very advanced distributed networks. However, the history of both machine translation (MT) and natural language processing (NLP) is full of discoveries in computer language science. For example, the logic programming language Prolog was invented in the 1970s by Alain Colmerauer and Robert Kowalski while they were trying to implement some context-free grammar rules. There is much more to be done in the theory of computer languages, too. I am not comfortable with the opinion of one of the co-authors of a Japanese handbook of Cognitive Science who accused linguistic formal theories of being the reason why the efforts in MT failed. Indeed, at the time of the Japanese MT projects (1980s), only the generative grammarians tried to describe natural languages using, in fact, results achieved in automata theory and programming languages. It turned out that research in MT entered a sort of vicious circle. Therefore, it is clear that computer scientists expected too much from "formal linguistics"... ignoring all the typological knowledge about languages.
If we want to make progress we need to cooperate more closely.
Hi Andre,
Hm... Well, I don't think that natural language works in the same way as logic or formal languages. In the analytic regime (listening or reading), the meaning of the coded message is decoded only in the context of the precise situation, personal semantic space, moment... So meaning itself is a variable. The same holds in the generative regime: word-forms and grammatical rules are matched to some "internal language" units, perhaps a thought, idea, mental unit... but once pronounced or written (the mechanism to do this is highly automatic at the brain-processing level), the produced language code serves others as triggers only, as a good wish to entail the same internal representation in their heads... Too many variables, right? The trials go on...
@Mikhail
In your response to Velina, you wrote "to get them closer to real languages". Could you explain what you have in mind by "closer"? Is it something like Montague Grammar? It's really an important problem.
Preserving truth would not be problematic if formal languages could integrate meta-language. Pragmatics could become tractable if formal languages integrated meta-information. Again, we need much more data-mining effort in order to understand, for instance, such a foundational problem as predication (unlike in Classical First-Order Logic (FOL), where the term 'predicate' is in fact a "1st-order proposition enabling quantification of arguments"; there is neither subject nor predicate in it). I mean, we need to clarify the distinction between agency (information) and subjecthood (meta-information) when building a sentence.
Automated communication is still machine-oriented. Thank you for the links.
In the triad you mention, as of today, each relationship has Yes-and-No answers. We need to have 'good' answers to all the basic questions before we try to build architectures for processing complex data structures. Don't we?
@Velina
Yes, obviously, but what are your feelings about the three relationships? There are 8 possible configurations of answers. Which one would you choose?
Dear André,
My general idea was that natural language is a tool for communication which could be modelled more correctly by mathematical means if one applies not a deterministic approach but probabilistic models.
My feelings... are that language is biologically predetermined by the available inborn brain machinery which creates meaning on the basis of perception and action; that language is a communication tool for meaning and respects the inborn rules for semantic representation of the world; that the so-called internal language operates on these representations even without natural spoken language; that logic is very beautiful but has very little to do with all this. That is what I think.
In communication, a sentence is being used in a certain type of communication, and therefore the identity of the language user together with the sentence being used within the communication type is the elementary unit of analysis. So, [actor, sentence, communication type] is "the atom" of logical pragmatics. I do not think that the question whether the sentence being used is expressed in a natural (phonetic) script or a formal script, i.e., concept-script, makes much difference.
The attitude that "language is a communication tool for meaning and respects the inborn rules for semantic representation of the world" commits the "declarative fallacy" or "representational restriction" of reducing language to the collection of sentences in the indicative mood. Language is also used for the coordination of actions.
The relationship between [LS] language structure, [CN] cognition nature and [FM] formal method can be delineated as follows: natural language in use [LS] is the programming language of cognition and action [CN], and this relation can be explicated by a formal method [FM] such as "dynamic logic".
Berislav, hi,
I think we are speaking about different things.
Could you specify how language is used for the coordination of actions? After or before the meaning is generated, based on the language-message content, inside the receiver's head? Or... are you speaking about body language? Excuse my confusion.
Dear Velina,
I respect your choice. However, it is good to know that today very often deterministic knowledge once optimised (with data mining techniques) is equivalent to the probabilistic one. In mathematics, many roads lead to the same results.
As for the cognitive nature of language, I think that we agree roughly, but there are so many particular questions I would be eager to understand.
To Sherif Yisrael Mikhail, about Machine Translation
To the best of my knowledge, Google's translation algorithm, for example, is doing nothing besides a mechanistic mapping of expressions between two languages (there is no deep data-structure representation). The resulting translation is to a certain extent very useful, I agree. But such an algorithm is completely useless for building an interface for communication. So, the success in translation is due to the tremendous power of the computation (combination) capacities of today's computers rather than to knowledge of language complexity.
To Velina (if I may): Imperatives are used to coordinate actions. The "generation of meaning" is a hypothetical process that not every theory must presuppose.
To Andre (if I may): Of course the receiver's role must be taken into account. The reason why this role has been omitted is the fact that it belongs to the "effect-part" in the basic formula of logical pragmatics: within the communication type C, normally if the sender S emits message P, then the effect E occurs on the side of receiver R. In short, if C, then [S:"P"]ER, where the formula [Cause]Effect is to be understood as a formula of dynamic logic. What is the scope of logical research in this context? Logic usually studies the syntax and semantics of a language in which a message P is formulated. Logical pragmatics studies language in use.
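The basic formula above ("if C, then [S:"P"]ER") can be given a minimal data-structure reading. The following Python sketch is only an illustration under my own assumptions: the names `Utterance`, `apply_effect` and `believes` are invented here for the example and do not belong to any established formalism of dynamic logic.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Utterance:
    """The 'atom' of logical pragmatics: [actor, sentence, communication type],
    extended with the receiver as discussed in this thread."""
    communication_type: str  # C
    sender: str              # S
    message: str             # P
    receiver: str            # R

def apply_effect(u: Utterance, effect: Callable[[Utterance], str]) -> str:
    """Model the [Cause]Effect pattern: the utterance (cause) yields an
    effect on the side of the receiver."""
    return effect(u)

# Hypothetical effect function for an assertive utterance.
def believes(u: Utterance) -> str:
    return f"{u.receiver} now entertains the content '{u.message}'"

u = Utterance("egalitarian", "S", "it is raining", "R")
print(apply_effect(u, believes))  # R now entertains the content 'it is raining'
```

The only point of the sketch is structural: the effect E is a function of the whole tuple [C, S, P, R], not of the message P alone.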
Regarding the claim "Obviously, in order to create artificially such a powerful (having the expressivity of natural language) device we will need to take many cognitive and psychological aspects into consideration": it would be useful to make the distinction between the further development of the Leibnizian concept-script and the extension of logical research to pragmatics. In the latter case, not only psychological states but also the normative or social dimension must be taken into account. Significant contributions have been made to the development of logical pragmatics: the illocutionary logic of J. Searle and D. Vanderveken, the dynamic logic of J. van Benthem, the normative pragmatics of R. Brandom.
I was convinced that today psychology is not limited to individuals but concerns also societies.
On the other hand, adding pragmatics (to syntax and semantics) was a tremendous step forward. And as far as I know, the authors of this idea were logicians rather than linguists.
André,
Answering your question as a whole would require unraveling half the secrets of the universe, so I will try only to chip away at little pieces of it.
You wrote: "Today, it happens more often that logical statements can be more easily translated into natural language expressions than vice versa."
A force that drove me from Philosophy into Linguistics was my constant uneasiness with the sanctioned "translation" of natural language statements into logical propositions. There seems to be a fundamental inadequacy in mapping into logic. Foremost, everything is considered deductive, but that only represents a small portion of our thoughts, and squeezing every statement into a quantified form seems inadequate at best.
For instance, "All dogs have four legs" has explicit quantification (and linguistically fits into something I like to call quantified aspect). It is obviously false since we can find a case of a dog that is a counter-example running around on three legs. On the other hand, the statement "Dogs have four legs" has no outward quantification (and linguistically fits into the opposite of quantified aspect: what I like to call characteristic aspect). It is a true statement simply because one case--or a few or several--does not cancel the fact that dogs characteristically have four legs. Logicians often try to conflate the two very different statements into the same quantified logical form or else ignore the difference, and that is a basic inadequacy of standard natural-language-into-logic translation (correspondence/mapping).
Progress needs to be made in this area before a serious attempt can be made to model the entire process.
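The contrast between the two readings can be made concrete with a toy Python sketch. This is only an illustration: the 0.9 threshold for the "characteristic" reading is an arbitrary assumption of mine, not a proposal for a formal semantics of generics.

```python
# Toy contrast: classical universal quantification is falsified by a single
# counter-example, while a "characteristic" (generic) reading tolerates
# exceptions.
def universally_quantified(individuals, predicate):
    """'All dogs have four legs': false given one counter-example."""
    return all(predicate(x) for x in individuals)

def characteristic(individuals, predicate, threshold=0.9):
    """'Dogs have four legs': true if the property is typical.
    The 0.9 threshold is an arbitrary stand-in for 'characteristically'."""
    xs = list(individuals)
    return sum(1 for x in xs if predicate(x)) / len(xs) >= threshold

leg_counts = [4, 4, 4, 4, 4, 4, 4, 4, 4, 3]  # one three-legged dog
has_four = lambda n: n == 4
print(universally_quantified(leg_counts, has_four))  # False
print(characteristic(leg_counts, has_four))          # True
```

The same data thus falsifies the quantified statement while verifying the characteristic one, which is exactly why conflating the two logical forms is inadequate.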
In order to make further progress in the direction of 'integrated cognition', we need this time to step downwards (as opposed to the step which led from Semantics to Pragmatics), including those parts of the huge domain of Knowledge which would make it possible to provide the grounding function for communication. Let us provisionally call this layer Systemics. Thus, our model of cognition would become a 3-layered architecture as follows: Systemics > Semantics > Pragmatics. In this way, we could integrate the results of investigations in the very rapidly developing domain of Data Mining.
But then, what about Syntax? I presume that Syntax is of a quite different sort than the three layers sketched above. Moreover, functional linguists would easily agree that we should integrate "paradigmatic relationships". Consequently, we could replace Syntax by Tactics, with the three following (mostly combinatorial) domains: Paratactics > Syntactics > Metatactics, with obvious correspondences to the above 3-layered architecture.
Of course, for such integration, we would undoubtedly need a distributed multi-network representation rather than tree-like (either "surface" or "deep") structures.
Hi, Glenn,
Thanks for your contribution. I am conscious of the fact that we are discussing very complex problems here, but I do not think that we need to analyse half of the Universe. It is, however, true that my question concerns a very great complexity.
The Systemics I posted a message about (before reading yours) covers a relatively small part of the knowledge human brains contain or have expertise in. It is perhaps the smallest network of all in the "3-layered architecture"; I mean: Systemics > Semantics > Pragmatics.
The linguistic problem you described belongs to a long list of similar problems which need careful insight from the point of view of logic starting with the revision of their axiomatic systems.
You analysed
(1) “All dogs have four legs” as having an (explicit) ‘quantified aspect’ and
(2) “Dogs have four legs” as having a (not outward) ‘characteristic aspect’.
Very briefly, I propose the following terms for your interpretation of these examples:
The semantic motivation (of the pragmatic status "given") in (1) has the feature 'generic' (informally defined as 'absolutely universal'), and in (2) it has the feature 'general' (informally defined as 'relatively universal'), i.e. holding in (1) all and (2) nearly all spatio-temporally located situations.
Linguists and logicians should cooperate more closely, indeed. Even more, logicians (incl. philosophers and computer scientists) should cooperate with brain neurologists (incl. psychologists etc.) and linguists. In such a setting, for instance, my more specific question would be as follows:
Is it reasonable to expect the creation of an enhanced symbolic language of logic which would provide an expressive power strong enough to simulate the neural code which, though still only hypothetical, is today more and more plausible?
Dear André,
I do agree with you: "today very often deterministic knowledge once optimized (with data mining techniques) is equivalent to the probabilistic one". But data-mining techniques and the machine-learning approach are "feeling" the tendencies of probabilistic behaviour in the practice of language, as they rely on data from huge corpora... So, data-driven methods and the existence within them of a "data-feeling" layer, even as modern as hidden Markov models or deep belief approaches, are... statistical modelling. The data-trained engines are exclusively language-dependent and do not capture the general cognitive layer that I am speaking about and that you agree with.
I am not aware of approaches that explain analytically, in an explicit way, the uncertainty aspect in language communication, which is related to the each-moment creation of meaning... One should take Shannon (communication theory) as a basis and develop it with regard to entropy reduced by means of semantic units and rules. This approach could give a language-independent picture, providing an idea of the insides of the cognitive basis of the language faculty. For the moment this has not been done; I guess this is because we still don't know how to include some basic universals underlying language semantics in the play. So, figuratively speaking, maybe Chomsky's general approach has to be "married" with Shannon, Saussure and Barsalou at the same time?
A lot of work to be done. Who starts? I will follow.
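The Shannon-based proposal above can be sketched in a few lines of Python. This is a minimal illustration under invented assumptions: the two distributions over candidate meanings are toy data, and the "semantic cue" that reduces entropy is simply stipulated rather than computed.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented toy distribution over four candidate meanings of an utterance.
prior = {"greeting": 0.25, "request": 0.25, "warning": 0.25, "joke": 0.25}

# A semantic/contextual cue rules some meanings out and reweights the rest,
# reducing the receiver's uncertainty about the intended meaning.
posterior = {"warning": 0.8, "request": 0.2}

print(round(entropy(prior.values()), 3))      # 2.0
print(round(entropy(posterior.values()), 3))  # 0.722
```

The drop from 2.0 to about 0.722 bits is one way to quantify "entropy reduced by means of semantic units and rules"; the hard, open part is of course where the posterior comes from.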
Please fill the questionnaire. There are 3 questions only. It takes 30 seconds.
OK, I will do it!
http://goo.gl/forms/Bsj1wHZXkJ
I'm persuaded that no such thing as "mentalese" exists, since any good speaker of a language may attest for him/herself that, with enough practice, you will "think" directly in a foreign language.
[And I find it a little difficult to believe that our entire cognitive and brain structure may reassess itself (not so fast, anyway).]
At the same time (even if I'm a logician), I agree with neuroscientists in their search for some primitive code, but I think it's something related to the ability to extrapolate new symbols and significance (and generalisation) without having to wait for evolution to properly adapt a related instinct.
Formal languages and composition rules (or significance rules in natural language) are, on the other hand and from my point of view, emergent phenomena that state (in our intersubjective knowledge) general features of the real world. That's the reason why we can, even with some difficulty, connect formal languages to natural ones.
To Berislav: Pragmatics concerns a different SORT of information than semantics
see attached file
To Velina,
--- André, how are you?
Yes, thanks, no problem. I live in Paris.
André,
As far as the scope of the project, my wife says to divide everything I say by 10. ;-)
If I understand your ploy correctly, you have taken what I stipulated as linguistic terms--quantified aspect and characteristic aspect--and mapped them into logical terms: absolutely universal and relatively universal. (Correct me if I'm wrong.) I know Peterson and Peterson & Carnes (c1970) developed squares of opposition for intermediate levels of quantification, such as "almost all," and "more than half," etc., to preserve such categories as parts of formalized deductive logic. Is it in your mind that absolutely universal and relatively universal are captured under deductive logic, or does that division push us toward separating deductive from inductive logic as a model for natural language? Does relatively universal mean something like "almost absolutely universal"?
Glenn, you made me laugh a lot... the division by 10... Here are some deep insights into the human comprehension of proportions, Sumerian and Babylonian: https://en.wikipedia.org/wiki/Sexagesimal They calculated the square root of two with very good precision using this... (they were solving quadratic equations)
Speaking about language, it has been shown convincingly that the speakers of the famous Piraha language are very well oriented in proportions even though they have no words for numerals other than one and two. So I will not join the Wittgensteinian point, somewhat supported here, that "my language is my world"...
Berislav, hi, yes you may...
"To Velina (if I may): Imperatives are used to coordinate actions. The "generation of meaning" is a hypothetical process that not every theory must presuppose."
Yes. I presuppose it. Very seriously. Convinced!
To Velina:
There are at least two possible attitudes. Either one accepts working within the existing framework, or one seeks innovative solutions.
"explain analytically the uncertainty aspect in language communication which is related to the each-moment-creation of meaning, in an explicit way..."
It is very challenging, indeed. But in order to progress, since you must decide what data structures will be available to you, answering my question(s) might be helpful, don't you think? (-;)
To Berislav (the function of imperatives):
Roman Jakobson, a Russian functionalist linguist, defined six communication functions (referential, aesthetic/poetic, emotive, conative, phatic, metalingual). It is obviously possible to add more such 'functions' (e.g. magic etc.), but R. Jakobson pointed to the 'referential (denotative, cognitive) function' as the dominant one, because it appears in the most frequent message transmissions. Obviously, the determination of the dominant character of any function depends on the viewpoint. For example, for some of Jakobson's epigones, from the creativity point of view (generation of meaning aimed at communicating with others), the most dominant is not the 'referential' function but the 'aesthetic/poetic' one. From my point of view, however, it is extremely interesting to reflect on how all these functions are related to information. Consequently, one should quite naturally point to such 'functions' as informative, para-informative, meta-informative, pseudo-informative, dis-informative etc. - cf. the Polish cyberneticians Henryk Greniewski (1968) and Marian Mazur (1970), the Polish linguist Bożenna Bojar (1972) and Hélène & André Wlodarczyk (2006, 2008, 2013). And then, also quite naturally, informativity should be seen as the most characteristic function of language, with remarkable similarity to the 'referential (denotative, cognitive) function'.
The reason I mention R. Jakobson's theory is this: linguistic 'imperatives' and 'vocatives' play CONATIVE roles. Therefore, it is inadequate to consider that language pragmatics can be reduced to the NORMATIVE function, since this function is not the most frequent one. Moreover, imperative sentences - like all the others - are 'vehicles' of many sorts of information, too. They are not exclusive, either.
Don't you think so?
https://en.wikipedia.org/wiki/Jakobson's_functions_of_language
Article Agents, roles and other things we talk about : Associative S...
Book Meta-informative Centering in Utterances - Between Semantics...
In the "[cause]effect perspective" it is not necessary to assume that the effect of an utterance (within a certain communication type) ought to belong to a single category of intentional states. The effects are complex. For example, by uttering an imperative (*) "! you (=receiver) see to it that P" in an "egalitarian communication", the sender creates (i) an obligation for the receiver either to see to it that P or to let the sender know that s/he (=receiver) will not see to it that P, and (ii) an obligation for the sender her/him-self not to prevent the receiver from seeing to it that P. Since the act is performed by the use of language, and language is a logical structure, there is an infinite number of linguistic commitments resulting from this fact. For example, after the utterance (*) the sender is forbidden to assert that it is impossible for the receiver to see to it that P (that P&Q; that P&Q&R; ad infinitum). In short, one cannot utilize a singular sentence, but must use the language as a whole (since it is a structural entity).
It is probably the case that the "generation of meaning" is somehow involved in the communication process, but this concept must be made explicit.
There is the "message transmission" in communication for sure, but "being a message" presupposes "the existence of language".
There are many typologies of language functions. If we agree with the late Wittgenstein, there is an infinite number of language games; some die out, new ones are born. If we agree with this, then there is an infinite number of language functions, one for each language game it makes possible.
To Berislav:
In your last paragraph, you wrote: "If we agree with this, then there is an infinite number of language functions, each for the language game it makes possible."
As for me, I do not agree. I do not think that "game theory" is a good metaphor for language communication. Signs are also objects (they can be classified). Therefore, language 'uses' are tokens of language 'usages', which are TYPES. The example of the difference between 'sounds' (TOKENS) and 'phonemes' (TYPES) suggests that there is no reason why it should not be likewise with information (meaning).
Obviously, besides utterances (which express the visible part of the iceberg), there are the texts they are parts of. My informatistic view of language (a neologism from 'informatism', coined recently by Paweł Stacewicz) does not prevent me from studying USAGE.
By the way, usage is not the exclusive concern of pragmatics. Every linguistic 'unit' can be typed. If not, how could humans learn languages? Let me quote R. Jakobson again, because he distinguished between the diachrony of language evolution and the dynamics of language change within a human being's lifecycle. An important analytical view, isn't it?
There is no obligation in communication. The hearer may think about quite different things while the speaker expects him to listen to his speech. Freedom! For this reason, when the speaker expects to transmit information with a "given" or "new" meta-informative status, it is not necessarily understood this way by the hearer. Both are processors (acting agents)...
I think a (separate) language of thought is not possible because it would need its own semantics (like Wittgenstein's 'private language'), but meanings can only come into being (or rather stay so) by conventions. And conventions need a social system and social situations by which they are built. So the expressions of an internal language either have no meaning (as there are no conventions) or they have the meaning of expressions of natural language(s). In the latter case, internal language is about the same as natural language - surely in a shortened form, like Vygotsky's 'inner language'.
However, subjective, intersubjective and objective truth validities are possible. Whenever we say "language", we tend to consider that it must have its own (deep) semantics. It might turn out that natural 'language' is just an interface which has an 'interface semantics' only. Hence, for example, many languages have a Gender category which does not really match the concept of Sex.
Convention changes within the separate (cognition) code would be made easier because they could be done without great changes to the interface language. In neuroscience, this phenomenon is known as 'brain plasticity', I think. This would prevent natural language from being prone to continuous change, and it would explain the famous speed of changes in meaning without excessive changes of form.
If I see something and decide that it is a quert, I will remember the next day that it is a quert. But if I try to name a thousand things I see, and a thousand thoughts I have, in my own inner language, I will forget most of the names soon if I cannot communicate them with other people. So without the help of (social) conventions it will be impossible to keep an internal language alive.
And without semantics there is no language.
It is not convention which poses a problem. I did not suggest that there is no semantics in a language, even if it is just an interface. But I can prove that we understand in our internal code (whatever its nature) much more than we can express in language. It is the task of linguistics to analyse the semantic function between the natural language and the internal code. I presume that language has an extremely fragmentary semantics as compared to what we seem to understand. From my own experience: I have never had thousands of objects to remember in one day, so... But I can speak a few languages. In all those languages, I always feel the same person even if conventions change. Languages help me to think, because without language abilities I would not have accumulated "thousands of things" in my life (but I did!). However, I am convinced that at least 2/3 of what I think is not in any natural language (how many times do I need to search for an expression that I do not find immediately in any language when I talk, although I know very well what I mean!). Obviously, I would also like to have the results of neurological analyses, but we will have to wait a little more for this. It has been announced that thanks to the newest fMRI technology, this question will soon be answered. Well...
In brief, there is no social life without conventions. Language is probably half rule-based half idiom-based. Both are conventional.
Dear André,
do you remember some of the 2/3 of your thoughts which you thought in internal language? How do you do that?
Can you rethink them and discuss them internally? How do you do that?
Are among those thoughts some which cannot come into mind like a picture (because they are in some way more abstract)? How do you remember them, rethink them, and discuss them internally?
Dear Klaus,
Yes. Sure. I do remember many things the names of which I forgot.
Let me quote an example. I can speak six foreign languages and understand texts in at least eight. It often happens that I need to go back to some documents I have read in the past. In order to recall their authors, I got used to starting by trying to find out the language of those documents. It is really a helpful clue for retrieving some contextual information about what I am looking for. Unfortunately, it is not always possible. Conclusion: I don't remember the language in which I read about a problem, but I know the problem.
Obviously, my own experience might turn out to be an exception. However, I guess psychologists have studied such cases and perhaps they can confirm this.
Dear Glenn,
>> Does relatively universal mean something like "almost absolutely universal"?
Yes. Seen from a very general point of view, scaling quantities results in names which can be used as qualities. So, "relatively universal" is a scale parameter.
I do not think that the deduction/induction distinction has anything to do with it here.
How might your answers to this thread's question contribute to the elucidation of the more general question below?
Are the structures of natural languages complex or complicated?
Recall that complexity concerns systems while complicatedness goes beyond systems. Complexity can be analysed while complicatedness cannot.
About complexity and complicatedness, please see the following TED presentation:
https://www.ted.com/talks/eric_berlow_how_complexity_leads_to_simplicity?language=en