Back in 1982, Japan's Ministry of International Trade and Industry began the Fifth Generation Computer Systems (FGCS) project. The idea was to find a new, non-von Neumann computer architecture; as part of it, the Sequential Inference Machine Programming Operating System (SIMPOS) was released. SIMPOS is programmed in Kernel Language 0 (KL0), a concurrent Prolog variant with object-oriented extensions. There was a similar project in the US; its results were various Lisp machine companies and, of course, Thinking Machines.
I'm interested in what happened to the Japanese "Prolog machine". Does anyone know something about that?
I remember this time very well - a book by Ed Feigenbaum brought it all to our attention. At the time Prolog (UK) and Lisp (US) were the exciting new tools, fuzzy sets suddenly appeared, neural networks were resurrected, and Expert Systems and IKBS were the new future. At ICL the new Knowledge Engineering department was created. The Alvey projects kicked off, looking at parallel systems (ALICE) and formal methods and proving (Oxford, Imperial College, Manchester, Stirling, Edinburgh and Glasgow were all selling their ideas to ICL). Languages such as Hope, Lispkit and ML were being used at ICL. It lasted 2-3 years, then the funding dried up and so it all faded away. The influence of Prolog led to other pattern-matching cum functional languages such as HOPE, which was to be the main language for the ALICE parallel system based on graph reduction or rewrite rules - the copying of recursive structures across a number of transputer-based processors. I don't know whether Parlog (parallel Prolog) came from this, but it too rose to prominence at that time. I would guess that the Japanese Prolog machines went the same way as ALICE. Listening to the major Japanese speaker at neural network conferences a couple of years later, his company's interest seemed to be on tomography and on what was known as "wet-ware", and there was a hint at cybernetics, which seems to be the direction that emerged in later years.
The Prolog camp diminished in the UK, with Lisp systems (e.g. Xerox LOOPS and Symbolics Flavors) becoming the trendy thing to use - but these all disappeared a decade later. Do you now think that much of this was sales-pitch and the inevitable hyperbole?
@John Sanders: Yeah… Good old AI!!! Unfortunately, "…these all disappeared a decade later…" Bloody IT! But I think (I'm almost sure of it) those Japanese fellows have something up their sleeve, and they are just waiting… for a better time (on the market, of course).
FGCS is generally not perceived to have been a success - their very early choice of parallel logic programming maybe had something to do with it .... or you could just say, like a lot of AI initiatives, it just didn't live up to the sales hype. That, and the hardware world moved away from specialist kit (like Lisp machines) towards RISC, which delivered a lot of MIPS for the dollar (or yen).
What happened with “The Fifth Generation Computer Systems project (FGCS)”?
Exactly the same thing that happened to the AI as a whole: almost NOTHING!!
The more interesting question is: WHY?
For a hint, see http://www.amazon.com/Quest-Artificial-Intelligence-Nils-Nilsson/product-reviews/0521122937/ref=cm_cr_pr_hist_3?ie=UTF8&filterBy=addThreeStar&showViewpoints=0
By the way, I forgot to mention the incredible promises made by some of the leading Japanese projects at that time, including RIKEN's Brain Science Institute: that in 10-15 years they would achieve (in software) human-level intelligence. At first, I couldn't believe my eyes, but then I realized that they had learned about making promises (with a vengeance) from their American AI counterparts. ;--)
They now talk about 3 "waves" of computing. The first wave is one computer, many people. The second is one person, one computer (the personal computer). The third wave, yet to come, will be one person, thousands of computers around them (invisible computing). Computers have hardware, software etc. Humans have one more entity, "subtleware", which cannot be embedded in present-day computers.
Lokanath M.P
Quote from Tom Demarco at that time - "The fifth generation will be as important as the fourth, whatever that was."
@William: Yeah… but the same will be said of the next generation… whatever the previous one was. Good old AI! But, as 10,000 Maniacs sing: "Those were the days when we walked on clouds..."
I remember reading a book that analysed the failure of FGCS. Can't remember the title - you might need to Google for it.
There was a similar European project, Esprit. That too failed to deliver on the high expectations.
There appears to be a consensus that the software aspects of the project overreached what the hardware of the day could comfortably deliver. Processors such as the Inmos Transputer and Intel iAPX 432 failed to provide the levels of functionality and performance at which the software would have been able to demonstrate viable advances. Some analysts have also suggested that the strictures of the von Neumann architecture might have been a factor.
Perhaps some forward-thinking research body could revisit the aims of the FGCS & Esprit projects using not only current multi-core CPUs, but also FPGAs and the current crop of GPUs. I suspect that the hardware advances of the 20+ years since might be able to deliver worthwhile results.
It was not about performance - although these initiatives (Fifth Generation and, to some extent, Alvey and Esprit) have often been re-interpreted as primarily performance-orientated - it was really about new paradigms. That is why Prolog and functional languages (and many others) became the focus. For example, connectionist machines were about association rather than deterministic throughput. It failed because of moribund philosophies which still haunt us to this day. Computers are not a natural analogy for real-time, environmentally orientated systems (i.e. us).
11th May, 2013
Louis Brassard
I am of this generation that was young and excited about the novelties of the computer and dreaming of limitless possibilities. Computers and robots never became smart, but the access to information and to each other that they have provided has surely made us smarter and opened limitless possibilities!!!
11th May, 2013
Lev Goldfarb
University of New Brunswick
Louis: "I am of this generation that was young and excited about the novelties of the computer and dreaming of limitless possibilities. "
There is nothing wrong with being "young and excited about the novelties of the computer and dreaming of limitless possibilities."
What has been and still is very wrong---and it is the reason for the past failures---is not to have enough respect for such a "holy" undertaking. One must have sufficient self-education and sufficient respect for the undertaking not to jump on the "trivial" bandwagons. There is no "cheap" way of getting to AI, i.e. we will not be able to do it in a scientifically familiar, "incremental", manner. We should take seriously only sufficiently "crazy" (radical) but promising ideas (a necessary but not sufficient criterion). However, had we followed this simple wisdom, we would have been much farther than we are now, simply because we would have had *very* few proposals to consider.
11th May, 2013
Louis Brassard
Lev,
I agree with you. I enthusiastically embarked on a trivial bandwagon, the one that was in fashion at the time of my graduate studies, and it took me quite a while to realize that the foundations on which I was standing were crumbling under my feet. Then, if you are serious, you jump off the train, examine what is wrong and try to find firmer foundations. I did that and changed my Ph.D. topic multiple times. It cost me a scientific academic career but I have no regrets.
12th May, 2013
Clive Spenser
LPA
So, I ask .... is Google smart? Does it make us smarter? And what about Watson, which recently beat the human experts in Jeopardy?
12th May, 2013
Louis Brassard
Using Google I can access very specific information within 1 or 2 hours; twenty years ago it would have taken me about two weeks in the university library, assisted by the paper delivery services of other universities. I can learn much more quickly because of the quick access. Learning more quickly, giving me more access to the information I seek, has enhanced my possibilities tremendously. I have noticed that the quality of search by the Google search engine has improved in the last few years. They are presently helping the search engine by building a gigantic semantic knowledge network. Google is no smarter than a doorknob - more sophisticated, more complex, but not smarter. I reserve the use of the words "smart" or "intelligent" for a process that is not completely automated. A mechanism, however complex and useful, is a mechanism.
When the tractor was invented we did not feel threatened because the tractor was stronger than us. Why react differently toward a Watson mechanism? If a human activity can be mechanized, then what is the point of not doing so and feeling threatened? The job that is rendered unnecessary should make us collectively richer, provided that we are politically organized in such a fashion as to distribute the benefits. Usually the benefits are privately claimed during the first phase of the introduction of a technology and gradually distributed later on under the pressure of economic competition. Apple's shareholders have benefited for a while from the smart phone, but everybody is copying it now and the share value will go down soon.
12th May, 2013
Lev Goldfarb
University of New Brunswick
Clive, what do you think?
Isn't it just the result of sheer hardware power?
Search engines don't understand the meaning of a *single* word in what they search, and if humans "played" Jeopardy the way Watson does, hardly anyone would be interested in watching such a show.
13th May, 2013
J.-C. Spender
Kozminski University
Hey Louis, I love that doorknob analogy. So we run into the difference between tools and 'real world' or human value.
Academics think of the advance of the discipline - what REALLY was learned? Your notion of 'smartness'.
But there are other milieu of human existence. You say 'not completely automated' as a way of keeping human beings and their condition within the discussion.
Yes, for sure, smartness should have some connection to betterment or what Catholic Social Thought theorists call 'human flourishing'.
Surely Google can help and so earn some smartness points. Look at the Bush (43) re-election, surely one of the most pressing questions for the field of political science. He was helped by the fact that something like 40% of the US populace bought the story that Al Qaeda was connected to the 9/11 events - even though a moment's work on Google would have disabused anyone prepared to be disabused. Thus Google's potential to make people smarter - and earn some smartness points - was overwhelmed by people's smartness - they knew Powell was right and that Al Qaeda was responsible.
Tools are not smart, by definition. But they can help make us smarter - so long as we allow it.
13th May, 2013
Joachim Pimiskern
Lev, Wolfram Alpha understands requests very well.
Would you please be so kind to show me a diagram
of sinus of x for x = minus two pi to plus two pi?
http://www.wolframalpha.com/input/?i=Would+you+please+be+so+kind+to+show+me+a+diagram+of+sinus+of+x+for+x+%3D+minus+two+pi+to+plus+two+pi%3F
Regards,
Joachim
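For concreteness, here is a minimal sketch (assuming Python with numpy and matplotlib, neither of which is part of the thread) of the plot that this natural-language request resolves to:

```python
# Plot sin(x) for x from minus two pi to plus two pi,
# i.e. the diagram the Wolfram Alpha query above asks for.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 400)  # the requested range
plt.plot(x, np.sin(x))
plt.title("sin(x) for x in [-2*pi, 2*pi]")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.show()
```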
13th May, 2013
Wilfried Musterle
Max-Born-Gymnasium
The question is: What do we mean by using the word 'understanding'?
Does an adequate response, as on Jeopardy or from Wolfram Alpha, suffice, or is more necessary? What is the meaning of 'more'?
Smart is a popular adjective to signal cleverness. Is 'smart' 'more'? In the human mind hundreds of associations occur on a sentence like Joachim's. Is this the meaning of 'more'?
Watson also generates more (!) than one result, sorted by statistical weights. Is this what we mean by 'more'? What structure is intended by using the fuzzy expression 'more'?
So many questions ...
13th May, 2013
Clive Spenser
LPA
I am a believer in the idea that it is not what you do but how you do it that determines whether or not something is 'smart'; i.e. is the knowledge represented explicitly internally? Unfortunately, the answer in complex systems is not binary; certainly some aspects of Watson are represented internally using (Prolog) rules, but there is also a lot of brute force.
Personally, as someone who spends a lot of time Googling for information, I love Google - it cuts out lots of time talking to intermediaries but it strikes me that it is incredibly dumb - but consistently so.
13th May, 2013
Joachim Pimiskern
Wilfried, is my water the same as your water?
Of course our minds are different. I'd like to define understanding
(here: of a word) as activation of the appropriate concepts.
There is also a measure for evaluating how well this is done.
Attached are two semantic maps. The left one is my notion
of water, the right one that of another person (might also be
in the mind of a computer). I expect that when I say "water" the
other person activates "swimming", "ocean", "drink" and so on.
How strongly other words (symbols, concepts) are affected
depends on the semantic distance of the word to the center.
The other person, upon hearing "water", will also activate
nodes in his semantic network. Every understanding of
a word is paired with a unique activation pattern, activations
of related words. How well these activation patterns match
among two persons is a measure for how well the communication
partners understand each other.
Regards,
Joachim
[Attachment: fWater.jpg]
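For concreteness, here is a minimal sketch of the matching measure Joachim describes. Two details are illustrative assumptions, since the post leaves them open: activation is taken to fall off as 1/(1 + semantic distance) from the center word, and the overlap of two activation patterns is scored with cosine similarity.

```python
# Sketch of "understanding as activation overlap": each person's notion of
# "water" is a map from related words to their semantic distance from the
# center; how well the activation patterns match scores mutual understanding.
import math

def activations(semantic_map):
    # Activation strength falls off with semantic distance (assumed 1/(1+d)).
    return {word: 1.0 / (1.0 + dist) for word, dist in semantic_map.items()}

def overlap(a, b):
    # Cosine similarity between two activation patterns (assumed measure).
    words = set(a) | set(b)
    dot = sum(a.get(w, 0.0) * b.get(w, 0.0) for w in words)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two notions of "water": word -> semantic distance from the center.
mine   = {"swimming": 1.0, "ocean": 1.0, "drink": 2.0, "rain": 3.0}
theirs = {"swimming": 1.0, "drink": 1.0, "fish": 2.0, "ice": 3.0}

print(overlap(activations(mine), activations(theirs)))  # 1.0 = perfect match
```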
13th May, 2013
Lev Goldfarb
University of New Brunswick
Wilfried Musterle: "The question is: What do we mean by using the word 'understanding'?"
There has been very, very much confusion in AI: useful programs are not (necessarily) "intelligent" programs. In fact, most programs are useful in one respect or another, but hardly any is "intelligent".
Which programs are intelligent? My short answer is this: those that rely on the classes (of objects/processes), i.e. classification, as the basis, since the "reality" is simply an evolving and interacting collection of classes.
However, the main difficulty with this short answer is that the conventional forms of data representation do not support the *concept of class*, i.e. the concept of "class representation", which is a generative form of class description. In other words, ironically, all our classification programs do "classification" without relying on the concept of class. But that is a separate issue to which other questions in RG are devoted.
Joachim,
If I look at the words that are in these two semantic nets and reflect on my own experience of the relations between these words, it strikes me that the types of relations are very diversified and bear no resemblance to a generic relation with a distance. Some relations are class/category relations (what Lev refers to), some other relations are tied to specific roles in specific types of situations/actions, and the same words may have other relations when considered under different types of life situations.
Joachim and Lev,
thank you for your clear answers. I agree, a semantic map or classification is an adequate translation for 'understanding'. A system that can autonomously create such a classification is actually smart.
The initial question is easy to answer. The 5th Generation project was loudly proclaimed and quietly buried. The approaches were never going to meet the demands of the announcement. However, there was a lot of money, so the birth celebration had to be noisy.
Brain 2 never delivered what was promised. Nothing more has been heard of the robot cat Robokoneko. This is a shame, but obviously not to be changed. With the wasted money one could have run much more meaningful research.
Louis: " Some relations are class/category relations (What Lev refer to) , some other relations are related to a specific roles in specific type of situation/action "
Give me, please, one example of those "other relations". I claim that all of them are of the class/category kind.
Wilfried: "You could run with the wasted money much more meaningful research."
This is what has been bothering me for a long time: Why is it that, at a time when our society needs real progress in the direction of AI, it has been so easy to fool it into funding non-starters? Well, as I have mentioned elsewhere, this is the great tragedy of our time: so much incompetence at the very time when we are really in great need of competence.
Lev,
I "swim" into the "sea". The relation between these two words is not a class relation.
Allocation of money is a political problem. Physics received a lot of money when the US wanted to dominate the nuclear arms race. The rapid rise of the economic power of Japan and its rapid technological advances in electronics and automobiles in the early 1980s triggered a panic among a few politicians in the US, and a technological race towards the creation of robots, deemed essential for the future production of goods, was considered necessary for the developed countries to compete against low-wage underdeveloped countries. This situation was exploited by the AI community in the US to get funding. You would only get funding if you pretended to deliver what was supposed to save us. There was no place in this funding frenzy for those trying to be realistic. Has the situation really changed?
Louis,
Did you mean "I swim in the sea"?
First, the issue is not the "relation", but the "meaning". And the meaning is this: "I swim" where "in the sea", where each belongs to the corresponding class. Actually, "swimming in the sea" is also a class.
Louis, do I understand you correctly? Attached is a semantic
network where the relationships bear a meaning.
Regards,
Joachim
Joachim, semantic nets---as well as many other so-called "representations"---are not really representations, because they do not deal directly with the objects/processes. An example: how does a semantic net represent a handwritten character, or some stone, or a face?
Lev, it depends on how detailed we want it. There is an infinite
number of ways to paint an A, but if only the information is
needed later that an A was recognized, a single neuron is
sufficient to represent it. That's only an example; in reality there
are probably groups of neurons that fire in concert, forming
a periodic signal, a strange attractor that stands for the symbol A.
On the other hand, if details are important, e.g. when we've
seen a codex of the Middle Ages and we want to remember
the calligraphic depiction of a character, the pattern storage
comes into play. We are able to store patterns. But this ability
is limited by the famous 7±2 rule. This rule says that we can't
store patterns comprising more than about 7±2 items
(symbols) at a glance (I have to mention that some see this
constant at only about 4).
The consequence is that images can't be represented
as pixels. Some people think the eye works like a
digital camera, but photographic memory is a myth.
Images are encoded with a kind of language, tuples
of symbols.
Regards,
Joachim
Joachim: "images can't be represented as pixels. Some people think the eye works like a digital camera, but photographic memory is a myth. Images are encoded with a kind of language, tuples of symbols."
I agree.
However, when you talk of "how detailed we want it" I disagree. The issue is not here, the issue is in the universal form of (structural) representation that we need. And the semantic net is not it.
Lev,
> The issue is not here, the issue is in the universal form
> of (structural) representation that we need. And the
> semantic net is not it.
the example net with named relationship edges
is a special type of semantic network. It is the common one.
Most widespread is the representation as triples,
like (rome, is-capital-of, italy).
In my opinion the brain does not only use triples.
It uses symbols and tuples of symbols. Such tuples
must fit into working memory, so the size of a
tuple must also be around 7±2.
A symbol can be assigned any meaning.
Especially a structural meaning. A symbol
in a tuple can express how the other symbols
are to be distributed within a structure.
For example, if we have three schemes for structure,
sch1, sch2, and sch3, our examples could be expressed
in the form of boring tuples:
Example A: (sch2, apple, orange, banana, strawberry, cucumber)
Example B: (sch3, peter, mike, sabine, tom, lisa)
Example C: (sch1, dog, mouse, cat)
There is no need for storing structured containers
for information when we can simply reference the
structure by a symbol. Tuples are sufficient, IMHO.
Regards,
Joachim
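To make the contrast concrete, here is a minimal sketch of both encodings: the widespread triple form and the flat, schema-tagged tuples described above. The three scheme interpretations (chain, star, flat group) are purely illustrative assumptions; the post deliberately leaves sch1, sch2 and sch3 open.

```python
# The widespread triple form:
triple = ("rome", "is-capital-of", "italy")

# Hypothetical structural schemes; the leading symbol of a tuple says how
# the remaining symbols are to be distributed within a structure.
def sch1(a, b, c):      # a chain: a -> b -> c
    return (a, (b, (c,)))

def sch2(head, *rest):  # a star: one center with leaves
    return (head, tuple(rest))

def sch3(*items):       # a flat, unordered group
    return frozenset(items)

SCHEMES = {"sch1": sch1, "sch2": sch2, "sch3": sch3}

def interpret(t):
    # Expand a "boring" flat tuple into the structure its first symbol names.
    scheme, *symbols = t
    return SCHEMES[scheme](*symbols)

print(interpret(("sch2", "apple", "orange", "banana", "strawberry", "cucumber")))
print(interpret(("sch3", "peter", "mike", "sabine", "tom", "lisa")))
print(interpret(("sch1", "dog", "mouse", "cat")))
```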
I am not an expert in semantic nets. They are obviously useful for some engineering applications such as Google search, otherwise they would not spend millions of dollars. But I am convinced that our comprehension of human language is not internally close to such a semantic net. I think that human language evolved and is still based on the structure of our nervous system, particularly our visual perception and our sensory-motor system. When you think, or hear, or read the word "human", there is, after an initial conversion process, a self-enactment of a visual schema corresponding to the hierarchical schema tree that is normally activated if you look at a human being; it is self-activated in the same fashion as in a dream. In my theory of vision, this hierarchical structure corresponds to the abstract graph of the projection of this structure into an image, and it corresponds to the optimal sequence for the structural creation of the surface of this body through a sequence of symmetry breakings. In other words, the internal implicit representation of the body that is enacted by the word "human" corresponds to the ontogenic evolution of the body surface. Comparison and analogy can then be done based on the actual object surface's topological history.
Johann & Joachim,
I will be brief, and I suggest for simplicity to focus on the *natural objects* only.
I believe that a convenient way to think about the *process* of representation is as a generalization of the conventional numeric processes of representation (via measurement devices). What I mean by this is that if you are suggesting a new form of representation, you should expect, at least in some not too distant future, to see that it can be implemented via some 'hardware'. I.e. this hardware, without any human participation, should be able to actualize this representation. If you apply this logic to your favorite representation, you should be able to see why it fails this basic (necessary but not sufficient) test.
The positivist presumption is that there is a reality, that it is seamless (without discontinuities), that it is constructed logically. All of which leads to the idea that we can understand it through the exercise of our rationality - i.e. that aspect or part of our mind that can function logically. It follows that those aspects of our mind that do not hew to such standards can be disregarded. Forget about non-rationality. The positivist ambition (somewhat like Icarus's) is to 'represent' reality as it is, not merely model it with the languages available to us. I see many of the previous posts as implying this ambition.
All of which is most curious. It is said that since Kant, philosophers have understood that the problematic is not reality, but the nature of the human mind. Representation is a code-word for understanding of a particular sort. Kant argued that the ambition of representing reality rests on a grave misunderstanding about human knowledge and what it means for us to 'know' anything. Kant's ideas were carried forward - not rebutted - by later philosophers, culminating, perhaps, in Wittgenstein. We, not the world, are what is interesting.
The bottom line here is that our logical faculties are certainly of interest, but that they are not put to their highest and best use by trying to represent reality. There is not, and never will be, any direct unproblematic link between anything we know and the reality so many presume. Reality, by definition, is beyond our grasp and so pursuing it is a hopeless tragic endeavor that is generally co-opted to some less savory projects.
In which case what should our thinking energies best be directed towards? Since Kant - especially in the case of the American pragmatists - the agenda is not representing reality but coming to grips with the human condition and the possibilities of its betterment - What is Man, and What is the Good (or Moral) Life?
In which case the semantic web project is not about representing a reality that is presumed to exist independent of us and our thinking - rather it is an attempt to develop a language that grasps the human condition, what is actually real for us - our existence. What we seek is a language that is 'truthful' - transparent in the sense that it is not occluded by interests such as political or religious interests. Put another way, the semantic web is certainly a modernist project about truth, but what truth? A non-human truth - of Mother Nature or some higher being - or a human truth?
If the first, of course there is some possibility of building a machine (maybe not based on brass - as in Kelvin's day - or silicon, as today, but more likely to be quantum mechanics or biologically based) but a machine that is amenable and sympathetic to Nature and 'reality'. The machine will be speaking to and about its own. But given the human condition as something constructed otherwise, in ways that we have yet to fathom, we would not be able to hear and comprehend what our Truth machine has to say.
In back of this kind of argument is my assumption, which many will reject, that the human condition is somehow discontinuous with Mother Nature and her logicality. And that is the point. If Mother Nature speaks logic then, since Kant, we know we do not. As Simon said, we are boundedly rational. Mother Nature has capabilities and resources enough to be logical all the time - as far as we know. On the other hand the slightest consideration of human affairs leads us to realize we are not like Mother Nature in this respect. We are limited, Mother Nature is not. Her projects are not ours and vice versa.
Dear JC,
"The positivist presumption is that there is a reality,"
Positivists such as Poincaré, on the contrary, followed Kant in saying that reality in itself cannot be known in principle and that all that can be known are relational models of specific aspects of reality. Reality is a word pointing towards a common ground from which all phenomena emerge but where science will never enter.
Let's forget about positivism.
The situation, as our ETS formalism suggests, is as follows. There is indeed a reality out there, but we are simply not equipped to see it fully: there was no absolute evolutionary pressure for that. In other words, we have been equipped to deal with *some* classes in our environment, for which purpose we have enough 'equipment'. But the important point is that the structural basis of this 'equipment' is the same one that permeates the entire Nature, i.e. all classes in it. So the underlying basis of all intelligent beings, including all organisms, is the same. What differs is the 'environment' which one can effectively cope with (in view of the 'hardware' limitations).
Johann, note that, for example, generative grammars do not satisfy the above criterion.
This discussion is very interesting for me since, coming from a different discipline, I am clearly utterly unqualified to take part. But all the same ...
If we focus on the language of analysis, it must stand on some presuppositions about the relationship between statements and their consequences. Representation of reality might be one such relationship. OK, we know the shortcomings here. But 'forget positivism'? Alas, if only it were that easy to make statements whose import did not turn implicitly on the pervasive positivist ideas we have all imbibed.
I think we can see several bases for valuable statements. (a) is reality and the notion of representation. OK (a) has been widely attacked even though it remains pretty much in place for various reasons that I have written about. Go with the flow and/or what you (and your students) know.
But if we dismiss the direct causality epistemology we have some options. (b) probabilistic. Things can be explained in terms of general determined behaviors. Much social science hews to this method. But recently we have (c) evolutionary theories which are kind of intriguing in adopting causality but without specifying the cause - we never get to define Nature and so become able to analyze what succeeds. And we also have (d) chaos/emergence theory, where the cause is ultimately tied up with the identification of the 'strange attractor' for this or that particular field.
Each of these offers an intriguing language - each of which has its own syntactic/semantic structure. But there is no pulling them all together precisely because they are axiomatically distinct.
Into this stew I would throw (e) human agency - treating the imagination as the cause on which explanation is based. This, I fear, would not attract many defenders in this thread! But all the same ... we are human beings, living in uncertainty and so agentic by necessity, so maybe there's some connection?
Any representation is limited to a domain, and the agent needs to constantly anticipate based on these representations, so as to detect divergences and evaluate the adequacy of the representation model. JC, your statement "imagination as the cause on which explanation is based" is particularly difficult to decode.
Aha, when you say 'agent' I guess you are referring to an actor - not to an entity in the sociological structure-agency discussion acting under uncertainty. The difference being, I suspect, that your agent can/should be fully rational - whereas the sociological view is that the entity is being 'agentic' precisely because its 'rationality' has failed - for a bunch of reasons which were somewhat examined in Simon's 'bounded rationality'.
As a result I use the notion of imagination being the cause as a rhetorical device, in lieu of what some have used to describe agency: 'the cause that has no cause', i.e. one that cannot be determined or explained - in just the same way that neither you nor I can be fully explained. Partially, of course, but about the BIG decisions, very little. About what might or might not determine what we imagine - perhaps there is an explanation somewhere, sometime, coming out of the MRI project, but academic activity cannot wait until then. Indeed we have been thinking about imagination for a while (3000+ years) without doing any better than saying we cannot explain it, even though we have deep personal knowledge of it.
So embracing (e) above presents its own methodological challenges - part of the semantics and syntax of the language of analysis.
JC,
The agent in my sentence could be a robot, a machine with ways to sense and to act, or it can be a biological organism, and in that case the agent's behavior cannot be reduced to mere mechanisms but should be a real source of creation and causation. Humans are the only organisms where this creativity exists at another level, the level of imagination and language. A lot has been said about human imagination. I have my own pet theory, but the core creative part is still very mysterious, although we all deeply experience it. But it is like everything else. I do not know how I move my arm. Learning the relevant physiological knowledge would not improve my arm-moving performance. For that I have to dance, to play basketball. For imagination it is the same: if I want to be more creative, I have to create in all kinds of different activities and adopt attitudes favoring it.
Our imagination will always run 20 years ahead of the hardware. Therefore we have to adjust to many disappointments of AI enthusiasts.
@Louis - we seem to have a degree of agreement about 'agent'.
"But it is like everything else??" As you noted previously, "this is particularly difficult to decode".
The fact - or sense - that everything is NOT like everything else generates the great achievements of human knowing. Seeing that imagination is not like reason is a huge achievement - going back millennia of course. But it has the merit of sharpening our sense of reason even as it presents us with a profound complementing puzzle - imagination.
Yet the contrast between the two (especially fundamental to John Locke's thought) gives us remarkable insight into the human condition. Since we cannot enter the mind of God and have any certainty, uncertainty is fundamental to our living. And inevitably imagination is drawn into the analysis because of the uncertainty of our condition.
I want to resist making everything subordinate or subservient to machine-like reason because I want to preserve the aspects of the human condition that are not open to machines. As Heidegger - and Dreyfus, H - would point out, machines do not live in or 'inhabit' our world, so they cannot have the resulting intelligence about how we occupy our world. The AI rules have to be coded by humans if the system is to speak to the human condition, for the systems are not able to learn and code new rules for situations that were not already implicit in the existing program.
For decades my focus has been on the human responses to Knightian uncertainty in the realm of business. Not everyone is interested in this, of course, and it takes all sorts to make our world. But I still cannot fathom how or why anyone would want to pursue the idea that we shall eventually be able to 'explain' the human condition in terms of machine-like causal mechanisms. What would we then be able to make of Shakespeare, or Da Vinci's art, or Steve Jobs's insights? Would that all vanish as 'old-style' mistakes and misunderstandings, like alchemy?
As you say - humans are the only organism where the creativity we indicate with the term 'imagination' exists at another level. Well, we can't be sure of that, can we? What we can be sure of is that we cannot grasp the nature of imagination in organisms other than ourselves. Maybe cyborgs do dream of electric sheep.
JC,
At the quantum level, quantum physics does not claim, as classical physics used to, that nature is all mechanism. The next step will be to build a solid scientific theory of life with a place for free will, goals and intelligence at the most basic level of life. Then we will have to see how free will, intelligence and creativity increased at the different stages of life's evolution. In order to be sure, we have to follow the path of evolution; it is the path from the simple to the complex.
There are different biological sciences that look at the same phenomena of the evolution of life. I privilege the theoretical/mathematical biological approach because it is the simplest and the least ambiguous. And there are a few scientists along that line who try to find a place for creation in nature.
Louis,
I recall Roger Penrose moving in this direction. Quantum physics seems to open up the necessary space - which is in the right direction to move beyond simple causality - but what comes into the space? Evolution (Mother Nature's agency) or human agency? - or some other 'force' (extra-galactic agency?).
JC,
Cognitive Biology: Dealing with Information from Bacteria to Minds, Gennaro Auletta, 2011. Chapter 8, "The Organism as a Semiotic and Cybernetic System".
Auletta even defines a scientific teleological causation. I am not finished with this interesting reading.
I await your further interesting comments - but isn't teleonomy pretty much the same as (c) in my earlier posting? The ex-post appearance of purposiveness that leads us to believe some exterior (undefined) cause was acting on the organism? Or am I missing the point?
The teleological causation in primitive organisms is induced by inherent necessities. There is no mystical force.
er well ... Wilfried, can you - in the spirit of 'decoding' - unpack the notion of inherent necessity a bit, please?
To clarify my concern I should add that my question is really around the statics and/or dynamics.
If we take a purely functional approach we can argue that organisms have an 'inherent necessity' to feed - take in energy or whatever - simply because that is the way they are - else they perish. This is not the issue.
I sense we are debating the nature and source of the dynamics that lead the organism - or species, or whatever - to change.
Is this change a response to externally induced change? Like requisite variety? - which covers (a), (b), (c) and (d). Or is the change induced by human agency - the act of human Will?
JC,
" is reality around the statics and/or dynamics.""
Here is a story that poetically conveys my way of seeing an answer to this question.
Science is only about the static, what does not change among the changing. It is impossible to conceive the changing from the unchanging, but it is possible to conceive the static from the changing. So the changing is primary. The universe is based on an underlying chaos out of which some aspects/structures/entities have gradually stabilized. Even the laws of nature can be conceived as emergent. Thinking the opposite leads to the question of the origin of these laws in a supernatural realm. Biology is also the history of the stabilisation of structural creation events. So all that exists still exists as part of the original chaos, with mirages of what look like mechanisms and laws of nature, etc., but all these are just temporary patterns in this chaos from which imagination/creation emerges.
@ Louis. Lovely - this I buy! Though it smacks of eventual entropy death when the last vestiges of disorder die away.
JC,
The best book I know for inspiration about science, creativity and art is the Book on Painting by Leonardo da Vinci.
Louis,
I am not sure that I understand you fully. The phylogenetic process of organisms is not a teleological process. The result depends on circumstances in the environment of the population where the evolutionary process is active.
JC,
you asked 'er well ... Wilfried, can you - in the spirit of 'decoding' - unpack the notion of inherent necessity a bit, please?'
It is not a cryptic formulation. To keep an organism alive, conditions must be met which feed this organism with energy, building blocks of life (amino acids), water, minerals, vitamins and many other things. The lack of such essential substances represents a necessity that must be satisfied if it is not to be fatal for the organism. Only an organism which is able to satisfy its needs will survive.
Wilfried,
I do not fully understand Auletta yet. In the previous quotes he said that what he defined as teleological causality (downward causation) dominates at the level of the behavior of the individual organism, and that teleonomy dominates the phylogenetic processes.
I am totally lost ... I dunno what you guys are talking about.
Still, at least you seem to be enjoying your discussion!
Hi Clive,
You are quite right, of course. We seem to have wandered far, far off track from the question that began this thread. But some will see it as closer than one might think, because the sub-text of the FGCS was to develop computing capacity that could - among other things - simulate life. Our discussion has been about what precisely would have to be coded; my position being that the capacity to change (and survive) ultimately cannot be coded because it lies beyond the grasp of 'formal' language. Computers are not able to engage in the kinds of 'natural' languages that you and I speak.
Louis,
I don't know Auletta, but I have some problems with the expression 'teleological causality'. In my opinion there is an opposition within this expression. Causality means the effects depend on a certain cause. Teleology means that there exists a goal and there exists a supervisor who takes care that this goal will be reached.
On the other hand we have the random evolutionary process. There is no teleological plan, no supervisor, no goal. The process is blind and just running.
Hi Wilfried,
it is back to Aristotle and the thumbprints he has left all over us:
Aristotle held that there were four kinds of causes:
A change or movement's material cause is the aspect of the change or movement which is determined by the material which the moving or changing things are made of. For a table, that might be wood; for a statue, that might be bronze or marble.
A change or movement's formal cause is a change or movement caused by the arrangement, shape or appearance of the thing changing or moving. Aristotle says for example that the ratio 2:1, and number in general, is the cause of the octave.
A change or movement's efficient or moving cause refers to things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a boy is a father.
An event's final cause is the aim or purpose being served by it. That for the sake of which a thing is what it is. For a seed, it might be an adult plant. For a sailboat, it might be sailing. For a ball at the top of a ramp, it might be coming to rest at the bottom.
From the Wikipedia entry.
Wilfried,
One of the characteristics of living organisms is that they are surrounded by a boundary which distinguishes the interior from the exterior. This separation has also created a gradual (with evolution) reduction of the impact of external causes relative to internal causes. So this spatial separation has gradually created a split of the causality field (free expression) into internal (teleonomic) and external (efficient causation). It is the same sort of causality but with a distinction of internal vs external.
Another important characteristic of living organisms is that they have/want to survive, and they have to manage this internal world to survive but also to achieve other goals, and for that they have to anticipate and to have an enormous amount of knowledge of the world. Managing this internal world does not proceed from the lower aspects of this world but from the higher organisational aspects. So the boundary of life seems to correspond to the entry of downward causation (teleological causation in the language of Auletta). The process is not blind.
JC,
It is interesting to note that modern science begins with Galileo's/Descartes'/Newton's rejection of the fourth cause of Aristotle. This was a necessity of the program of geometrisation of the world. Aristotle was particularly interested in life, and he took the ontogenic process as a model for his general philosophy. Although science has progressed a lot on the geometrical model of efficient cause only, the understanding of life requires a neo-Aristotelian science. When we answer the question "What is life?", only then will we have this neo-Aristotelian science, a science of the organism.
JC,
thanks for the note on Aristotle with causa materialis, causa formalis, causa efficiens and causa finalis. His division, and his philosophy, is formative. But there are also other great minds, Anaxagoras and Heraclitus, who formulated not only the inherent causality like Aristotle, but the logos as a transcendent, goal-setting power. There are plenty more philosophers talking about this subject. These are very interesting investigations for understanding the universe.
We may assimilate these ideas, we may believe them or not - what is the message? Aristotle was not right just because his thoughts are in books. He was wrong sometimes - especially in scientific investigations. For example, Kant is against any causality in nature. So am I. Nature is organized mathematically, but all processes are blind. There exists no point omega where we (or nature) should go.
Louis,
I agree with you in most utterances. The distinction between the external world and the inner world in organisms is formulated plausibly. Yes, there are many inherent goals that must be achieved so that life does not end suddenly. Perhaps this 'teleology' can be compared with the Aristotelian causa materialis and causa formalis. Whether this distinction is meaningful can be doubted. But causes are not goals in the sense of telos. Objectives need to be defined from outside the subject and from a meta-level.
We do not know whether there is a logos or not. This question cannot be decided by scientific principle. You can believe it or not. But with this we leave the scientific discussion and get lost in fuzzy meaning. Therefore I want to terminate these thoughts in this blog, since it would only be an exchange of views.
Humbly I would like to note that I cannot agree with Auletta, since our terms of telos and causa are not compatible. Had the terms been chosen differently, there would certainly be much of value in his utterances.
But we will then have transformed the meaning of the term 'science' - indeed we cannot arrive at the destination you describe until we let go of science's current meaning. This hindrance is especially obvious in the social sciences. I don't do 'wet', so I have nothing to say about the biological sciences.
Wilfried,
Until the time someone spells out in clean terms what life is, how downward causation can exist, how real doing can exist, how creation and will can exist, how a living organism goes about its goals, how the ontogenic process is related to the phylogenic process - and until an empirical question is settled with these clear concepts - yes, until that time this question cannot be decided scientifically, I agree. Aristotle is the great synthesizer of the ancient Greek philosophers, more precisely the synthesizer of the Platonic branch, which locates mind as primary, as opposed to the atomists, who located atoms as primary. Aristotle managed to create a biological Platonism. Descartes was a mathematical Platonist and an atomist at the same time, but he realized that his new science could not deal with the Mind. We are still in this neo-atomistic-Platonist phase, waiting to get to the neo-Aristotelian phase. So many attempts have failed so far.
@All: Thank you for the very interesting comments and the whole discussion so far…
@Johann Kelsch: You've hit on exactly the idea I had when I asked this question.
In fact, in my opinion, research in Artificial Intelligence seems stuck in a dead end. After great success in the eighties and nineties of the past century (as 10,000 Maniacs have said, "Those were the days when we walked on clouds..."), now I do not see new ideas, new "math", a new "computer"...
Consequently artificial intelligence seems stuck in "the twilight zone", as does the entire Computer Science. Bloody Information Technology has dominated.
However, as the old Latin proverb goes, "Omnis imber post solis" - after every rain the sun comes.
Thus, we need a new math, new ways of thinking about and understanding human memory, reasoning and cognition... New ways of knowledge representation, a new (not exclusively algorithmic) way of processing this knowledge, perhaps some combination of algorithmic and heuristic processing... Some kind of combination of inductive and deductive reasoning...
Fine, to have something to begin with...
Ljubomir: " After great success in the eighties and nineties of the past century (such as 10,000 Maniacs have said, "Those were the days when we walked on clouds..."), now I do not see new ideas, new "math", new “computer”..."
As I mentioned above, this *perception* about "great success in the eighties and nineties" is mainly to blame for what we see today. ;--)
There has been no real "success", except for the successful propaganda!
http://www.amazon.com/review/R1PYIIY121MJOX/ref=cm_cr_pr_viewpnt#R1PYIIY121MJOX
Our discussion may become fruitful again if we leave aside all the philosophical remarks and specifically address the problem of 'consciousness'.
We do not need new math and can also work with the existing technical aids. But we need courage. One's reputation depends on the success of one's own ideas. If we publish suggested ideas here, they can be taken by any of us and presented as one's own. So everyone speaks only in vague intimations, and we do not come closer to the actual question. Maybe we need a new citation ethics first, so that good ideas can come closer to creating something REALLY new.
I might be wrong (I'm not God) ;--), but . . . I have already put my offer on the table. See, for example, http://www.cs.unb.ca/~goldfarb/FQXi_5.pdf (I just finished this essay for a physics essay competition).
Johann,
I think that the so-called off-topic discussion was not as off-topic as it appears.
Most of the first generation of AI scientists minimized the challenge of making a machine intelligent, and they minimized the complexity of the vision problem, the representation problem, the locomotion problem, the planning problem, etc. Most importantly, they did not seriously study the biological systems which are dealing with these complex problems. The most important part of solving a problem is always the specification of the problem. If the problem is arbitrarily set, it is most likely not relevant and, most importantly, not solvable. I do not exclude the possibility of creating a theoretical science called "AI" whose field of study is the creation of sense-acting systems for artificial agents acting in the world. In order to be a theoretical science, it will need a general architectural framework that is close to the one that evolved in living organisms. So I see the possibility of a theoretical AI close to a theoretical biology. It has to be theoretical; I do not believe in the hacking type of AI. That type has failed and will continue to fail.
Johann: "Why, the hell, we have no artificial intelligence in our house now?"
Because of:
1. The inability to identify the central information process (induction, pattern recognition).
Within the PR/machine learning field:
2. The inability to give up the conventional formal machinery, especially the probabilistic framework as the main panacea.
3. The inability to take a radical step in the history of science: to seek a fundamentally new---non-numeric, or structural---form of *data representation* that is supposed to reveal a completely new side of reality (of objects), inaccessible to the numeric representation.
Hi Johann, this is circling us back to the original question - which is good.
Several posts back we were talking about 'life' and the possibility of simulating it. Remember the excitement of the Santa Fe folks at 'simulating life'. In this sense 'life' and 'intelligent' are alluding to the same property, the ability to go (live or think) beyond the logical implications of the current knowledge or form. I suspect the materiality of the living versus thinking is actually immaterial to the discussion.
The real issue is whether we distinguish - in Kantian manner - analysis and synthesis, for then simulating life or intelligence takes us into trying to simulate or model synthesis - yet we suspect machines are 'lifeless' and 'unthinking' in that they are only capable of analysis.
Of course this turns on your definition of 'intelligent'. Not everyone will want to relate intelligence and synthesis as I do. Recall that the 18th-century notion of intelligence was often 'the ability to hold two contrary ideas in mind at the same time and still function'.
This analysis-synthesis balance is what the Turing machine debate is about. An expert system can often appear to be intelligent because it is capable of surprising us - but I would argue that is because of our own 'lack of intelligence', of our not realizing the analytic implication of what we know already.
Hi Johann,
Doubtless among the readers of these posts are some who know enough about the latest advances in mathematics and/or computing to help me see further - but my imaginings are bounded by the following:
Option A. If we had a searchable database of all human knowledge - the kind of thing Diderot was working on with D'Alembert, or the book of knowledge the Abbasids were assembling in 800 AD - then we would be able to 'synthesize' by using one item of knowledge in a new way, by applying it to a novel lived and experienced context. This is what some think of as metaphor.
The possibility of doing this arises precisely and only because the database imagined is incomplete and riven with discontinuities and contradictions - just like human knowledge. We have no touchstone of certainty to use as the tool to resolve these knowledge-defects.
To use previous knowledge as a metaphor - think Kekulé and the dream of the serpent biting its own tail - is to adopt the 'nothing new under the sun' position, to presume that the basic modes of thinking of which we are capable have already been explored pretty fully in the great civilizations that preceded ours - Greek, Chinese, whatever. Here I'm thinking along the lines Joseph Campbell suggested - that all great human stories follow the same format from Homer through to Star Trek. As Kant implied, we are prisoners of our limited modes of knowing - which almost certainly prevent us from knowing some things about our situation - physical, psychological, sociological, psychic, and so on - that we could know if only we knew better.
Our challenge is to use our limited inventory of modes or forms of knowing as best we can to capture as much of what we experience as we can and thereby pull that into the realm of human discourse and make it available to shape our practices.
Option B. Perhaps the human mind is a generative machine. A random number generator picks from the range of all possible numbers. But a generative machine has no notion of 'all possible', so we imply the human mind has no constraints and can always be open to "boldly going where no human mind has gone before", for ever.
Since the new thought has to be brought into the realm of human discourse it will probably 'leverage' from some already-known discussion. Thus Kuhnian 'new paradigms' leverage from known 'anomalies' that enable us to relate and position the intellectual innovation. For example, this thing hanging in the meat shop is an electrostatic bug-zapper - it does what you used to do with a fly-swat, but much better. But there is nothing about the fly-swat that can help you grasp how the bug-zapper works.
Once we have imagined the generative machine anything becomes possible, by definition, it becomes a tautology. The import of the definition is that the generative machine's mechanism is defined as undefinable - for if we could define or model it, then it would lose the open nature we say characterizes it. Synthesis would then be simply a term used to differentiate the workings of the generative machine from the workings of our analytic machines (computers as we presently understand them) - i.e. logic.
Option C. If we doubt the utility of Option B - which is no more than an expression of faith - but still want to imagine the possibility of coming up with something truly new that would amaze the Ancient Greeks, we might focus on our own ignorances or what I call knowledge-absences. Even if the modes of human thinking were known long ago the challenge is to extend what we think about.
This presumes a distinction between cognition and experience; it separates abstractions from knowledge of experience, the conceptual discourse from the empirical. Democritus imagined atoms, but using the established modes of human thought we have probed into sub-atoms, microbes, viruses, dark matter, and all manner of intriguing things the Greeks knew nothing about. This shows the power of the empirical program as opposed to that of the abstract or mathematical program. Plainly its power is largely derived from and driven by our advancing technologies of observation - microscope, telescope, and so on - and takes us towards Bridgman's operationalism. Knowledge is what we can do. Synthesis is a term for advancing this frontier in the empirical realm.
Option D. Personally I do not like any of these options very much.
I would prefer to use the term synthesize in a way that turns on our knowledge-absences, yes, but not quite in the same way as Option A, which looks at the gap between novel experience and newly capturing it in language used, at least initially, as metaphor.
I am more intrigued by the gap 'between' what we know that results from our adoption of the different axioms that must underpin discourse. The nature of discourse is that it does not stand on true representations of anything - facts - but on assumptions we choose. These become the discourse's axioms. Since all discourse is partial, like all politics is local, no discourse can grasp the entirety of the human condition. We inhabit a partially known context and discuss it with several incommensurable languages.
Synthesizing might then mean re-adjusting the axioms underpinning a certain body of knowledge so as to absorb another body of our knowledge that was previously taken to be quite disparate. Physics offers famous examples - Maxwell drawing electricity and magnetism together into a single discourse, Einstein drawing mass and energy together.
The synthesis here is less to do with discovering something new, even when impelled by new technologies of observation, and more to do with clarifying or correcting our previous misunderstandings; it is thus more focused on our knowing and less on what we might discover. The positivist aesthetic is to presume that all our knowledge efforts are directed towards a single coherent and logical body of knowledge - the Truth. I do not think we have to adopt this kind of mysticism to have a workable notion of synthesis as the continuous re-adjusting of what we know so as to eliminate what seem in retrospect to have been misunderstandings - all driven by the needs we find in our day-to-day living.
Overall then we have a few options and can ponder which are most available to computing machines as we presently understand them - we are obviously on slippery ground here for there are circuits of re-conception. But it seems clear that machines suffer a major deficiency compared to human beings - suggested by Dreyfus's use of Heidegger. Since machines do not inhabit our lived world they cannot experience surprise - and that, it would seem to me, is what drives the entire notion of intelligence, the human ability to continue to function when surprised. Machines process what is given to them to be able to process, or stop. We seem to do better.
JC,
Google search is getting there.
The human mind is a cultural engine for thinking together. The new networking technology will lead us to more and more effective means of thinking together conspicuously, like a common dream.
Machines, as well as any form of (machine) AI, are man-made, after all. Would a simple calculator be more accurate and faster than an average human? Of course it would. Is it smarter than an average human? Of course it isn't. Computers are far more complicated than calculators, true, but basically they are still machines.
There are core, vital concepts that differentiate living creatures from anything else, such as self-awareness, knowledge, and, most importantly, life.
It is taken as fact that mathematics (or logic in general) is undeniable reason: when logic says something is true or false, it must be so. So can a logical machine, with very fast operation and a hugely wide database of knowledge, make a guess based on instinctive knowledge? I don't believe so. Yet such behavior is not only necessary to sustain (organic) life on Earth; humans have in fact depended on it to develop science at certain points.
Instinct is not the only source of knowledge; in fact it is the simplest form of it, while experimenting is the most important one. But experimenting needs senses besides logical interpretation. Would it be possible to build this into a machine of any kind?
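A toy sketch of the guessing question above (the database and the heuristic are entirely made up for illustration): whatever fallback rule a logical machine uses when exact retrieval fails, that "guess" is itself just more programmed logic, not instinct.

```python
# Hypothetical knowledge base - exact retrieval when the query matches.
knowledge = {
    "boiling point of water": "100 C at 1 atm",
    "freezing point of water": "0 C at 1 atm",
}

def answer(question):
    if question in knowledge:
        return knowledge[question]  # exact, logical retrieval
    # The "instinctive guess": pick the stored entry sharing the most
    # words with the query - a fixed heuristic someone programmed.
    best = max(knowledge,
               key=lambda k: len(set(k.split()) & set(question.split())))
    return "guess (via '" + best + "'): " + knowledge[best]

print(answer("boiling point of water"))    # exact match
print(answer("what is the boiling point"))  # forced to "guess"
```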
The discussion is very interesting, but pointing in the wrong direction. The question of an intelligent (smart) machine cannot be solved with pure logic and exact methods. That would be a god-machine.
If man is the model, then we need a machine which suspects analogies (even where there are none), creates associations, operates with imprecise values and vague conclusions, and deals with suspect probabilities. This is far from perfect problem solving, but it works (somehow).
As long as a machine cannot cope with these uncertainties, it is not really smart. Accuracy exists only in the ivory tower of mathematics. But whoever tries to understand the world with mathematics alone goes insane, because real life is full of surprises which are not mathematically manageable.
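For what it's worth, a minimal sketch of "operating with imprecise values" in the fuzzy-set spirit mentioned earlier in the thread - the thresholds below are invented for illustration. Instead of 'warm' being simply true or false, the machine carries a degree of membership between 0 and 1.

```python
# A fuzzy membership function for 'warm' (thresholds are made up):
# fully false below 10 C, fully true above 25 C, graded in between.
def warm(temp_c):
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15.0

for t in (5, 15, 22, 30):
    print(t, "->", round(warm(t), 2))
# 5 -> 0.0, 15 -> 0.33, 22 -> 0.8, 30 -> 1.0
```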
The fifth generation concept only thought it wanted intelligent systems. What resulted was a drive towards further logically-orientated paradigms - the functional and logic languages, for example. There was some drive towards wetware - biological systems - but little came of that. Neural networks reared their tired heads again, but in the end there was no clear philosophy to steer the direction. There still isn't. When it comes I suspect it won't be deterministic, logical, computer-language orientated, commercial or even desirable. The key values will not be computational but sociological, psychological and ecological (including economics).
If you possess all the knowledge you ever need, you don't actually need intelligence.
This is where it starts - what do you mean by knowledge... intelligence... need/want... meaning... and what has all of this got to do with thought?
We need these terms defined before we can describe thought and break out of the loop that it's all about computers.
In a number of questions on this site I have kept asking for these terms to be defined, but we always go straight back to damn computers. That is why the Fifth Generation initiative failed.
Johann,
yes, WE need pure logic and exact methods to crack the nut. The result will be a smart machine which will also be able to calculate smartly (and not purely logically).
All the complex systems that engineers design/build/manufacture/operate are specified down to the last little detail, with the bugs removed during the prototyping phase. The AI dream is to create a meta-engineering methodology. Instead of specifying the design to the last detail, we would concentrate the design at a meta-AI level and the system itself would work out the billions of details. To create systems more complex than the space shuttle we need an AI meta-engineering method.
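Louis's idea can be caricatured in a few lines of Python (the constraint and solver are made up for illustration): in the declarative style, the engineer states what must hold and a general mechanism grinds out the details.

```python
from itertools import product

# Declarative toy: state WHAT must hold; a generic search works out HOW.
def solve(constraint, domain):
    """Return every pair of values from the domain satisfying the constraint."""
    return [vals for vals in product(domain, repeat=2) if constraint(*vals)]

# Constraint (arbitrary, for illustration): x + y == 12 and x * y == 35.
print(solve(lambda x, y: x + y == 12 and x * y == 35, range(21)))
# -> [(5, 7), (7, 5)]
```

The brute-force search here stands in for the "system working out the details"; scaling that idea up to real engineering is, of course, exactly the unsolved part of the dream.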
I read a rather interesting book last night as I was crossing the Atlantic:
Vattimo, Gianni, with Piergiorgio Paterlini. (2009). Not Being God: A Collaborative Autobiography (W. McCuaig, Trans.). New York: Columbia University Press.
It picks up on several of the posts in this now long thread. It also turns out that Vattimo's discussion of knowledge (re Ahmed's post) and God (re Wilfried's post) relates to some of my earlier comments.
The issue we are turning over again and again is whether 'smart' is to mean something beyond machinic thought (John Sanders's point) - even if there is something to Louis's notion of 'meta-something or other' (which I do not understand - mea culpa).
My argument, and Vattimo's, is that humans are simply not like machines and, in consequence, we have some other modes of thought available, on which turn our notions of 'smart' and 'intelligence'. Vattimo also advances along Heideggerian lines, re my earlier point about machines not living the lives we lead and therefore being unable to connect to some of the challenges we face in being human and acting in a 'human' space.
I would recommend Vattimo's charming little book to anyone interested in these approaches.