The question concerns ontological, epistemological, methodological and praxiological relationships.
Philadelphia, PA
Dear Rinke & readers,
Thanks for your contribution to this thread of discussion.
It seems doubtful that we should regard consciousness as a kind of knowledge, as you have it--though knowing is a way of being conscious. The standard definition of "knowledge" is going to be something like "justified, true belief"--at the least, on the accepted philosophical account, the concept of knowledge involves these elements. (I won't go into technical problems in the traditional definition--deriving from Plato.)
The chief point is that consciousness need not involve belief, conceptually formulated or "justified belief." Perception or even simple sensation involves consciousness, but does not seem to be a matter of "knowledge" or justification or belief. There seems to be some anthropomorphism in your talk of consciousness--attributing characteristics of human mentality and its peculiar, developed and interesting forms to consciousness generally.
Perhaps a counter-example will make the point. Imagine an infant that sees its feeding bottle and reaches for it. The child is certainly conscious, but perhaps at an age still lacking in conceptual development. We might say the infant doesn't know what a bottle is. What is going on in the seeing and the reaching would not seem to be a matter of "knowledge"; it is instead that we too easily reach for our own familiar concepts in describing the situation. Something is definitely going on between the seeing and the reaching for the bottle, and we may suppose that there is some sort of pre-existing "action potential" involved, but it would seem to be sub-reflective, and based on simple past association of the bottle with the pleasurable experience of drinking.
I suppose that even a flat-worm with simple "eye patches" may be aware of the difference between lighter and darker, and this would seem to be a kind of sensation. But I would think it implausible to describe such consciousness in terms of knowledge, belief, or justification. The general point is that there are grades of consciousness, and we should not attribute "knowledge," which is a conceptual form, in every case of conscious experience.
I suppose that in order to give any account of "artificial consciousness," we first have to be pretty clear on consciousness and its varieties.
H.G. Callaway
---you wrote---
Consciousness I interpret as knowledge as well, but it is focused on your self and your surroundings and the interactions in between. When you are able to generate this kind of knowledge and a formal description of the interactions through some kind of algorithms, then, using expert system technology or "problem solving algorithms", you can draw conclusions from this knowledge. So you have "artificial consciousness", which (I believe) is under the hood of "artificial intelligence".
Intelligence is necessary for consciousness, but not sufficient. By consciousness I mean awareness of self. There is a lot of hype and excitement about AI's advancement. It is understandable, because the technology is quickly seeping into every corner of modern life, present in everything from autonomous vehicles to mobile phones.
You can look at the article "Artificial Intelligence and Consciousness" by Goutam Paul. It clarifies the difference and the relationship between the two, intelligence and consciousness, and finally discusses the possibility of artificial intelligence and consciousness.
Sincere greetings
Stefan korn wrote:
Most of the recent advances in AI are undoubtedly “intelligent” efforts by machines – IBM Watson, deep learning, passing the Turing test, etc. But these machines are not conscious. It is possible to simulate intelligence and hence pass the Turing test. However, it is not possible to simulate consciousness.
What most people think AI means should actually be called “Artificial Consciousness” – i.e. the ability for a machine (created by humans) to be self-aware, have independent original thoughts and feelings, interact with the physical world, and be able to exercise free will. AI and all the wonderful things being developed right now by Facebook, IBM, Google and just about every software company on the planet have nothing to do with Artificial Consciousness.
The distinction between intelligence and consciousness is important as it makes all the difference when it comes to being evil (i.e. doing harm intentionally). To be truly evil a machine (or a human) must have a basic level of consciousness because an element of free will is required. Otherwise any harm is unintentional, a mistake or just bad luck. A very basic level of consciousness is also important for being evil as most people who are thought to have obtained a very high level of consciousness tend to be utterly benign, altruistic and deeply caring (e.g. the Dalai Lama).
No matter how sophisticated AI gets, it won’t be conscious. The reason for that is – probably – that physical constructs such as the brain don’t give rise to consciousness (if anything, it might be the other way round). Probably is a key word here because, truly, nobody knows for sure (cf. the hard problem). Consciousness also happens to be an area that research and science struggle with a lot. Our entire scientific system is based on a mechanistic and deterministic model of the world, and the scientific method relies on objectively verifiable processes. However, human consciousness is inherently subjective, so it’s a pretty tricky field of enquiry for researchers and scientists. It looks like we won’t make much progress in consciousness research any time soon. Another reason to relax about AI.
Finally - even if we did eventually find a way to create consciousness artificially it is likely that an artificially created consciousness machine would evolve so rapidly that it would become benign the minute it is switched on (as it would rapidly iterate through consciousness evolution not constrained by comparatively slow biological processes). It would probably just tell us to stop smoking and look after the planet better – but in a very empathetic and constructive way with lots of really useful suggestions.
Artificial Intelligence - the way a machine imitates human-like problem solving and decision making (to acquire and apply knowledge).
Artificial Consciousness - the triggering mechanism for interacting with the environment, deciding whether to act on it or not (humans have their reasons, but machines have their triggers).
Philadelphia, PA
Dear Horvath & readers,
I take it that the question of "artificial consciousness" is basically an experimental question. (Whether we want to perform the experiment is a different question.) Computer scientists may try to engineer it and then wait to see the results of their work. Much the same might be said for the adequacy of projects in "artificial intelligence."
Of course we need some considerable clarification of what is to count as "consciousness" and "intelligence"--and how to detect them--and much of this comes up when people argue against the possibility. But even if, say, "artificial consciousness" proved to be beyond the possible scope of engineering, it would certainly be interesting to understand exactly why the project would fail. That, presumably, would tell us something of significance about consciousness.
H.G. Callaway
Artificial intelligence is debated. Progress has been made, but it is usually exaggerated by supporters. I did some IT work with AI in the 1980s, until support for it diminished. Within arbitrary limitations, the computer code modifies itself using rules and resources.
Claims of artificial consciousness are made, but they are largely rejected by the research community. Real-time data acquisition is done, but with no internal routine to decide whether the data is real or simulated.
Roger Penrose gave much attention to problems of math that are considered to be non computable, suggesting that human consciousness and intelligence can deal with and tolerate such problems and ambiguity while computing algorithms never find an answer.
In human organizations, values are adjusted for inclusion. Computer networks also have a type of inclusion, but one regimented by a human manager. Human organizations find much time spent on solving problems because they are solvable, although they may not be what the organization needs at the time. Unsolvable problems continue unsolved until the available time expires.
For humans the passage of time resolves many ambiguities, and a system of values moderates the progressions. It seems unlikely that a computer would accept human values, even when programmed into AI limitations.
According to Vivekananda, the famous social reformer from India, consciousness is something related to our perception; it is not intelligence.
According to me, consciousness is one of the features of intelligence. Intelligence has no bounds, while we have to be conscious under particular circumstances (where our intelligence may not augur well) in order not to miss certain other outcomes or to avoid some undesirable outcome.
Dear Colleagues,
Thank you very much indeed for these interesting answers! I think they are all useful. Having said that, I would like to further articulate the personal interest behind my initial question (What is the relationship between 'Artificial Intelligence' and 'Artificial Consciousness'?). As the related clarification says, the intent is to learn more from various perspectives in the philosophy of science. For instance,
from an ontological viewpoint:
What exists in those things called 'consciousness' and 'intelligence'?
Do they and how much do they reflect the mind-matter dichotomy?
Are they completely different concepts, same concepts, or overlapping concepts?
How are 'consciousness' and 'intelligence' interrelated?
Are they dependent on each other in one way or another? Which depends on which?
What exists in those things called 'artificial consciousness' and 'artificial intelligence'?
from an epistemological viewpoint:
What is the (“informational”) foundation for ‘intelligence’ and ‘consciousness’?
What are the sources of “information” for ‘intelligence’ and ‘consciousness’?
How are physicality and intellectuality interrelated in ‘intelligence’ and ‘consciousness’?
What is the generic informational foundation for ‘artificial intelligence’ and ‘artificial consciousness’?
What are the generic sources of input for ‘artificial intelligence’ and ‘artificial consciousness’?
How are 'artificial consciousness' and 'artificial intelligence' (including research and development) interrelated as domains of scientific inquiry and knowing (scientific disciplines)?
from a methodological viewpoint:
If ‘human intelligence’ manifests on four levels and in several types, how does ‘artificial intelligence’ manifest?
How and why is it possible to reproduce consciousness by non-biological structures?
How do ‘consciousness’ and ‘artificial consciousness’ manifest?
How are 'artificial consciousness' and 'artificial intelligence' interrelated as reproduced phenomena?
from a praxiological viewpoint:
What are the assumptions behind ‘artificial intelligence’ and ‘artificial consciousness’?
How is the ‘inner world’ of an individual associated with the ‘external world’?
How do ‘artificial intelligence’ and ‘artificial consciousness’ relate to human action and conduct?
Can ‘artificial intelligence’ and ‘artificial consciousness’ be equivalent to their natural counterparts?
With best regards,
I.H.
Philadelphia, PA
Dear Horvath & readers,
Approaching "consciousness" via language: might we find something of its general character in a special case?
Perhaps there is a version of the "origins" question reflected in the experience of learning a foreign language. Surely, comprehension in the new language develops gradually, and in the meantime, there is much that is missed or imperfectly understood. One might say it's a somewhat "flickering" awareness at first. This is a matter that is at least partly evident in testing for comprehension; and testing is something that language teachers frequently do, for their own specific purposes in instruction.
Also of similar interest, I suspect, are the ways in which people infer the meaning of an unfamiliar word from its usage and the context of usage. We might think of this as a process of becoming aware of the meaning of a word; and the process of becoming aware seems to be reflected in the formulation of dictionary entries on the basis of empirical studies of usage--the work of the lexicographer.
I think the general point is that we might do well in this fashion without need of wondering about the absolute origins of consciousness. Since, as is plausible, consciousness continually grows, its new forms must, in consequence, be available at just about any time and place. Surely, if someone learns a new word, or infers its meaning from usage, then that person becomes able to do something that he or she could not do before: at the least, begin to use the word correctly. Isn't this also a difference in the world? Doesn't it imply other practical abilities? A key to this approach is that it makes little sense to consider linguistic meaning apart from empirical evidence of usage, which in turn has all manner of relations to practices. We speak in certain ways partly because it facilitates our participation in related practices and doings. The efficacy of consciousness seems to be evident in the relationship between understanding language and success in related practices.
Maybe the question of the absolute origin of consciousness is somewhat like the question of the absolute origin of life? Evolutionary theory gets along in understanding living things even in ignorance of absolute origins.
H.G. Callaway
---A correspondent wrote---
An even more basic question that requires more consistency in how we define and ultimately understand consciousness might be "when does consciousness begin", even considering as far back and primitive as fetal development in the womb. Consciousness itself cannot be just a single, all or nothing attribute, but a gradually developed, complex mechanism, continually updated and modified by daily exposure to internally varying brain plasticity, and exteriorly applied social pressures and human culture at large. Language would be just one example of a component to modern day consciousness that typically evolves within each person and between each person, from day to day.
That is an interesting question, and I believe you will find a lot of different answers based on different views and philosophical interpretations. My "personal" interpretation is like this: Intelligence as used in "artificial intelligence" I interpret as knowledge or background knowledge, as in the term "intelligence agency": "knowledge generated by algorithms", thus "artificial intelligence". Consciousness I interpret as knowledge as well, but it is focused on your self and your surroundings and the interactions in between. When you are able to generate this kind of knowledge and a formal description of the interactions through some kind of algorithms, then, using expert system technology or "problem solving algorithms", you can draw conclusions from this knowledge. So you have "artificial consciousness", which (I believe) is under the hood of "artificial intelligence".
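As a rough sketch of this interpretation (not the poster's actual system), here is a tiny forward-chaining "expert system" whose facts describe the system itself and its surroundings, and whose rules draw conclusions from that knowledge. All facts, rules, and names below are invented purely for illustration.

```python
# Facts about the (hypothetical) system's own state and its surroundings.
facts = {("battery", "low"), ("location", "docking_station")}

# Each rule: (set of required facts, fact to conclude) -- all invented.
rules = [
    ({("battery", "low")}, ("goal", "recharge")),
    ({("goal", "recharge"), ("location", "docking_station")},
     ("action", "start_charging")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # fire a rule when all its conditions are already known
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

conclusions = forward_chain(facts, rules)
```

The second rule only fires after the first has added its conclusion, which is the sense in which the system "concludes using this knowledge" about itself and its environment.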
Artificial intelligence is one of the types of intelligence, programmed with an organized mechanism; recently, work with it has begun in a programmed manner.
Imre,
I would like to make one observation on your last comment. In pondering the question of 'Artificial Intelligence' and 'Artificial Consciousness', I would tread carefully and isolate these terms from the intelligence and consciousness debate. We cannot simply state that they are equivalent, given their intrinsically distinct natures. For example, the mind-matter dichotomy is reflected in AI as the software-hardware problem, which I think is more delimited than the mind-matter problem. While they share similarities, they are very different.
To tackle the questions that you have pondered, one of the first things to do is to establish operationally what we are talking about in terms of technological procedures and start from that point. As I have pointed out in several other threads, the definition of artificial intelligence has shifted through the years, and artificial consciousness is tangled up in too much bias about what consciousness implies.
Regards
Dear Dr. Qassem,
Unfortunately I do not know Arabic, so I used Google Translate to obtain an English translation of your comment. I received the following interpretation: "Artificial intelligence is a type of intelligences programmed by a structured mechanism". Please kindly confirm if this is correct. Thank you very much and kind regards,
Imre Horvath
Dear Imre Horvath and OTHERS (see below)
You say: "The question concerns ontological, epistemological, methodological and praxiological relationships." Well, this is just plain too MUCH, TOO VARIED, AND too COMPLEX a "WAY" to approach the Subject, IF you are indicating all 4 must be done and all had in mind (actually, that would be simply unrealistic, based on the science of memory, for which there is good evidence indicating what I just said). For your question, "What is the relationship between 'Artificial Intelligence' and 'Artificial Consciousness'?", you really need just a fully empirical perspective on the human (ultimately THE Subject), BROADLY CONCEIVED (what you might call: praxiological). AGI (artificial general intelligence) is supposed to thoroughly model the human in major senses, after all; AND, you will not otherwise get answers out of "thin air" by just thinking and imagining.
All,
I certainly find very relevant and productive what philosophy has to say about this topic. Certainly, we have different methods but shunning another branch of knowledge leaves much to be desired.
I would like to ground the discussion a bit.
Let us take as a starting point that we choose a connectionist paradigm and design the system in the attached image. We program it to recognize Chinese symbols, and upon producing output it receives feedback, not just on what the result was, but feedback that provides adjustments to the process.
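Since the attached image is not reproduced here, the following is a minimal, hypothetical stand-in for the kind of setup described: a connectionist learner (here just a single-layer perceptron, far simpler than any real Chinese-character recognizer) whose output triggers feedback that adjusts the process itself, i.e. the weights. All data and names are invented for illustration.

```python
# A single-layer perceptron: output is produced, compared to the target,
# and the resulting feedback adjusts the weights (the "process").

def predict(weights, bias, pixels):
    s = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            out = predict(weights, bias, pixels)
            err = label - out          # feedback on what the result was...
            if err:                    # ...used to adjust the process
                weights = [w + lr * err * p
                           for w, p in zip(weights, pixels)]
                bias += lr * err
    return weights, bias

# Two made-up 3x3 "symbols" flattened to 9 pixels, labeled 1 and 0.
samples = [
    ([1, 1, 1, 0, 1, 0, 0, 1, 0], 1),   # a "T"-like glyph
    ([1, 0, 1, 0, 1, 0, 1, 0, 1], 0),   # an "X"-like glyph
]
weights, bias = train(samples)
```

The point of the sketch is only the feedback loop: the error signal flows back into the parameters of the very mechanism that produced the output, which is the feature of the connectionist design the questions below turn on.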
Questions:
Notes for philosophers: This type of problem, and the questions posed, is highly relevant to us in the computer science realm and merits your attention. The questions posed focus on epistemological as well as metaphysical aspects of AI.
Notes for computer scientists: Coming up with a better example is the challenge. Also, I have not used deep restricted Boltzmann architectures, due to the entanglement of the modules, which can render the analysis extremely complicated.
Let's work together.
Dr. Jesness mentioned: "... really need just a fully empirical perspective ...". Many people committed to science can easily agree on this claim. However, my feeling is that the very issue of 'perspective' is already problematic. Namely, which one? In the attached figure I identified four dominant (coexisting and competing) perspectives. They try to explain the phenomenon of 'consciousness' (and ‘artificial consciousness’) based on the related bodies of background knowledge (and, perhaps, also based on some specific purposes). The challenge is: how can we integrate these perspectives and arrive at a synergistic view (description, explanation, prediction) concerning the phenomenon in question? Empirical (observational, experimental, simulation-based, etc.) approaches may not be able to properly and sufficiently address it, due to its complexity and intangibility. That is why additional philosophical speculation may be helpful and useful. But it will also face the complexity and multiplicity issues of the perspectives.
Dear Arturo Geigel and Imre Horvath
I am afraid I am of the view that shunning philosophy (EXCEPT good analytic philosophy, which deeply involves itself with science or other clearly consistently-motivated Subject areas) IS THE CORRECT THING TO DO. I believe there is a VERY good reason why nowadays over 50% of philosophers consider themselves, and try to be, ANALYTIC philosophers. The rest of philosophy is overly armchair-based, incongruent and inconsistent with science, AND a menace. "Define", "Define", "Define", yielding endless multiple pseudo-"interpretations" AND EXTREME, AND EXTREMELY WRONGFUL (either wrong or false), DUALISMS.
It seems for those philosophers who remain rogue (i.e. the rest of them, other than SOME good analytic philosophers), any particular word that SEEMS to refer to something distinct may well be deemed so, even when there is NO EMPIRICAL EVIDENCE FOR THIS; thusly, something can be an issue for them (and one they can come to no generally communicable agreement on with others) -- it is effectively of the standing (status) of undisciplined child's play.
If you want a longer and more detailed retort (with a LOT of specifics), see my Answers to the Question (in the thread) : https://www.researchgate.net/post/Can_philosophy_help_to_innovate_and_develop_scientific_theory#view=5cd20ba5f8ea5275016384fe (I have participated there for 2-3 years. Most traditional philosophy has been defeated THERE for claiming to be of ANY good/good-use to ANY SCIENCE, in reality -- CLAIMS notwithstanding.)
EEK, EEK; YUCK, YUCK, YUCK. You put all your efforts into thought which is not reasonably (more bit-by-bit) grounded, and you use your thus-damned minds to come up with garbage (garbage in, garbage out). Essentially: blither/blather, blah, blah, blah. You cannot force one to "play your game", and it IS, in essence, a game -- and that is all (perhaps as complex as chess, but who cares).
Brad Jesness,
Take a look at the model that I proposed and try to answer the questions without a philosopher's help.
It is simpler to prove your point to me in research than just to talk about it.
Dear Arturo Geigel
I will look at no model which is not grounded/well-founded at each step _AND_ THIS REQUIREMENT HOLDING FOR EVERY SINGLE CONCEPT in the model (or actually involved in the model), with the ONLY exception(s) being some underlying processes, yet to be well-defined, BUT any of those must be CLEARLY RELATED TO truly empirically-related PRINCIPLES ACCEPTED BY ALL. If your model were like this, I would like to know about it (if it had some value to my study areas), and it would have some real and recognized usefulness, demonstrated AS science itself; see my posts in the other thread I have referred to for a general definition of science (i.e. true of ANYTHING that is really scientific).
For something (any concept or any relationship between concepts) to be well-grounded and well-founded, it must be related MINIMALLY to some KEY (pivotal) directly-observable [set(s) of] overt phenomena (showing inter-observer reliabilities -- all necessary science reliabilities), these phenomena found (in space and time) together, AND these being very likely related to OTHER (later) phenomena, EQUALLY WELL-DEFINED (scientifically defined, as is the case for that first set(s) of phenomena) -- the relationship of the 2 separate sets of phenomena constituting VALIDITY.
No aspect (e.g. concept or relationship(s) between concepts) of ANY MODEL should precede/pre-date some excellent, clear agreeable observations (the nature of "agreeable" observations being scientific, as described above).
In the book entitled 'The Unity of Mind, Brain and World: Current Perspectives on a Science of Consciousness', published in 2013, the editors, Alfredo Pereira Jr. and Dietrich Lehmann, wrote: "A consensus appears to be emerging, assuming that the conscious mind and the functioning brain are two aspects of a complex system that interacts with the world." They regard consciousness as a complex phenomenon, which forces us to assume that explainable relationships exist among the minds, the bodies, and the world, and to expect a kind of unity. They claim that consciousness cannot be studied by experimental approaches in its entirety, and propose a theoretical synthesis. In my view, this theoretical synthesis will remain 'theoretical' if the empirical facts provided by all of the concerned disciplines and their inductively and/or deductively generated theories are not considered as a starting point.
Dear Imre Horvath
Any "consensus" (agreement) not related to well-founded, scientific concepts (and somehow reliably demonstrated clearly or proven) is less than meaningless to me. Philosophers have for hundreds of years been largely a menace to science and good thought; we would all know if the case were otherwise. Many philosophers have absolutely no significant experience with, and using, GOOD EMPIRICISM (see my post above, and those in the other thread I refer to, also in one of my posts above), thus they are "out of the game".
But, if this "stuff" is all a person does, and he/she knows nothing else, they and I shall have little to dialog about (but just mainly the deficiencies in/of the thought).
Brad,
You will not look at it because you do not want to or do not know enough computer science to evaluate the model itself. These steps are well grounded in machine learning and AI and have been extremely well studied (some of them for decades with hundreds of papers on them). Further, the problems posed are well established problems in philosophy.
If you do not know about computer science and philosophy, which are the fields that you are trying to criticize, then I suggest you let people who want to collaborate in the pursuit of knowledge do so unhindered.
Dear Arturo Geigel
You want to be "unhindered" in your ways; I understand. I will now go (leave this thread); I cannot think I have anything more to say (and my assessments can neither be well-considered nor argued against, anyway, by your ilk). NOTE:
For your information, I have reviewed Anderson's ACT Theory, which is often used as the basis of artificial general intelligence systems. Thus, I likely know more that pertains to computer science than anyone here (or likely to come here). ALSO SEE my grand offerings to computer science, so all your more poorly grounded and poorly founded models can be properly disposed of: https://www.researchgate.net/project/Developing-a-Usable-Empirically-Based-Outline-of-Human-Behavior-for-FULL-Artificial-Intelligence-and-for-Psychology .
Like I just said though, I am going so you can enforce basically a philosophy-only policy to NO avail, like 100s of years of history have shown us. The philosophers, again, will surely have no meaningful success, but may once again distract and misdirect others, resulting basically in "noise" and a waste of time -- continuing the record of most philosophy at being a menace.
Goodbye.
Philadelphia, PA
Dear Horvath & readers,
The word "consciousness" appears in the phrase "artificial consciousness." The natural supposition would be, then, that artificial consciousness would have to be a kind, sub-type, or special case of consciousness. In trying to understand what may be intended by the phrase "artificial consciousness," one naturally starts with either a common-sense or psychological concept of consciousness--adding the prospect of artificial (engineered) production of the phenomenon. Whatever computer scientists tell us about how an artificial system functions, what feed-back loops there may be, how it adjusts to the environment or changes its parameters, etc., at some point all of this has got to match up with the notion we started with and our understanding of "consciousness." If there is lack of clarity in the concept, then it is important to work to improve or clarify the concept. This is one sort of thing that philosophers typically do.
Of course, the computer scientists must have or think they have an adequate concept of consciousness already at hand. But it is doubtful that they will easily convince the psychologists or the general, educated public of this without some considerable exposition of the concept of consciousness they employ or propose to employ.
What is true in the statement below, I think, is that if (and I emphasize the "if") we had an adequate understanding of "artificial consciousness" (and of consciousness), then we would also have found "that explainable relationships exist among the minds, the bodies, and the world, and ... a kind of unity." But on the other hand, if we still do not have a clear concept of consciousness, it remains in doubt that we understand proposals regarding, or theories of, artificial consciousness. I suppose that your quoted authors are right, and some sort of "theoretical synthesis" will be required.
The theory or concept of consciousness involved in the desired synthesis must be one that is recognized in the related psychological literature or at least in common sense discourse.
H.G. Callaway
---you wrote---
"A consensus appears to be emerging, assuming that the conscious mind and the functioning brain are two aspects of a complex system that interacts with the world." They regard consciousness as a complex phenomenon, which forces us to assume that explainable relationships exist among the minds, the bodies, and the world, and to expect a kind of unity. They claim that consciousness cannot be studied by experimental approaches in its entirety, and propose a theoretical synthesis.
I think artificial intelligence is the control, and artificial consciousness is a sense.
I am new to this discussion and haven't read every statement before.
So I just would like to express my feelings about the question, and to apologize to those whose statements I might repeat or contradict in an undue manner.
I think the term "artificial" does not add anything substantial to the question.
It expresses a technological flavor, a sort of passion for the development of the internet or whatever. I find "artificial intelligence" or "artificial consciousness" to be nearly a contradictio in terminis.
Why don't we get back to the classical humanities and important questions like "What is proper to humans?", "What is moral behaviour?", "What should I believe?", "How should one behave in society?" rather than prepare a future enslavement of ourselves to technological giants? (Sadly, George Orwell's 1984 is no longer in the domain of science fiction.)
There comes an age at which we should leave behind too much passion for technology and adopt responsible behaviour.
Artificial intelligence is a term which is back in favour now, covering machine learning, autonomy and robotics. Artificial consciousness seems to be out of favour; it covers what it would mean for a machine to exhibit consciousness and what utility that would have. I wrote a paper on artificial consciousness, but the reviewer was scathing about my use of the term. Igor Aleksander did pioneering work in the area of artificial consciousness in the 1990s, producing a neural network computer program called Magnus to illustrate the concepts. There has been a reasonably large literature since then, with David Gamez and James Reggia producing survey papers.
My own view is that the abilities to self-monitor, to make decisions and to communicate are the essential precursors to artificial consciousness, so that in theory a computer operating system that could monitor some of its own parameters, modify those parameters, manage computer processes and send messages to a human user could be considered artificially conscious. This is basically a self-patching operating system running in debug mode. I also accept that the richer the network of sensors that a program has, the more testable the presence of artificial consciousness is.
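The "monitor, modify, communicate" loop described above can be sketched in a few lines. This is only a toy simulation under my own assumptions (the class name, the load metric and the thresholds are all illustrative), not a real operating system component:

```python
# Minimal sketch of a "self-monitoring, self-modifying, communicating" loop,
# in the spirit of the self-patching operating system described above.
# All names and numeric thresholds are illustrative assumptions.

class SelfMonitor:
    def __init__(self, threshold=0.8):
        self.threshold = threshold   # a parameter the system can inspect and change
        self.messages = []           # channel to a (simulated) human user

    def read_load(self, samples):
        # "monitor some of its own parameters": average a load metric
        return sum(samples) / len(samples)

    def step(self, samples):
        load = self.read_load(samples)
        if load > self.threshold:
            # "modify those parameters": relax the threshold and report it
            self.threshold = min(1.0, self.threshold + 0.05)
            self.messages.append(
                f"load {load:.2f} high; threshold now {self.threshold:.2f}")
        else:
            self.messages.append(f"load {load:.2f} nominal")
        return load

m = SelfMonitor()
m.step([0.9, 0.95, 0.85])   # high load -> the system modifies its own parameter
m.step([0.2, 0.3, 0.1])     # nominal load -> the system only reports
print(m.messages)
```

Whether such a loop deserves the word "consciousness" is, of course, exactly the question under discussion; the sketch only shows how little machinery the stated precursors require.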
Andrew Powell , Dr Frankenstein tried to give life to an artificial creature. The creature then followed its natural tendencies and turned against its creator. Do you think a computer would have something like ethics and act to the benefit of its creator, for example when you have autonomously acting robots in war zones?
Not everything that motivates scientists is really good for humanity.
Dear Thomas Krantz
I strongly doubt that a computer in the form of a self-patching operating system would have ethics by default. Any computer program would need an initial programmer, although I suspect you could use machine learning to evolve computer code from training samples by playing a game against a computer component. You could include ethical constraints in any such initial program as modifications of an objective function that the program seeks to optimise.
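One way to read "ethical constraints as modifications of an objective function" is as a penalty term added to the task objective. A minimal sketch, with toy numbers of my own choosing (the `harm` function is a stand-in assumption, not a real model of ethics):

```python
# Sketch: add an "ethical" penalty to an objective the program optimises.
# The task reward alone is highest at x = 3, but values of x above 2 are
# treated as "harmful" and penalised, shifting the chosen action to x = 2.

def task_reward(x):
    return -(x - 3.0) ** 2          # unconstrained optimum at x = 3

def harm(x):
    return max(0.0, x - 2.0)        # stand-in ethical constraint: keep x <= 2

def objective(x, lam=10.0):
    # lam weights how strongly the ethical penalty overrides the task reward
    return task_reward(x) - lam * harm(x)

# crude grid search over candidate actions in [0, 5]
candidates = [i / 100.0 for i in range(0, 501)]
best = max(candidates, key=objective)
print(best)   # -> 2.0: the penalty pulls the chosen action back to x = 2
```

The design question this leaves open, as noted above, is who writes `harm` in the first place: the constraint is only as good as the initial programmer's encoding of it.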
Of course, I don't think anyone has actually built an artificially conscious computer, which would be the real test of whether a self-patching, decision making, communicating operating system would exhibit artificial consciousness. I guess whoever did build such an operating system would be a digital Dr Frankenstein.
Artificial intelligence techniques are not too different from other mathematical schemes. The word "consciousness" seems somewhat ambiguous.
Dear Andrew Powell and readers,
I am convinced that 'consciousness', in its usual sense, is not simply something obtained by letting animated, self-improving beings defend their existence in a sometimes hostile, sometimes friendly environment. I think it is not really productive to try to redefine it without rereading the whole history of the philosophy of mind. In my opinion it is really proper only to humans and expresses a 'pneuma', a creator's mark, an original gift. Otherwise you might use the word 'consciousness' for an automatism, as you would call a puppet a 'human being'.
A little story:
Descartes was also very interested in automatons and wondered whether humans were simply very complex machines. Descartes even possessed an automaton that followed and served him during his travels. It would be interesting to read about this again.
Another little story:
Descartes wondered whether a dog would feel pain like a human being when one cuts its tail, and his answer - as a good rationalist - was: the dog cries, but it does not suffer. Is suffering not also related to consciousness?
Dear Thomas Krantz
I am not claiming that humans can create an artificial consciousness, but it is worth trying to experiment to help humankind through new support systems (although currently science fiction). Artificial consciousness, if possible, may not be like human consciousness, in the same way that machine learning is inspired by but different from human learning.
Animals are conscious in my view, and I have to say that I respect Descartes as a mathematician and philosopher, but not his views on the distinction between humans and animals or, indeed, his actions.
Dear Colleagues,
Why is artificial consciousness indeed needed (to be created)? Just because there is a chance for it, or are there other reasons?
Best regards,
Imre Horvath
P.s. It is obvious that AI tools will extend human capabilities in the perceptive and cognitive domains, as traditional tools have done in the motor (physical) domain. But in this respect, does AI depend on AC?
Dear Imre Horvath ,
I believe artificially intelligent machines can do subordinate tasks and as such are likely to replace humans in these tasks; those humans would probably be forced to find a different occupation and means to live. This is the social impact of artificial intelligence, and it will take more than an artificial intelligence to find out how to regulate this social impact.
But in order not to see things completely negatively, I believe there are also uses for artificial intelligence in assisting medical staff in diagnostics or surgery. But would a doctor then not rely too much on what the machine tells him, and neglect the human intuition which is often not so bad?
I also believe that AI, coupled with huge databases, will create, if we are not very careful about ethical, moral, social, humanistic and teleological principles, and about collective and individual duties and rights, a huge 'monster' (so to say) that we will have difficulty getting under control again. And this monster will not only be able to play chess better than a human.
A citation that is useful to recall:
"The heart has its reasons, of which reason knows nothing." (Pascal)
A computer has no heart; it calculates.
Imre Horvath ,
I will expand on your quote " the very issue of 'perspective' is already problematic. Namely, which one?"
Let me start from the computing perspective and then move on from there, since that is where the genesis of "artificiality" originates.
In the development of AI there have been two major branches: symbolic AI and the connectionist approach. To give you an idea of how this is problematic, let us take the Chinese room argument as an example. This argument was made during the 1980s, when the dominant research was in symbolic AI (neural networks were still recovering from one of their winters). If you look at the field of AI now, its key research area is deep learning neural networks. If you pose the Chinese room argument from a connectionist perspective, you will find it much more complex, and in my opinion you will find some flaws in the analysis. In addition, backpropagation has been one of the main neural network training algorithms at all stages of connectionist history (even for deep learning), and its biological plausibility has not been widely accepted. This has driven a big wedge between biological simulations of the brain and artificial neural networks (backpropagation is viewed as an optimization algorithm, and there is no current widespread research to couple artificial neural networks with the Hodgkin-Huxley model, for example). Further, symbolic AI has its own problems as a candidate model of "artificial intelligence".
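For readers outside the field, backpropagation as an optimization algorithm (the sense contrasted above with biological plausibility) can be illustrated in a few lines. A minimal sketch, training a small network on XOR, the classic non-linearly-separable task; the network size, learning rate and iteration count are toy choices of mine:

```python
import math
import random

random.seed(0)

# XOR data: not linearly separable, so a hidden layer is required
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3  # hidden units (toy choice)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)
    return h, o

for _ in range(20000):
    x, y = random.choice(data)
    h, o = forward(x)
    # backpropagation: apply the chain rule to the squared error,
    # output layer first, then the hidden layer
    d_o = (o - y) * o * (1 - o)
    d_h = [d_o * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
    for j in range(H):
        W2[j] -= lr * d_o * h[j]
        W1[j][0] -= lr * d_h[j] * x[0]
        W1[j][1] -= lr * d_h[j] * x[1]
        b1[j] -= lr * d_h[j]
    b2 -= lr * d_o

preds = [forward(x)[1] for x, _ in data]
print([round(p, 2) for p in preds])  # should approach [0, 1, 1, 0]
```

Nothing in these updates corresponds to a known biological mechanism: errors flow backwards through the same weights used forwards, which is precisely why backpropagation is treated as an optimization device rather than a brain model. (Convergence from a random start is typical but not guaranteed for such a small network.)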
The above explanation can provide a view of what is happening between AI in computing and neurobiology. Let us now consider computing and quantum physics. Some recent interest has driven the adoption of algorithms known as quantum machine learning algorithms. These implementations can provide some useful models on which to explore common ground between the two fields (as opposed to just trying to put the two theories together).
Finally, you state: "Empirical (observational, experimental, simulational, etc.) approaches may not be able to properly and sufficiently address it due to its complexity and intangibility."
I will disagree with you on this one.
If you do not involve yourself with the minutiae of the field and do not have experience with simulations, you will not see some of what I think are the terrible errors that Searle committed (in part, he was a victim of his own time). While I completely welcome fellow philosophers to engage with me in solving problems from my field, in my opinion these isolated exercises will be off the mark by a long shot. The example system that I gave poses challenges to the analysis behind the Chinese room argument which, unless you run the simulation and know how the system works, you will not get right.
Let us engage the problem as an interdisciplinary one, not from each other's isolated fields.
Hope this helps
Revisiting the unfinished discussion on the fundamentals and possibility of consciousness ... The advocates of quantum theory have tried to move from the traditional triplet of 'mind', 'matter' and 'world' to a conceptualization (reasoning model) that contains an additional element, called events, in the middle of the triangle formed by the above triplet. The events are supposed to have both physical and experiential (psychological) characteristics, and to carry 'information' between the elements of the traditional triplet. Thereby, this model creates the need for efficacious conscious choices. The working mechanism of the quantum elements (quantum events) is associated with the actions of acquiring, transferring and utilizing 'information'. The quantum events provide some freedom of choice and may lead to some sort of conscious agent, which may manifest also in forms other than the human agent.
In the second half of the twentieth century, many scientists assumed that capturing the behavior of quantum particles may capture the information or knowledge of this behavior. If the point-like quantum entities are seen as quantum clouds, there is an opportunity to believe that they are spread in the world, and even in time, rather than localized, and that they may interact. The interaction establishes a causal dynamical connection between (i) the conscious choices of how to act, (ii) the (consciously) experienced increments in knowledge, and (iii) the physical actualizations of the neural correlates of the experienced increments in knowledge.
The role and task of the brain are to accept clues from the quantum clouds of the physical environment and to exercise a neurological activity that generates (neural) signals that will cause mental activities. This fundamental, dynamic, and uncertain process is believed to underlie the quantum interpretation of consciousness and a unified quantum theory of brain dynamics.
Quite a number of scientists and philosophers have based conjectures and explanatory theories on these main lines, among them David Bohm, John von Neumann, Roger Penrose, Stuart Hameroff, Hilary Putnam, Mari Jibu and Kunio Yasue.
I like the mind-challenging question posed by prof. Alfredo Pereira Junior very much. He asked: What is the place of a theory of consciousness in contemporary culture? Is it strictly scientific? Is it religious? Or is it a branch of philosophy? My humble question is: What is the most promising platform for a purposeful argumentation in the context of this particular thread?
Intelligence is a subset of an organism; consciousness is the summable effect of each set of such organisms.
Artificial consciousness is what you have when you have AIs in a network. Their adversarial programming is the first basis of group dynamics. Multi-adversarial setups, for example, can create looped competitive hierarchies. AIs that begin to operate in looped hierarchies have what I would call artificial consciousness, because consciousness is the awareness of (the awareness of the awareness of the awareness of...) the self in an environment (of selves).