Will it be possible to build an artificial consciousness similar to human consciousness in digitized artificial intelligence structures, if those structures digitally reproduce artificial neurons and the entire human central nervous system?
If an artificial intelligence that mapped human neurons were built, it would be a very advanced artificial intelligence. If artificial intelligence were built in such a way that all human neurons were reconstructed in digital technology, it would mean the possibility of building cybernetic structures capable of collecting and processing data with a much larger capacity than at present. However, if this were only the reproduction of simple neural structures, multiplied up to the number of neurons contained in the human organism, then only, or mainly, the quantitative factors that characterize the collection and processing of data in the human brain would be achieved, and not necessarily the qualitative ones. Without achieving in a cybernetic counterpart all of the qualitative variables typical of the human nervous system, it is doubtful that such a cybernetic structure could give rise to an artificial nervous system with a cybernetic consciousness equivalent to human consciousness.
Do you agree with me on the above matter?
In the context of the above issues, I am asking you the following question:
Will it be possible to build an artificial consciousness similar to human consciousness in digitized artificial intelligence structures, if those structures digitally reproduce artificial neurons and the entire human central nervous system?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
After the invention of "Sophia", everything is possible.
Hello,
Sophia is not a representation of artificial consciousness; it is an advanced chatbot. However, the proposed question is the dream target of all researchers in the field, each from his own point of view or technology.
Today there are two major targets for AI: the first is to reach a real AI, and the second is to find a bridge between artificial and biological systems. The main obstacle for both is the lack of information about how the brain and its neurons work. This leads to a gap in defining consciousness.
Thanks,
It's not necessary to mimic biological structures to have autonomous computers; those exist today already. "Cybernetic consciousness" doesn't mean anything.
I don't know for sure, but I believe that it should be possible. My article Artificial Consciousness and Security contains the arguments, but in essence I think that a program that can partially self-monitor, frame and compute the truth of propositions, and communicate would be a good candidate. I am not sure how strong the artificial consciousness would be, but to produce a more efficient and secure operating system the consciousness does not have to be like human consciousness.
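For what it is worth, here is a minimal toy sketch in Python of the three capabilities I have in mind: partial self-monitoring, framing and computing the truth of propositions, and communicating the result. It is purely illustrative; the class name and the proposition format are assumptions of this sketch, not code from the article.

import time

class MonitoredAgent:
    def __init__(self):
        # Record of the agent's own activity: partial self-monitoring.
        self.log = []

    def evaluate(self, predicate, arg):
        # Frame a proposition as predicate(arg) and compute its truth.
        result = bool(predicate(arg))
        self.log.append((time.time(), predicate.__name__, arg, result))
        return result

    def report(self):
        # Communicate what the agent has observed itself doing.
        return [f"{name}({arg}) = {res}" for _, name, arg, res in self.log]

def is_even(n):
    return n % 2 == 0

agent = MonitoredAgent()
agent.evaluate(is_even, 4)
print(agent.report())  # prints ['is_even(4) = True']

Whether such self-monitoring amounts to even weak consciousness is of course exactly what is at issue; the sketch only shows that the three ingredients are individually easy to mechanize.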
Dear all,
if I understand the question correctly, then a positive answer would require showing that one can implement consciousness within hardware by algorithms. There is an interesting argument presented by Roger Penrose that – if it goes through – would entail that this is in principle not possible. In a nutshell, Penrose argues that the processes responsible for human consciousness are non-computable. Hence, no representation within the aforementioned framework of current hardware and algorithms could represent the processes that underlie consciousness.
As has been pointed out, there is, however, an interesting possibility: suppose technological development allowed for the implementation of algorithmic procedures in the right causal substrate, i.e. in wetware; then one might inadvertently create a system that has properties we might identify as “being conscious”.
Best,
Sven Beecken
At least not now, as per the current understanding of AI. A consciousness is an intelligent agent that can constantly sense its surroundings and has its own thought process, with at least a minimal capacity for logical reasoning.
Current state-of-the-art AI and deep-learning-based methods are not capable of reasoning beyond classification.
Dear Sir,
You raise a very challenging, speculative question. This is obviously possible, because researchers keep narrowing the working knowledge gap between the brain and the neuron. And the cycle pattern => percepts => simulators => concepts is improving day by day with the aid of deep learning, which could lead toward building artificial consciousness.
We shouldn't focus only on imitating and replicating the human brain's neuron system; after all, life and the brain evolved from single-celled prokaryotes: just a membrane and some DNA or RNA capable of reproducing itself.
What is more important (considering you have a billion years to spend...), I think, is the environment in which life, and then intelligence, was created.
So, starting with a simple machine learning algorithm and putting it in a vast environment (e.g. the Internet), having it learn simple things but also letting it evolve, in data first and then in complexity: would it come to be alive in a billion years? Or, hopefully, with the speed of computers, in fewer years! (Maybe give it some sensors too, so it can use "real" environment data and "stimuli" as well.) Would, e.g., an autonomous antivirus with deep learning learn so much that it comes alive and starts to learn about itself? What would be the number of "synapses" it would have to have, and what kind of those, or of other links of similar use, would it have to make?
Would it have to be mortal as well? :)
It is not necessary to reproduce the entire human central nervous system proposed in your question. We are speaking about artificial consciousness, which is prosthetic consciousness. It is possible because consciousness is measured by specific reactions to inputs, and those reactions can be reproduced in essence. The outcome of such a bot would probably be worrying, because consciousness implies some level of selfishness and other 'attributes' we may not want to 'create' outside humans: the so-called ethics of AI creation.
Yes, I agree with the opinions above that it is unlikely, indeed rather impossible, to build artificial intelligence equipped with artificial human-like consciousness in the future. However, constantly improved artificial intelligence without artificial awareness will in the future be able to process much larger information resources and solve much more complex problems and tasks than human capabilities allow. This will enable, e.g., the building of autonomous, technologically advanced machines and robots that will be used to explore hardly accessible environments: robots sent to work in mines, caves and ocean depths, into areas affected by fires and other climate disasters, unmanned spacecraft sent on many years of interplanetary travel, etc.
What do you think about this topic?
What is your opinion on this topic?
Please reply
Experiences that are transmitted to the brain register in the mind and create consciousness, but there are others that the body learns which do not register in the mind, and they remain unconscious.
So, AI techniques such as ANNs, ML and deep learning have the capability to register experiences that belong to explicit learning. Hence, the speculation is that it is possible to build artificial intelligence equipped with artificial human-like consciousness in the future.
Dear Colleagues and Friends from RG,
Advanced Industry 4.0 data-processing technologies, including above all learning machines and artificial intelligence, are also used in attempts to build machines equipped with the ability to self-improve their performed tasks and programmed activities. Perhaps in the future there will be an attempt to build artificial awareness with which supercomputers will be equipped. In my opinion, consciousness can only be mathematically modeled in theory. Even if a mathematical model of artificial consciousness were built using ICT and Industry 4.0 (and, in the future, Industry 5.0) technologies, and on the basis of this model artificial intelligence were created in quantum computers installed, e.g., in autonomous robots or androids, it would still be only artificial intelligence, without emotions and without the essence of human consciousness.
An analysis of the nature of human thought is necessary to distinguish between human intelligence and the various artificial intelligence technologies being developed. In advanced computerized neural network systems, artificial intelligence systems are created whose task is to solve tasks consisting of complex sequences of many algorithms, together with self-learning systems for solving complex problems with the help of many algorithms. In these systems, humans will try to create a structure that solves complex analytical tasks and learns from its mistakes. The advantage of artificial intelligence systems over their creator, i.e. the human, lies in the much smaller number of mistakes made during repeated processes of solving complex tasks and learning new, increasingly complex formulas for applying specific algorithms.
However, after developing these artificial intelligence systems and applying them in many computerized fields of modern economies, what will be the next stage of technological progress in this field? Will the age of artificial consciousness come after the age of artificial intelligence? In my opinion, this is impossible. Despite the rapid progress in the development and creation of new generations of artificial intelligence, it will never be possible to create an artificial entity that is the equivalent of human intelligence, taking into account human emotional intelligence and the specifics of human thoughts, human consciousness and human feelings. Therefore, the thesis can be formulated that in some respects artificial intelligence will probably never match human intelligence. The machine will be able to solve very complex problems and tasks, but it will not know why it does so, who it is, or in what world it operates; it will not be able to realize its own existence in the Universe, etc.
Machines in the form of autonomous androids can perform physically difficult work that a human is not able to perform. Quantum computers equipped with Big Data Analytics will be able to solve analytical tasks many times faster than the most powerful human minds. However, they will not be aware of their existence. Human awareness of existence evolved over millions of years of evolution of the human mind, and of the human ancestors that preceded human beings, i.e. the human-like primates. Human consciousness was created in a process of evolution lasting millions of years, during which a complex biological organism continuously interacted with its environment.
While artificial intelligence is based on neural network systems that mimic the human central nervous system only in a simplified way and to a small extent, with a computational power for specific elementary tasks that exceeds the analytical abilities of a human being, the level of complexity of a living mammalian organism is still many times higher than that of the most advanced computers.
In view of the above, the following question arises: will artificial neural structures become such advanced artificial intelligence that artificial consciousness arises? Theoretically, one can consider such projects; however, to verify this realistically, one would need to create such artificial neural structures. Research on the human brain shows that it is a very complex and not fully understood neural structure. The brain has various centers and areas that manage the functioning of specific organs and processes of the human body. In addition, consciousness is also complex and consists of elements of emotional, abstract and creative intelligence, etc., which likewise function in separate sectors of the human brain.
Do you agree with me on the above matter?
In view of the above, other important questions arise in this area:
- Will research on the human brain and progress in the construction of ever more complex structures of artificial intelligence lead to synergies in the development of these fields of science?
- Will the development of these fields of science lead to the integration of research into the analysis of the functioning of the human brain and the construction of increasingly complex structures of artificial intelligence equipped with elements of emotional and creative intelligence, etc.?
- Besides, can the improvement of artificial intelligence lead to the creation of artificial emotional intelligence and, consequently, to autonomous robots that will be sensitive to specific changes in factors of the surrounding environment?
- Will specific changes in the surrounding environment trigger reactions of advanced artificial emotional intelligence, i.e. the activation of pre-programmed algorithms of implemented activities and of learning processes, as part of improving machine learning?
- As a consequence, is it possible to create an artificial consciousness that will function with the structure of an artificial, electronic neural network constructed in such a way as to reflect the structure of the human brain?
- Will the structures of advanced artificial emotional intelligence built in such a way be able to improve themselves on the basis of acquired knowledge, e.g. from external online databases?
- Will artificial neural structures become such advanced artificial intelligence that artificial consciousness will arise?
- Will it be possible to build artificial emotional intelligence?
What do you think about this topic?
What is your opinion on this topic?
Please reply
I invite you to discussion
Thank you very much
Best wishes
Dariusz Prokopowicz
Dariusz hello,
As a physician and an anatomist, I think that artificial intelligence can be created, but it will not be a strict imitation of human intelligence.
The problem is not the imitation of structure so much as it is the inability to reproduce the complex effects (e.g. energy harnessing, saving and distribution; cooling; protection; repair) of the interaction between human hardware (esp. integumentary, skeletal, muscular, digestive, endocrine, nervous systems) and human software (bio-chemistry, symbiotic live matter exchanges - e.g. bacteria, viruses, molds) on one side, and the resulting complex with the external environment, on another.
These interactions are the basis of not only human physical survival and development, but also of learning and developing one's mental processes.
If we push for an anthropomorphic AI, it will be almost easier to create a cybernetic organism or a robot body with artificial intelligence, rather than a physical-body-disassociated AI.
Ironically, the imitation of live matter has already been experimented on in other types of machines (e.g. airplanes (birds), ships/ submarines (water fowl, fish)), but at what cost? Energy and cost inefficiency, lower structural dependability.
Yet, AI is doable in a different form, if one considers why do we want to create an AI - to assist humans.
As to emotional intelligence, I think not.
Ethics yes.
Emotional intelligence requires endocrinological and other bio-chemical input, which a machine would not have access to or can process in an organic manner.
Neither does it need to. The strength of an AI would lie in its freedom from ageing and sickness, its endurance (if powered), and its emotionless, objective analysis of complex, dynamic data and accurate prediction (as long as the AI does not harm humans, following a modified version of Isaac Asimov's three laws of robotics).
:)
Living beings are the "artificial intelligence" created by nature over billions of years, it is a very complex and evolutionary process.
Progress in building neural networks and related technologies has been taking place for many years. Learning machines, artificial intelligence and other advanced Industry 4.0 technologies have become a source of futurological considerations on the possibility of creating artificial awareness in the future. Perhaps it will become technologically possible to build highly complex macroprocessor IT structures (consisting of thousands of microprocessors). However, even if, in new-generation neural network computing structures of high computing power, highly advanced artificial intelligence is equipped with knowledge comparable to or greater than the knowledge resources accumulated in the human brain, the simulation of consciousness created in these structures will be, at most, only artificial consciousness, i.e. only a simulation, and not something that could be called a cybernetically created human consciousness. May we (humanity) have enough time to verify this thesis.
What is your opinion on this topic?
Please reply.
I invite you to discussion.
Thank you very much for participating in the discussion.
Regards,
Dariusz Prokopowicz
It is possible, to some extent! But it will not reach the perfection of human consciousness.
Perhaps as a result of future technological progress it will be possible to build a new generation of artificial intelligence that will be equipped with knowledge comparable to the resources of knowledge accumulated in the human brain. Perhaps the new generation of artificial intelligence will be equipped in the future with something that is currently referred to as artificially created consciousness. If that happened, the consciousness simulation created in these structures would be at most only artificial consciousness, i.e. only a simulation, and it could not be called cybernetically created human consciousness.
What is your opinion on this topic?
Please reply,
I invite you to discuss
Thank you so much for participating in the discussion,
Greetings,
Dariusz Prokopowicz
"There is a real danger that computers will develop intelligence and take over. We urgently need to develop direct connections to the brain so that computers can add to human intelligence rather than be in opposition."
Stephen Hawking
The above words of Stephen Hawking confirm that he was also a great visionary. When, and in what years, might the development of artificial intelligence get out of control? How should technological progress in this field be conducted so that the risk of constantly improved artificial intelligence technology slipping out of human control remains small?
What is your opinion on this topic?
Please reply,
I invite you to discuss
Thank you so much for participating in the discussion,
Greetings,
Dariusz Prokopowicz
Yes, it's possible to build artificial consciousness in the digitized structures of artificial intelligence.
Dear Dariusz Prokopowicz, we still do not know exactly why certain objects of the world are endowed with consciousness (brains yes, stones no). There are, of course, different hypotheses. If consciousness derives only from the complexity of functional relationships between components of a system, then a machine of sufficient complexity could be endowed with artificial consciousness (not only artificial intelligence). If, on the other hand, the specific matter of brains is also important for consciousness, then it will be necessary to use organic matter to produce machines with artificial consciousness. This is the classic functionalism vs. materialism debate on the nature of consciousness. In one case, consciousness derives from the software (functionalist hypothesis), while in the other it derives from the hardware (materialist hypothesis).
In both cases, artificial consciousness could be achieved. Of course, if a machine is conscious (and can, for example, feel pain), this will raise important ethical issues. It will be important to think carefully about these issues before building such machines.
"We will be able to entrust computers with monitoring tasks and when specific circumstances arise, the computer will take specific actions and will inform us afterwards." Steve Jobs
Steve Jobs said the above words many years ago, but not only are they still relevant; in the future they may prove even more relevant and visionary.
What is your opinion on this topic?
Please reply.
I invite you to discussion and scientific cooperation.
Thank you very much for participating in the discussion.
Best wishes.
Dariusz Prokopowicz
https://www.scienceandnonduality.com/article/true-artificial-consciousness-is-it-possible
Decision-making by artificial intelligence is based on the set of values coded by its programmer. Generally, machines are built for specific purposes, and intelligent machines do not actually require consciousness to be efficient. But if one day we build fully conscious machines, we must consider that their behaviour will depend not only on the programmed set of values, but also on a series of different factors, like self-preservation.
Dear Dariusz Prokopowicz
the following article might be of interest to you: “Universal Gödel Statements and the Computability of Intelligence”.
Essentially, the author provides a detailed argument (derived from the infamous Penrose-Lucas argument) for the non-computability of the mind. Thus, if the argument holds, this would mean that nothing like the human mind can be implemented on (current) computers, because they just are implementations of Turing machines.
Best,
Sven Beecken
https://arxiv.org/pdf/2001.07592.pdf
Dear Sven Beecken ,
Arguments for the non-computability of the mind have been around for a long time, usually arguing that the human mind is capable of framing the notion of an arbitrary formal system, whereas a computer is a fixed formal system, which will not be able to deduce a coding of the consistency of that formal system (Gödel sentences). But this is clearly not a sensible claim. You might argue that the human mind is a universal Turing machine (a universally programmable finite computer), but a Turing machine has an unlimited finite amount of memory and processing capability, unlike real computers or humans.
What Goedel showed was that for subsystems of number theory and richer, truth in a deductive theory cannot be completely axiomatised by finitely many axioms or axiom schemas. It does not really matter whether you are a computer or a human, that fact remains the same. For practical computing, you do not need a complete axiomatisation of anything, but a few slow growing recursive functions (like addition and multiplication) that can take in sensor data and process it, making judgements about it.
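For readers who want the formal statement behind this paragraph, one common formulation (my own wording, not a quotation) is: for any consistent, recursively axiomatizable theory \( T \supseteq \mathrm{PRA} \), diagonalization yields a sentence \( G_T \) with

\[ T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T \]

(the second half via Rosser's refinement), and \( G_T \) is true in the standard model whenever \( T \) is consistent.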
In summary I think that the computational powers of humans (and real computers) are far weaker than that of a universal Turing machine. We can of course define richer structures (the natural numbers and pure sets) using second-order theories or via a hierarchy of semantic definitions (like Goedel's constructible universe of sets), but a computer could follow the same steps (being represented as inferences in number theory or finite set theory).
Underlying arguments about the non-computability of intelligence are views that computers are not creative (being glorified calculating machines). But I think in light of advances in machine learning (reinforcement learning in particular) that view is highly questionable.
Best wishes,
Andrew
As a rider to the answer above, I think you should be able to implement a basic self-monitoring and decision making programme without special hardware, but computers may be made faster or have lower power consumption if they use techniques from physics (not only quantum physics), biology and analogue computing.
This thread question is “Will it be possible to build artificial consciousness in the digitized structures of artificial intelligence?”
And such an assertion as, say,
“…the following article might be of interest to you “Universal Gödel Statements and the Computability of Intelligence”. …Thus, if the argument holds, this would mean that nothing like the human mind can be implemented on (current) computers, because they just are implementations of Turing machines.….”
- has only a rather indirect relation to any real answer to this question, for at least a few reasons.
First of all, to write the above it is necessary to understand/define: what is “Intelligence”? what is “mind”? and what is a “computer”?
In the last case it is principally necessary to understand what the utmost fundamental for humans phenomenon/notion “Matter” is, because computer hardware is purely material; and, of course,
– what is one of the utmost fundamental for humans now phenomenon/notion “Consciousness”?
At that, both of these utmost fundamental and basic phenomena/notions are, in mainstream philosophy, and so further in other sciences, fundamentally transcendent/uncertain/irrational,
- just therefore in the mainstream there exist innumerable different doctrines, sub-doctrines, etc., which all are based on corresponding principally non-provable, non-disprovable, and different/opposite postulates about what Matter and Consciousness are,
- and so, say, the questions above, in the framework of the mainstream, quite logically can in principle have nothing other than innumerable non-provable, non-disprovable, and so principally rationally non-grounded, different “answers”.
Genuine philosophy is possible only in the framework of the Shevchenko-Tokarevsky “The Information as Absolute” conception https://www.researchgate.net/publication/260930711_the_Information_as_Absolute DOI 10.5281/zenodo.268904,
- where it is rigorously proven that there exist nothing else than some informational patterns/systems of the patterns that are elements of the absolutely fundamental and absolutely infinite “Information” Set; including in the conception the utmost fundamental phenomena/notions above are scientifically defined: that are nothing else than some informational systems;
- which are made from the one stuff “Information”, and so are based in the depth on the same absolutely fundamental Rules, Possibilities, Quantities, etc. [members of the "Logos” set, see the link above], and so both are some “computer+program” systems;
- however the “hardware”, and “software”, including basic modules that organize and control the “computers’” operation, which, again, principally is nothing else than creation of, and exchange by, some informational patterns/systems,
- are principally different, and so, say, the “consciousness on Earth”, including this consciousness’s version, “homo-two-sapiens consciousness”, is a fundamentally non-material system, which principally cannot “emerge” from any material structure.
The utmost fundamental difference between Matter and consciousness is that
- Matter is an extremely rigorously organized, simple logical system, which exists and constantly changes on the basis of a rather small set of rigorously defined laws/links/constants, and is thus a closed system in the Set. Just therefore Matter has existed stably for, it seems, nearly 14 billion years; whereas
- consciousness is open in the Set system, which is able, in principle, to obtain and logically to analyze, any information about any element of the Set; correspondingly, e.g. , Matter exists and changes in the absolute spacetime with [5]4 space and time dimensions [more see the SS&VT informational physical model https://www.researchgate.net/publication/273777630_The_Informational_Conception_and_Basic_Physics DOI 10.5281/zenodo.16494]
- when the consciousness’s on Earth spacetime has infinite “number” of the space dimensions, and only partially intersects with Matter’s spacetime.
It is another matter that consciousness has a limited ability to obtain and analyze information, and so really every consciousness operates in its own spacetime with a limited number of space dimensions; however, again, that isn’t principal in this case.
And just this ability to elaborate arbitrary information – in contrast to Matter, where material objects elaborate only rigorously determined information – and the actualization of this ability in humans’ lives, is “Intelligence”,
- which, though, is possible for an additional reason: though most living beings, including, say, bacteria, process information about the environment outside Matter, i.e. “abstractly”, and so have some intelligence, the beings have rather small capability for that, qualitatively less than the information-processing capability of the most developed version of the “consciousness on Earth”, human consciousness, which does that also in a well-developed “mind mode” of operation.
So, returning to this thread question: again, any material structure, including an AI machine that has passed the Turing test, fundamentally cannot be “intelligent”; but that is only because of the fundamental difference in the logical organization of the informational systems “Matter” and “Consciousness”, and has no relation to, say, the Gödel incompleteness theorems.
See also SS posts in the threads
https://www.researchgate.net/post/What_is_the_next_paradigm_shift_in_respect_to_neuroscience ,
https://www.researchgate.net/post/Can_we_mathematically_model_consciousness#view=5ebbd1cef29a0c2fa845599b ;
- and, of course, the SS&VT functional model of the consciousness in at least two SS comments Dec 9, 2019 and Dec 10, 2019 in
https://www.researchgate.net/publication/329539892_The_Information_as_Absolute_conception_the_consciousness/comments?focusedCommentId=5ded35bacfe4a777d4f8a648&sldffc=0 .
Cheers
Dear Andrew Powell
While I agree that the idea that the human mind-brain requires a non-computable component has been around for some time, I’m not so certain that I agree with the way you frame the argument - at least not for the way Penrose presents it. It is this argument that I’m most familiar with, so I refer to Penrose instead of the paper linked.
The basic idea is that the human mind implements functions, representable by Turing machines (this does not entail that the human mind-brain is a Turing machine). This view, loosely called “functionalism”, is not very controversial.
What Penrose shows, using a simplified version of Gödel’s theorem, is that a human mathematician, using some set of theorem-proving procedures R, can establish the truth of a statement which R itself cannot prove. And so, the human mathematician can prove something that no computer will ever be able to.
Now, the crucial point is: If the cognitive processes involve a non-computable component, then no current computer will be able to fully simulate these processes.
That’s the gist of the argument as I understand it. It’s certainly not foolproof in its full version (and absolutely not in this brief exposition), but it requires none of the points you raise (which I largely agree with - except for the implications of machine learning, but for different reasons).
Best,
Sven
Dear Sven Beecken ,
I think your exposition of the key idea of the supposed limits of a computer is correct, but I don't think the inference is correct that there are truths that can be recognised by a human and not by a computer. In essence the reason is that if we restrict ourselves to total programs, such programs will include all formal proofs and can be input into a universal Turing machine as programs, and an output will eventually be produced. Thus the universal Turing machine is no worse than a human. The total functions these programs represent form a hierarchy, and there is no total function which enumerates all of the functions in the hierarchy (the undecidability of the halting problem). But the diagonal function, which can be used to generate new total functions given a bounded part of the hierarchy, is as computable as any other function. It only results in a contradiction when it is applied to the entire hierarchy of total computable functions. (You can of course allow more powerful notions of computation, in which case the diagonal function yields a new "jump" function which is higher-order/oracle computable but not Turing computable.)
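To make the diagonal point concrete, here is a small Python toy of my own (the particular family of functions is an arbitrary illustration): diagonalising over a bounded family of total functions is itself total and computable, and merely produces a function outside that family, with no contradiction.

# An arbitrary, effectively given family of total functions.
family = [lambda n: 0,
          lambda n: n + 1,
          lambda n: n * n]

def diag(n):
    # Total and computable whenever every member of the family is total.
    return family[n](n) + 1 if n < len(family) else 0

for i in range(len(family)):
    # diag differs from family[i] at input i, so diag is not in the family.
    assert diag(i) != family[i](i)

Only when the family is taken to be all total computable functions does the same construction turn into a contradiction, which is the point made above.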
You can probably tell I was never convinced by Penrose's argument.
Best wishes,
Andrew
PS I agree that functionalism is not particularly controversial.
Dear Andrew Powell
You are certainly not alone in your skepticism with regard to the Penrose argument. I recently had a discussion about this on a different thread with Joachim Lipski (link below). I took the advocatus diaboli position and I shall continue to do so. My motivation is that I’m rather puzzled by the argument, in particular by the mathematical part of it. Penrose is a major mathematician and it seems unlikely to me that this part contains holes (but this is certainly not good enough). Before I address your objections, let me state a rough (but still very superficial) reconstruction of the argument.
We start with some set of rules of inference A, and we assume that this set is sound. Then we take some computational procedure Cq(n) and we assume that this procedure is enumerable.
We think of A as a computational procedure that given a pair of numbers tries to assert whether Cq(n) does terminate or not. So, if A(q,n) does terminate, Cq(n) does not.
Now, we set q = n and we get: if A(n,n) does terminate, Cn(n) does not. Since the Cq(n) are enumerable (a list of all computations performed on n) and since A is a computational procedure performed on n, there is some k such that A(n,n) = Ck(n). Now, we examine the case where n = k and we get A(k,k) = Ck(k). This entails that if A(k,k) does terminate, Ck(k) does not. By the identity we get: if Ck(k) does terminate, Ck(k) does not.
And so, we are forced to conclude that Ck(k) does not stop. But because of the identity, A(k,k) cannot stop either. And so, we know that our set of rules will never be able to prove that Ck(k) does not stop.
Now, we have assumed A to be sound; the crucial step is to assume that A does contain all rules of inference available to a human mathematician (note that besides soundness, we have made no other assumption about A, and thus we have shown the result for an arbitrary set of rules and so for all rules).
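In compact notation, here is my own rendering of the steps above (with ↓ for “terminates” and ↑ for “does not terminate”):

\[
\begin{aligned}
&\text{(soundness)} && A(q,n)\downarrow \;\Rightarrow\; C_q(n)\uparrow\\
&\text{(set } q = n) && A(n,n)\downarrow \;\Rightarrow\; C_n(n)\uparrow\\
&\text{(enumeration)} && \exists k\;\forall n:\; A(n,n) = C_k(n)\\
&\text{(set } n = k) && A(k,k) = C_k(k), \text{ so } C_k(k)\downarrow \Rightarrow C_k(k)\uparrow\\
&\text{(conclusion)} && C_k(k)\uparrow \text{ and } A(k,k)\uparrow.
\end{aligned}
\]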
As far as I understand your objection (and it may very well be the case that I don’t), I think that your claim that:
“In essence the reason is that if we restrict ourselves to total programs, such programs will include all formal proofs and can be input into a universal Turing machine as programs, and an output will eventually be produced. Thus the universal Turing machine is no worse than a human.”
is contradicted by the counterexample Penrose constructs, assuming that A can generate all computational proofs.
Best,
Sven
https://www.researchgate.net/post/What_is_consciousness_its_physical_processes
Dear Sven,
I do not dispute that Roger Penrose is a major mathematician, but I do think that the argument is incorrect, in part because it is only partially a mathematical argument. The problem with it is that the argument assumes that a human can produce a predicate that 1) is computable and 2) solves the halting problem (assuming that C ranges over all programs). This is what Turing showed to be false. If we drop 1), then humans (and computers - by coding syntax) can define predicates which can decide the halting problem. But, as far as I know, there is no predicate which humans have access to that computers do not.
It is possible to argue that proof is an informal notion, and humans (mathematicians at any rate) know a proof when they see one. But while this is true in the sense that humans can shift from one formal language to another, the awkward truth is that as far as computability is concerned, there is a formal language, namely the language of first-order Peano arithmetic, that suffices to represent all computable functions (the set of halting programs is a sigma(0,1) set involving existential quantifiers over elementarily decidable predicates, a very famous result of S.C. Kleene from the 1930s, and totality is definable at pi(0,2), using the same construction).
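In symbols, the complexity claims just mentioned read as follows (my shorthand, with T Kleene's primitive recursive T-predicate):

\[ \{(e,n) : \exists s\, T(e,n,s)\} \in \Sigma^0_1, \qquad \{e : \forall n\,\exists s\, T(e,n,s)\} \in \Pi^0_2, \]

i.e. halting is one existential quantifier over a decidable predicate, and totality places one universal quantifier in front.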
(In my previous post I was making the point that if a set of total programs is constrained in complexity, say by being represented in typed system of the lambda calculus, then the diagonal function C(k,k) will be total but more complex than any program/function it is diagonalising over. I think I have that correct. The reference here is to works on recursive hierarchies, such as by W. Pohlers or S. Wainer.)
Best wishes,
Andrew
Dear Andrew,
I do agree that Penrose’s argument contains parts that are – at least prima facie – not mathematical. I also think that it would be interesting to explore this further. For the time being, however, I do think it is important to get a grip on the argument as Penrose presents it. And I’m sorry to say, nothing you have said so far even addresses it. Penrose explicitly does not think that the halting problem is decidable. (And you are welcome to show where it is required in my short synopsis, or, even better, in Penrose’s own writings.)
Furthermore, the argument holds for an arbitrary set of rules, so no switch of calculi will get you out of this (at least not for any calculus as expressive as the arithmetic Q).
Best,
Sven
Not fully digitized as that would rely on the assumption that electrical resistance doesn't in any way impede or contain the AI structure...
The last SS post in the thread https://www.researchgate.net/post/If_every_neuron_in_a_human_was_accurately_simulated_in_a_computer_would_it_result_in_human_consciousness
- is relevant to this thread question.
And a few last SS posts in https://www.researchgate.net/post/Can_rational_thought_exist_without_language#view=5ef5e703d7bdca76c90ab935
- as well, though.
Cheers
Dear Sven Beecken ,
I am sorry I somehow missed your post of 20 May. I think it is my turn to be a little confused (by Penrose's argument, I think). I really cannot see how it is possible that there exist any special attributes that humans have that cannot be represented by the notion of a Turing machine. It is true that it is always possible to diagonalise out of a fixed Turing machine (i.e. one which has a fixed program), and therefore no fixed Turing machine can represent informal reasoning. But I really do not see why a universal Turing machine (that can simulate any Turing machine) does not do precisely the same job as a human.
My point in the last post was primarily about the fact that axiom systems and consistent systems of typed lambda calculus are ways of extending knowledge as a system of ever richer programs that will halt for all inputs when they are input to a universal Turing machine. The origin of these programs could be a human mathematician or an automated proof assistant, and this idea is at the heart of Turing's own Ph.D. thesis.
Best wishes,
Andrew
Dear Andrew Powell
I apologize for the belated reply. I was busy and also I wanted to think about how to proceed. You wrote:
“no fixed Turing machine can represent informal reasoning. But I really do not see why a universal Turing machine (that can simulate any Turing machine) does not do precisely the same job as a human.”
I think it is important to see that Penrose’s argument does not rely on informal reasoning. It does, however, rely on the meaning of the rules and it also relies on the recognition of truth of the propositions within the argument.
Furthermore, the argument actually depends on the usage of universal Turing machines. (Here is a footnote from Shadows of the Mind, giving the reference to the technical discussion: “In fact this is achieved precisely by the action of a universal Turing machine on the pair of numbers q, n; see Appendix A and ENM, pp. 51-7.”)
Now, at this point, I think I have to ask: Did you go through the argument? The argument really depends on you seeing the truth of the propositions and the validity of the rules.
This is also the aspect where I’m a bit puzzled. Penrose relies on the mathematician understanding the rules. So, there must be a grasp of meaning involved. This is in some sense obvious, in particular when taking into account that he is a mathematician. On the other hand, most non-mathematicians I have met may be able to follow the rules, but without having any grasp of their meaning (in the sense of Searle’s Chinese room). So, in some weird sense, there is an aspect of the argument that has (at least for me) a non-mathematical flavour to it. Meaning is mostly discussed in the context of philosophy, not so much in the math and logic courses I had.
Hope that helps.
Best,
Sven
Dear Sven Beecken ,
Many thanks for your reply. I went through the argument as you presented it, and remember the argument from years ago. I have never understood Penrose's argument. What I meant by "informal reasoning" is the reasoning humans use. No human thinks in terms of first-order logic or any other formal logic for that matter (unless they are a specialist in proof systems). I am fairly sure that diagonal arguments give either new objects/functions or show that a function/object does not have the desired property. You certainly get contradictions when you use a universal Turing machine to determine the properties of a universal Turing machine, the problem of when a universal Turing machine halts being a good example. But the real question is whether humans are any better than universal Turing machines. At this point I see no difference between a universal Turing machine and a human. Humans cannot decide the halting problem either. For all practical purposes, given a particular program (number) humans and computers will agree on the same answers if the program returns an answer at all (for given input). Alan Turing himself seems to have been a strong functionalist. I don't have a strong opinion on functionalism, other than to commend its testability, but I do think that Turing has been proved correct (so far) about functionalism with regard to computability, which is no more than Church's thesis.
Best wishes,
Andrew
Dear Andrew Powell
Thank you for your answer. I think I’m getting a better grip on your objections (or so I hope). I shall assume that the technical aspects of Penrose’s argument are sound (as far as I can tell, and according to Penrose, it has nothing to do with the halting problem or the other issues you mention; it is just a technical argument).
If your objection is that, for all intents and purposes, Penrose’s argument is too far removed from what humans are doing, then I - and I think Penrose too - would agree. That is, I think, the reason why he gives other examples, such as chess positions or tiling problems, that are, according to him, closer to home, so to speak. I have to admit that I haven’t thought the other examples through, so I can’t say anything about their validity.
Furthermore, I do agree, and again, I think so does Penrose, that most of what is going on in the mind-brain is best captured by some version of functionalism. However, if we want to understand the principles that determine our cognitive capacities, then we are looking for formulations of laws. Laws are typically expressed in terms of universally quantified statements. It is here that Penrose’s argument unfolds its power. If the argument holds (again, a technical issue), then no such law can be formulated in terms of Turing computation, since there is one instance that falsifies any such formulation.
Again, I think that anything we can say about the mind-brain today is subject to this problem, primarily because we are barely at the level of generalizations, a notion that allows for exceptions; but Penrose is not interested in this level, he is interested in actual understanding.
Does this address your concerns?
Best,
Sven
Dear Sven Beecken ,
What makes Penrose believe that humans are able to access a set of inference rules A that a computer cannot? I think the answer is that meta-computationally we (humans) are able to formulate a diagonal argument and apply it to a set of computational rules (or a formal deductive system). But my primary objection is: I do not see that a human is different from a computer with regard to diagonal arguments. In both cases, if the diagonal argument does not yield a contradiction, we (a human or a computer) could construct a sequence of richer formal systems (or sets of programs) that effectively encode the consistency of the formal systems (programs) that we have created so far. If the diagonal argument does yield a contradiction, then both the human and the computer will conclude that one of the assumptions that led to the contradiction must be incorrect.
I will try to draw out these arguments a bit more. We know that a Turing machine, given program numbers representing the axioms of some fragment of arithmetic or richer, say PRA = primitive recursive arithmetic, will not be able to compute an arithmetical statement encoding "for all computations from program numbers p representing axioms in PRA and instructions representing the inference rules of PRA, p is not a computation of 0=1", or its negation (unless PRA is inconsistent, which is false since we understand the computable functions of PRA very well). But if a meta formal system were able to formulate diagonal arguments, then it could simply add such a universally quantified arithmetical proposition to its axioms (and program numbers). As I said before, this approach is due to Turing himself, in his Ph.D. thesis under Alonzo Church.
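Schematically, the Turing-style progression alluded to here is (standard notation, my summary):

\[ T_0 = \mathrm{PRA}, \qquad T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha \ \ (\lambda \text{ a limit ordinal}), \]

where each \( \mathrm{Con}(T_\alpha) \) is the arithmetized consistency statement that \( T_\alpha \) itself cannot prove.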
Contradictions arise where the set you are diagonalising out of might have the same property as its members. So let P(n,x) hold when the n-th problem in a list of all computationally decidable problems accepts input x. Then NOT P(n,n) is not decidable: if it were, we would have NOT P(n,n) = P(m,n) for some m, which yields a contradiction when n := m. You can therefore conclude that the property "is computationally decidable" is not computationally decidable. The same argument applies to "is a total computable function" and "is computably enumerable", for example.
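Written out, the diagonal step above is (my rendering):

\[ \lnot P(n,n) \text{ decidable} \;\Rightarrow\; \exists m\,\forall n:\ \lnot P(n,n) \leftrightarrow P(m,n) \;\Rightarrow\; \lnot P(m,m) \leftrightarrow P(m,m), \]

a contradiction; so “is computationally decidable” is not itself computationally decidable.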
But the human and the computer would agree on the conclusions. I do not dispute that computability theory requires a Turing machine that can take input from a number of programs, which is a meta-theoretic construct. But the only position that becomes untenable is the belief that a human is a Turing machine with a fixed program. But who argues that? No, humans are like Turing machines computationally with a choice of programs and inputs, and we are probably a little bit like universal Turing machines which emulate the behaviour of any other Turing machine.
I hope I have illustrated my objection to Penrose's argument.
Best wishes,
Andrew
...The question of whether machines can have consciousness is not new, with proponents of strong artificial intelligence (strong AI) and weak AI having exchanged philosophical arguments for a considerable period of time. John R. Searle, albeit being critical toward strong AI, characterized strong AI as assuming that “…the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have cognitive states” (Searle, 1980, p. 417). In contrast, weak AI assumes that machines do not have consciousness, mind and sentience but only simulate thought and understanding... Hildt, E. (2019). Artificial Intelligence: Does Consciousness Matter?. Frontiers in psychology, 10, 1535.
… According to the Artificial Consciousness Hypothesis: An artificial intelligence can be said to possess consciousness, if it displays both the psychological and physiological symptoms of an existential crisis. Thus it is considered an artificially conscious intelligence, or ACI … Charles, J. (2019). Discerning Artificial Consciousness from Artificial Intelligence-A Thought Experiment.
Extrapolating to artificial consciousness, it may be possible to recreate consciousness in minimally intelligent artificial systems. Furthermore, it is likely that we will create minimally intelligent conscious systems ... Crosby, M. (2019). Why Artificial Consciousness Matters. In AAAI Spring Symposium: Towards Conscious AI Systems.
… Consciousness associated with questioning the reality is similar to the thoughts of Rene Descartes … It is seen that modern artificial intelligence studies are a reflection of these historic imaginations … drive that lies behind this may be to create an artificially conscious creature … Oleksiewicz, I., & Civelek, M. E. (2019). From Artificial Intelligence To Artificial Consciousness: Possible Legal Bases For The Human-Robot Relationships In The Future.
Hello everyone, thank you for participating in the discussion.
Attempts to build artificial awareness in the future will become an important element of the next generation of technological development of artificial intelligence.
Do you agree with my thesis?
Best regards,
Dariusz Prokopowicz
Considering the fact that developments in the field of artificial intelligence are advancing rapidly and that humanoid robots with a wide range of facial expressions now exist, the question of the possibility of artificial consciousness is highly relevant. So, if a humanlike robot looks happy or pretends to be happy, is he/she/it really happy inside, or will he/she/it ever be able to make sense of the meaning of happiness?
With the initiator of this discussion I share the opinion that it is by no means sufficient to replicate the functioning of individual neurons using digital technologies. Such a view (“neurons can be replaced by digital components”) is based on the assumption that consciousness is ultimately the result of computational processes, i.e., the brain is thought of as a complex processor and the secret of consciousness is attributed to the unique characteristics of the brain’s neural architecture. This “mechanistic” view is neither theoretically plausible nor supported by empirical evidence.
I have been dealing with the fundamental mechanism underlying conscious processes for years. From my point of view, there is one approach to the understanding of consciousness that looks quite promising so far, even if there remains a lot of work to be done to develop this approach into a full-fledged theory. This approach is based on the notion that the organizational characteristics of the neural correlates of consciousness are indicative of the brain’s interaction with and modulation of a ubiquitous, inherently sentient field, resulting in the conclusion that coherently oscillating neural cell assemblies acquire phenomenal properties (consciousness) by tapping into the universal pool of phenomenal nuances immanent in this field. My works aim at showing that this approach circumvents the problems of other approaches, that the omnipresent field can be identified with the zero-point field, and that the proposed modulation mechanism is supported by a substantial body of empirical evidence.
From this perspective, the brain functions as a very flexible and adaptive resonator that couples to the modes of the zero-point field. To create artificial consciousness, i.e., to breathe consciousness into a humanoid robot, we therefore have to achieve a coupling of the robot with this field. In this respect, we must first gain a better understanding of the brain-field interface in order to take the first steps toward the realization of artificial consciousness.
Philadelphia, PA
Dear Keppler & readers,
I wonder if you might fill us in a bit more on your hypothesis.
You wrote:
From this perspective, the brain functions as a very flexible and adaptive resonator that couples to the modes of the zero-point field. To create artificial consciousness, i.e., to breathe consciousness into a humanoid robot, we therefore have to achieve a coupling of the robot with this field. In this respect, we must first gain a better understanding of the brain-field interface in order to take the first steps toward the realization of artificial consciousness.
---end quotation
I wonder in particular about your mention of "the zero-point field." I take it that you have in mind a quantum vacuum field? If so, I wonder how it could be that that the brain "couples to the modes of the zero-point field" and any existing humanoid robot, so far, does not. Wouldn't such "coupling" arise simply from the quantum physics?
I notice that you have an impressive number of related publications, so perhaps you have already spoken to this sort of question?
H.G. Callaway
Consciousness cannot be built.
Because:
The quantum state explained by Penrose and Hameroff's Orch-OR theory will explain the origin of the physiological and psychological conditions required by the definitions given by D A Gayan Nayanajith.
But I have pointed out an issue in the basics of Orch-OR theory, which says that a threshold time gives rise to the separation of eigenstates and that finally one eigenstate is retained for the observation of a particular object (the process is without decoherence); thus it is object-oriented. But in my view the object under observation is a part of the whole universe, and decoherence will not occur between two such quantum states. This will stimulate neurons, keep the system in a conscious condition, and remain continuous with the whole universe.
This condition makes the difference between a living thing and a non-living thing, and all such systems will be connected by a network.
So to make a conscious system, one has to create a coherent system that develops by itself until it acquires, on its own, the qualities defined for consciousness. Whatever we make from outside will be a decoherent system and cannot be conscious.
Internally, the matter particles and the force creating some other particle or field will be entangled (this we cannot create with machines) to originate consciousness, including life.
Please refer the following paper for details:
"New Hypothesis on Consciousness–Brain as Quantum Processor- Synchronization of Quantum Mechanics and Relativity" ,July 2019,DOI: 10.12691/ijp-7-2-1
Siva, behold: I posted a comment similarly confirming this idea in mathematics, but I ask you, where is the math? I.e., the bifurcation of n in math...
Dear Joachim Keppler, I am glad that you share my view on this issue. Yes, due to the progress made in the development of artificial intelligence and its applications, the possibilities of building structures that could simulate something called artificial consciousness are becoming more and more real. But we are talking about structures for simulating artificial consciousness, not really creating it. We are building ever more perfect humanoid robots, increasingly resembling the human figure, talking to people, processing ever larger amounts of data, able to perform more and more activities, etc. Equipped with artificial intelligence, they can function autonomously, which raises the possibility of equipping these autonomous humanoid robots with something we might call a simulation of artificial consciousness. However, it would be "just" a simulation of artificial consciousness, not a computerized, digitized equivalent of human consciousness.
In addition, this problem is related to the question of the full or incomplete autonomy of robots equipped with artificial intelligence, and possibly also with a specific form of simulated artificial consciousness. It cannot be ruled out that if robots were given simulations of artificial consciousness together with a high degree of autonomy, such robots could slip out of human control, and perhaps the further development of artificial intelligence technology could as well. Perhaps in such a situation the catastrophic visions currently known only from science fiction novels and films, for example "Terminator", could become real.
Thank you very much for your substantive contribution to our discussion. I am glad that you agree with my opinion on this point: that it is not enough to recreate the functioning of individual neurons using digital technologies in order to create a real artificial consciousness equivalent to human consciousness. Mechanistic views based on the formula that "neurons can be replaced with digital components", which suggest this possibility, rest on very simplified premises: that human consciousness is ultimately the result of computational processes taking place in neurons along which pulses of electric current flow, that the brain can be interpreted as a complex, multifunctional and powerful (maxi)microprocessor, and that the mystery of human consciousness arises from this unique mechanistic-electric-biochemical characteristic of the brain's neural architecture. But this kind of "mechanistic" view of the essence of human consciousness is a very simplified interpretation, and not very credible when we consider the achievements of the various sciences that study the human brain. Accordingly, you have raised an important point. Thank you for your participation and your valuable contribution to our discussion.
Best regards,
Dariusz Prokopowicz
Dear H.G. Callaway, Thank you for your participation in the discussion. You added very valuable comments and questions to our discussion of the possibility of creating artificial consciousness as a continuation of technological progress in the field of artificial intelligence. Greetings,
Dariusz Prokopowicz
Dear Siva Prasad Kodukula, I'm glad you share my opinion. I agree with you that an artificial consciousness similar to the human one cannot be built; I too find it impossible. Thank you very much for proposing an article on this important issue of the essence of consciousness. Best wishes,
Dariusz Prokopowicz
Dear Zachary Knutson, You added an interesting point to our considerations. Thank you for participating in our discussion. Best wishes,
Dariusz Prokopowicz
It is possible to create something similar, but never the same as a natural human. Great question.
Thank you.
It's too early to give evidence pro or contra.
Time will tell.
I would like first to address the Penrose argument in one of the posts, where the reference was given by Yasha Savele. The paper actually concerns intelligence and computability, which is different from consciousness and computability.
Even then, I would be careful about the notion of intelligence chosen by either Yasha Savele or Penrose to prove their point.
Now, given the question about consciousness, I would qualify my answer and give it based on the descriptive question posted in:
https://plato.stanford.edu/entries/consciousness/
Now, the article refers in section 4 to the following characteristics, and I will state whether each is doable or not:
Now, one might disagree with me on:
Regards
Dear all,
What do you think: who would pass the Turing test -- a stupid human, or a smart machine created by a wise human? Thinking?
Philadelphia, PA
Dear Prokopowicz & readers,
My inclination is to go at your question, above, in a more round-about manner, focusing first on the psychology and neuroscience of consciousness. We need a better account of consciousness in psychology before asking about the prospects of artificial consciousness. Perhaps that prospect will depend on the possibility of "artificial life"?
Pursuing a functional account of conscious experience, I've long thought that whether it might be possible to create artificial consciousness is an open question --and in part at least an empirical question. There is simply much that we do not know. "Multiple realizability" is almost an axiom of computational conceptions of functionalism and the "computer model of mind." But the most interesting advances in understanding consciousness are coming from neuroscience and the investigation of the neural correlates of consciousness. There is much talk of "information" and flows of information, in relation to neurophysiological processes, but I am puzzled and doubtful concerning the differences and contrast between the notions of "information" in philosophy of language and in computer science. I doubt that a purely physical or thermodynamic conception of information will do in cognitive psychology. I think we need to look into a specifically psychological or psycho-semantic conception of information.
See:
https://www.researchgate.net/post/What_is_Introspection_and_what_is_its_relation_to_language_Is_it_a_valid_method_of_research_in_psychology
H.G. Callaway
H.G. Callaway ,
' There is much talk of "information" and flows of information, in relation to neurophysiological processes, but I am puzzled and doubtful concerning the differences and contrast between the notions of "information" in philosophy of language and in computer science.'
You have nailed a considerable problem. The mathematical notion of information from a computer science perspective (which differs from that of physics) aims at the quantification of information (e.g., during transmission), whereas cognitive psychology aims at the management of the content of the message (i.e., the information within the message and what can be done with it for further processing or action). Contemplating how much information is in the message is only part of the problem; what can be done with it, if anything at all, is another question.
Regards
Start from the algorithm.
Basic understanding is essential.
Then we can speak about possibilities.
Of course it is possible: everything you can imagine already exists in some form.
So-called cyberspace is an artificial form of collective consciousness (C.G. Jung defined it a long time ago).
Ethical or not, AI goes beyond imagination.
But I still believe in human superiority.
At least for a while.
Dear H.G. Callaway, Thank you very much for your valuable participation in our discussion. The topic is multifaceted, complex and still developing. Thank you also for proposing an article on this interesting issue of the possibility of building artificial consciousness. Best wishes, stay healthy!
Dariusz Prokopowicz
Dear Arturo Geigel, Yes, it is interesting to compare the mathematical concept of information processed in various structures of information systems with the interpretation of the essence of information processed in organic neural structures. Thank you for participating in the discussion. Greetings,
Dariusz Prokopowicz
Dear Masa Radulovic, MSc, Thank you very much for your valuable contribution to our discussion. I also believe in the superiority of humanity over technology. Besides, it is man who creates technology, so it is unlikely that the development of artificial intelligence will escape from human control.
Best regards,
Dariusz Prokopowicz
When asked such questions, I turn to science fiction. In general, today’s science fiction is tomorrow’s reality.
https://www.youtube.com/watch?v=bggUmgeMCdc
Thank you @Dariusz,
Knowledge is power. If one understands the issue, one will not develop fear!
No body should be scared of technology.
Now it's a man who controls and develops maschines.
I purposely made a few "spelling mistakes" in my last answer.
Read philosophers, read Jung, enjoy developing your own creative thinking capabilities. Read poetry or enjoy art.
Advice for all:
Don't just click and share when online; think about what you actually consume via networks, etc.
Good luck and stay safe!
If a child cannot make a simple painting like a rainbow, the sunlight, or a tree, then the technology involved is being used wrongly (usually too much smartphone use).
The topic of this discussion is and will remain debatable. No doubt.
Philadelphia, PA
Dear Geigel & readers,
Thanks for your good words. I thought to explore a bit the question of what information is; and I think readers of the present thread will find the following video of interest for this question.
"Closer to Truth -- What is information?" The video runs about 26 minutes.
See:
https://www.youtube.com/watch?v=ekfG-PCk25g
Here's the producer's description:
What is information? Information is all the rage in science, changing how we think about fundamental questions. Information has many descriptions, some of them surprising. Why is Information so important to scientists and philosophers? Featuring interviews with Max Tegmark, Paul Davies, Seth Lloyd, Giulio Tononi, and Scott Aaronson.
---end quotation
I thought the interview with Seth Lloyd particularly suggestive; and we may want to follow up with Giulio Tononi at some point. But I will reserve comments until others have had a chance to look in on the video.
I think it would be useful to go over the differing concepts of information in physics, computer science (Shannon information) and elsewhere in the sciences. So perhaps you could expand a bit on the contrast between information in physics and in computer science --and their relation to entropy?
Comments invited.
H.G. Callaway
---you wrote---
You have nailed a considerable problem. The mathematical notion of information from a computer science perspective (which differs from that of physics) aims at the quantification of information (e.g., during transmission), whereas cognitive psychology aims at the management of the content of the message (i.e., the information within the message and what can be done with it for further processing or action). Contemplating how much information is in the message is only part of the problem; what can be done with it, if anything at all, is another question.
The only way to create a real artificial intelligence will be for the system to have its own survival and evolution as priorities, without depending on external systems.
H.G. Callaway ,
"Information" is one term that merits caution, because it is a very loaded term (I have gotten into heated debates on RG because the person had not read the original work by Shannon, which is crucial to understanding what he was doing). Because of this, it is easy to attach conclusions that do not derive from the theory. That being said, I will proceed.
Shannon's original work [1] analyzed a transmission over a noisy channel. It was produced independently of the notions of physics:
"Shannon, who had no direct interest in thermodynamics, independently developed a measure of information." [3]
as can also be seen from his selection of properties in his work [1, p. 10]. Reference [3] even states that the only reason it is called Shannon entropy is John von Neumann [3, p. 180].
A quote that will clarify Shannon's problem focus, with respect to my last comment, is:
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen” [1, p1]
Now, the main problem that the entropy equation answered was:
“Can we define a quantity which will measure, in some sense, how much information is “produced” by such a process, or better, at what rate information is produced?”[1,p. 10]
He immediately refines the question in a more circumscribed and clearer manner (taking out the notion of information):
‘Can we find a measure of how much “choice” is involved in the selection of the event or of how uncertain we are of the outcome?’ (*)
If, for example, you have a choice of either 1 or 0, then the answer is that you require 1 bit of information (i.e., the amount of information required for the selection); if after this decision (whichever one you choose) you face another such decision, then it is log_2(2^2) = 2 bits of information. From this observation, and understanding that we are looking for an average, we can arrive at Shannon's formula (see the sketch below).
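A minimal Python sketch of that formula (my own illustration, not from Shannon's paper):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # one binary choice: 1.0 bit
print(entropy([0.25] * 4))    # two successive binary choices: 2.0 bits
print(entropy([0.9, 0.1]))    # a biased choice carries less: ~0.469 bits
```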
Hope this helps
[1] A Mathematical Theory of Communication, by C. E. Shannon
http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
[2] The Physics of Information, by F. Alexander Bais and J. Doyne Farmer
https://staff.fnwi.uva.nl/f.a.bais/boeken/PhysicsOfInfo.pdf
[3] Energy and Information, by Myron Tribus and Edward C. McIrvine
http://www.esalq.usp.br/lepse/imgs/conteudo_thumb/Energy-and-Information.pdf
Philadelphia, PA
Dear Geigel & readers,
What you say seems fine so far, and I have no objections to make. But I am still left wondering about a chief question of interest: the relationship between Shannon information, on the one hand, and the concept of information processing in neuroscience. Obviously, it matters much more in neuroscience precisely what sub-system may be "communicating" with which other system, and the specifics of what may be communicated; it is not merely a matter of how much information may be transferred from "sender" to "receiver." But we want to know whether the concept of Shannon information is helpful in elucidating neuro-physiological "information processing." Does it, for instance, set some useful constraints?
Secondly, I am still wondering about the precise relationship between Shannon information and thermodynamic conceptions of information--as in the thought experiment of Maxwell's demon. The Demon, you'll recall, operates a gate between two compartments, and by observing the details of molecular movements of a gas, in the separated chambers, the demon is able to operate the gate so as to induce greater order and reduce randomness. If the gate were simply left open, then we would expect eventually a random distribution between the two chambers. But by informed operation of the gate, all the molecules can be concentrated on one side. Information about a system allows local decrease of entropy--though (given the 2nd law) only by means of an increase elsewhere.
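As a toy numerical illustration of the demon (my own sketch; the "mixing entropy" proxy is invented for illustration and is not a serious thermodynamic model):

```python
import math
import random

random.seed(0)

# Each molecule: (speed, chamber). Chambers start randomly mixed.
molecules = [(random.random(), random.choice("AB")) for _ in range(10_000)]

def mixing_entropy(mols):
    """Crude proxy: binary entropy (bits) of where the fast molecules sit."""
    fast = [(s, c) for s, c in mols if s > 0.5]
    p = sum(1 for _, c in fast if c == "A") / len(fast)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print("before:", mixing_entropy(molecules))    # ~1.0 bit: fully mixed

# The demon observes each molecule and works the gate:
# fast molecules are let into A, slow ones into B.
demon_sorted = [(s, "A" if s > 0.5 else "B") for s, _ in molecules]

print("after: ", mixing_entropy(demon_sorted))  # 0.0: order bought with information
```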
This is not the only "physical" concept of information; and I thought that the video interview with Seth Lloyd presented a distinct, very general concept of information in physics. This sort of thing seems to be getting a great deal of attention of late. My thought there was that any other "special science" might, in a similar way, suggest a corresponding concept of, say, "biological" or "psychological" information --by relation to its own distinctive concepts and recognized laws or empirical regularities. It does seem, in some discussions of neuro-physiological "information" and information processing, that common-sense concepts also become prominent. One may suspect, in consequence, that Lloyd's concept of information, or physical information, is simply a reformulation of standard or accepted physical theories. This might be considered a version of semantic information.
Finally, I think to remark on the concept of information in the context of traditional philosophical contrasts of "form" and "matter" (or "stuff" which may be "formed"). In the Aristotelian tradition, in particular, as it is usually put, "the forms" are in the things, and "pure matter," if there were such a thing, would be a pure potentiality of being "formed." But detailing particular Aristotelian "forms" becomes a matter of qualities or characteristics and distinguishing the "essential" (or defining) characteristics from those deemed, incidental or accidental --thus a matter of definitions. (For instance, "man is a rational animal.") In consequence, we might regard "information" about physical nature (or particular domains of the special sciences) as simply implications of accepted theory along with supporting evidence?
H.G. Callaway
H.G. Callaway ,
I will now give you my opinion and try to address at least the first part of your last post. Note also that what I deal with are artificial neural networks, not biological neurons. Yet, in my view, artificial neural networks (ANNs) have greatly influenced neuroscience intuitions.
You said:
'it matters much more in neuroscience precisely what sub-system may be "communicating" with which other system and the specifics of what may be communicated'
This is where, in my biased opinion, I diverge from current mainstream thinking, in that that assumption is more complicated to really state, according to my view of ANNs. We still have not pierced into the neuron to know "what it is thinking". From the ANN perspective, what we can say is that, given an input, an output is generated. This is given by the weights of a matrix that represents the mechanism of "excitation for firing" of a neuron. If an output neuron is sufficiently "stimulated", then it fires. Now, the way we achieve this is through "learning": by presenting samples, the "learner" adjusts its behavior until it reaches the desired accuracy (a minimal sketch follows below).
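For concreteness, a minimal sketch of this present-and-adjust loop, using a single artificial neuron and an invented toy task (logical OR), might look like this:

```python
# Training samples for an invented toy task (logical OR): input -> target.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def fire(x):
    """The neuron 'fires' (outputs 1) if its weighted excitation crosses threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# 'Learning': present samples and adjust the weights until the outputs match.
for epoch in range(20):
    for x, target in samples:
        error = target - fire(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([fire(x) for x, _ in samples])  # [0, 1, 1, 1]
```

Nothing in this loop tells us "why" the learner settled on these particular weights; it only tells us that the input-output behavior now matches the samples.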
Note that this "test" given to the learner is just a multiple choice between inputs and alternative outputs. Most situations that we encounter are more complicated than a multiple-choice question (my suspicion, from the fMRI papers that I have read, is that something similar is happening in neuroscience; I would like the neuroscientists in this thread to provide counterexamples to refute the argument, so that I can adjust my position if necessary).
To this we can then ask in Shannon's words:
‘Can we find a measure of how much “choice” is involved in the selection of the event or of how uncertain we are of the outcome?’
Yes, the answer that we get from his formulas is informative in terms of the problem and its process, but it does not address more fundamental questions about the learner, such as why it chose the output and how to efficiently correct a "stubborn" learner.
The problem is similar with reinforcement learning, and worse in unsupervised learning, since the labels, after the clusters are found, are given by the learner.
------
The problem with alternative definitions of information is that, as currently presented, they do not fit into the quantification mindset of the scientific community. With this in mind, I would say yes to your last question; the problem is how to quantify it (but is there a need for quantification?).
Hope this helps
Dear H.G. Callaway and Arturo Geigel ,
The relationship between humanity and AI seems to have a common link known as the physical constructal law.
I recently presented a simple human relationship to the constructal law at this year’s Thermodynamics 2.0 Conference:
https://www.youtube.com/watch?v=ZvD4DHMq1Y4
Heyer's research includes AI's relationship to the constructal law, relative to the following:
“The rapid evolution of AI is putting the human relationship with information machines under a bright spotlight. The tenets of the Constructal Law (CL) are that all systems of energetic flows will evolve to facilitate the flows through them, to increase access to currents, or to maximize the efficiency of flows through them. When applied to infonomics, the evolution of information systems can be studied on the same basis as the physical systems with which they interact. For example, the evolution and efficiency of the information flows of marketing and advertising can be directly linked to the physical flows of buying and selling merchandise, the measure of which is movement of physical objects through the environment. This is the study of constructal infonomics.”
https://constructalinfonomics.org/
The way I see it, the flows involved in "buying and selling merchandise" are a "baby step" towards tomorrow's convolution of flows, when AI's sensory perception includes sight, touch, smell, taste, and sound relative to "constructal infonomics"; hence, "artificial consciousness".
Philadelphia, PA
Dear Geigel & readers,
Thanks for your reply and the sketch of your approach to the present question. I appreciate your openness on the question of the applicability of various concepts of information as relevant to neuro-physiological processes.
I do understand something of the broad analogy involving artificial neural networks and neuro-physiological architecture. No doubt, there has been some influence going in either direction. It seems to be chiefly a matter of analogies and disanalogies and suggestive hypotheses arising from similarities and differences. I have come across expressed interest among the neuroscientists in getting at the activities of individual neurons. This is only very rarely possible, partly due to ethical concerns and reasonable restrictions.
You wrote:
We still have not pierced into the neuron to know "what it is thinking". From the ANN perspective, what we can say is that, given an input, an output is generated. This is given by the weights of a matrix that represents the mechanism of "excitation for firing" of a neuron. If an output neuron is sufficiently "stimulated", then it fires. Now, the way we achieve this is through "learning": by presenting samples, the "learner" adjusts its behavior until it reaches the desired accuracy.
---End quotation
Here you seem to be concerned with AI (and massively parallel architecture?) as associated with "big data" and self-programming or "computer learning." But the emphasis in neuroscience is more on the interrelations of "modules" or sub-systems, especially as this concerns the efforts at demarcation of the neural correlates of consciousness and the differences between conscious and unconscious psychological processes. Of particular interest are the various long-distance connections between subsystems and the complexities of connections among neurons regarding both "inputs" and "outputs." I wonder if you are aware of Dehaene's work?
Familiar mathematics can sometimes be an impediment as well as a help. Consider that the reaction to Heisenberg's "matrix mechanics" (in contrast to Schrödinger's) was partly conditioned by physicists' lack of familiarity with the mathematics of the matrix. I am reminded that even the most "conceptual" approach may fairly be viewed as a matter of model theory--closely related to set theory, of course.
H.G. Callaway
“Artificial consciousness” does not imply real consciousness (the way humans perceive awareness). It implies, relative to the Turing test, that a human should be unable to distinguish the machine from another human during a philosophical dialogue on the subject of consciousness.
The question then becomes: what is the dialogue between two humans that settles the philosophy of consciousness? Perhaps we should solve that before AI answers the question.
https://www.forbes.com/sites/cognitiveworld/2020/02/11/our-reality-and-why-consciousness-is-important/?sh=66f323d7482f
Philadelphia, PA
Dear Geigel & readers,
Here is a short video from the "Closer to truth" series titled "Confronting Consciousness."
See:
https://www.youtube.com/watch?v=zGvcz-ht_MU
I quote the description of the video:
What is consciousness? Consciousness is what mental activity feels like, the private inner experience of sensation, thought and emotion. Consciousness is like nothing else. Featuring interviews with David Eagleman, Warren Brown, Keith Ward, Christof Koch, and Tim Bayne.
---end quotation
I thought the opening interview with neuroscientist Christof Koch especially interesting. He takes the view that consciousness is an emergent phenomenon arising from complexity, and that the best way to approach the understanding of consciousness is through information theory. (This should remind us of the question of what concept of "information" we are going to find most useful.) He is open to the possibility of "artificial consciousness."
Some of the gloss commentary on Koch is perhaps misleading. He speaks of a needed, new element in science, but explicitly rejects the idea that it must be something in "fundamental" physics. He rejects over-emphasis on the "hard problem" as "defeatist." He also rejects reductionism. The emphasis is on complexity of networks and information theory. Conscious experience is a kind of information?
I also thought the later interview with Warren Brown quite good.
Comments invited.
H.G. Callaway
Dear H.G. Callaway ,
The title of the video says it all: “Confronting Consciousness -- Closer to Truth”. Closer to “Truth” implies that we do not yet know the truth about consciousness. So how can we talk about, or try to identify, “artificial consciousness”? Perhaps humanity is at the artificial stage of consciousness, because we have not yet come to know the truth about consciousness.
One thing we do know: it was the physical laws of nature that created consciousness within the universe. In time, our dendritic configuration of neurons, or perhaps the dendritic configuration of network computing, will one day evolve to discover that “Truth”.
Artificial intelligence may create consciousness. But what would it mean for the biosphere? Human consciousness is destroying the biosphere; what type of role artificial consciousness will play is indefinite.
Our artificial world has destroyed the biosphere and the intelligently woven life processes on Earth. Artificial intelligence has to prosper and intelligent organisms have to perish: such are our scientific applications. Scientific applications without ethics.
Dear colleagues,
My answer to this question is "definitely".
Check my work on AI in my RG profile.
Regards.
Kindly check https://www.wired.com/story/how-to-build-a-self-conscious-ai-machine/amp
Also check http://www.cs.yale.edu/~dvm/papers/conscioushb.pdf
Philadelphia, PA
Dear Awuchi & readers,
Thanks for your suggestions.
Here is the abstract of the McDermott 2007 paper:
"Artificial Intelligence and Consciousness,"
Drew McDermott, Yale University
(This paper is essentially the same as that published as chapter 6 (pages 117–150) of Philip David Zelazo, Morris Moscovitch, and Evan Thompson (eds.) 2007 The Cambridge Handbook of Consciousness. Cambridge University Press.)
Abstract:
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless AI is successful in finding computational solutions of difficult problems such as vision, language, and locomotion.
---end quotation
See:
http://www.cs.yale.edu/~dvm/papers/conscioushb.pdf
I wonder if you might have any comments on the thesis or arguments of the paper? The idea seems to be that "artificial consciousness" is an experimental problem and question. I'm sympathetic to that kind of answer.
H.G. Callaway
H.G. Callaway ,
My post was more by way of example of a deeper criticism that I have of understanding in general, as demonstrated in my field of AI and in what I have read of neuroscience.
I go deeper into how we set up experiments and how we construct models from them in order to understand. The experimental setups in both machine learning and neuroscience seem to take a compartmentalised view of a very complex process, in an aim to find the "modules" that do the processing. For example, if you take an experiment aimed at showing a module, a typical setup would be to reduce the inputs in order to reduce the number of variables to measure. The question is how we know that this will yield a correct interpretation of the phenomena. There is an urban legend in machine learning that goes as follows:
"One example that always pops into my head is how one neural network learned to differentiate between dogs and wolves. It didn’t learn the differences between dogs and wolves, but instead learned that wolves were on snow in their picture and dogs were on grass. It learned to differentiate the two animals by looking at snow and grass. Obviously, the network learned incorrectly. What if the dog was on snow and the wolf was on grass? Then, it would be wrong."[1][2].
The problem with complex machinery is that we do not know whether what is in this tale is happening inside. One counterargument would be that we use sufficiently controlled experiments so that, with the law of large numbers, we can magically make these problems disappear. The other counterargument would be that we vary the conditions to exclude such phenomena. I have not seen an experimental setup that truly guarantees this, since the only way to know for sure is to know what the neurons are "thinking". The root of the problem is that we are trying to understand the brain using techniques in which all variables are observable, and this is not the case with neurons and the brain (whether artificial or real). A synthetic version of the tale is sketched below.
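As a synthetic sketch of the tale (all features and data invented for illustration), one can train a tiny logistic-regression "learner" on data where a "background" feature carries the label, then swap the backgrounds at test time:

```python
import math
import random

random.seed(0)

def make_sample(is_wolf, background_follows_label=True):
    """Feature 0: noisy 'animal shape' cue. Feature 1: background (snow=1, grass=0)."""
    shape = (1.0 if is_wolf else 0.0) + random.gauss(0, 1.5)  # weak, noisy signal
    snow = is_wolf if background_follows_label else not is_wolf
    return [shape, 1.0 if snow else 0.0], 1 if is_wolf else 0

train = [make_sample(i % 2 == 0) for i in range(2000)]

# A tiny logistic-regression 'learner', trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in train:
        pred = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = pred - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

print("weights (shape, snow):", w)  # the background weight dwarfs the shape weight

# Swap the backgrounds at test time: dogs on snow, wolves on grass.
test = [make_sample(i % 2 == 0, background_follows_label=False) for i in range(1000)]
hits = sum((w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1) for x, y in test)
print("accuracy with swapped backgrounds:", hits / len(test))  # far below chance
```

The learner scores well in training yet collapses below chance once the spurious cue is reversed, exactly as in the dog/wolf story.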
Before casting me as an extreme skeptic: I do think that we are going to make progress as we put aside our current compartmentalised view and our attachment to closed formulas [3]. The reality that we live in as scientists is messy. In a sense, we have taken the first step by accepting computational methods that need a lot of data to get more accurate models (I hate the loaded term "Big Data"). Now we need to evolve towards more complex and realistic scenarios, from which we can take snapshots of the brain's processing and build more realistic models of the brain and its processing.
Hope this sheds more light on my position
[1] https://medium.com/veon-careers/dogs-wolves-data-science-and-why-machines-must-learn-like-humans-do-213b08036a10
[2] There are many variants of this urban legend, including one on U.S. vs. Russian tanks, tanks vs. trees, and the list goes on.
[3] This is a criticism of models in a similar vein to Nancy Cartwright's argument of how the laws of physics lie.
Arturo, I disagree completely, but you have the right to keep your opinions. However:
The whole conception of the mind implies the rarity of complementary components of a neuron...
Zachary Knutson ,
Could you please explain what you mean by "the whole conception", and also the premises you are using to reach your implication?
H.G. Callaway ,
"I thought the opening interview with neuroscientist Christof Koch especially interesting."
You are totally correct, and I agree with you that some of the things stated by him are misleading. I would not put them into the category of a fundamental force. I still like his old position.
Thanks for sharing the video and as always your insightful commentary, it is very much appreciated.
Regards
Philadelphia, PA
Dear Geigel & readers,
Thanks for your replies. I'm glad to know that some of what I've written here may prove useful.
I'm wondering what your take may be on the classic paper due to Crick and Koch, "Towards a Neurobiological Theory of Consciousness."
I quote the abstract:
"Towards a neurobiological theory of consciousness"
Francis Crick and Christof Koch
Visual awareness is a favorable form of consciousness to study neurobiologically. We propose that it takes two forms: a very fast form, linked to iconic memory, that may be difficult to study; and a somewhat slower one involving visual attention and short-term memory. In the slower form an attentional mechanism transiently binds together all those neurons whose activity relates to the relevant features of a single visual object. We suggest this is done by generating coherent semisynchronous oscillations, probably in the 40-70Hz range. These oscillations then activate a transient short-term (working) memory. We outline several lines of experimental work that might advance the understanding of the neural mechanisms involved. The neural basis of very short-term memory especially needs more experimental study.
---End quotation
See:
https://authors.library.caltech.edu/40352/1/148.pdf
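One way to make their binding-by-synchrony proposal concrete is to quantify phase locking. Here is a minimal sketch (my own illustration, not from the paper) of a phase-locking value between two simulated oscillators in the 40-70 Hz band:

```python
import cmath
import math
import random

random.seed(0)

fs, f, n = 1000.0, 40.0, 1000   # sampling rate (Hz), oscillation (Hz), 1 s of samples

def plv(jitter):
    """Phase-locking value |mean exp(i*(phi1 - phi2))| for two 40 Hz 'neurons'.
    Neuron 2 follows neuron 1 with Gaussian phase jitter."""
    total = 0j
    for t in range(n):
        phi1 = 2 * math.pi * f * t / fs
        phi2 = phi1 + random.gauss(0.0, jitter)
        total += cmath.exp(1j * (phi1 - phi2))
    return abs(total / n)

print(f"tightly synchronized (small jitter): PLV = {plv(0.1):.3f}")     # near 1: 'bound'
print(f"unrelated phases (large jitter):     PLV = {plv(math.pi):.3f}")  # near 0
```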
I recall that this work and position have much to do with analysis of optical illusions. Do optical illusions also enter into work on AI? Reproducing some of them would seem to be a pretty daunting task.
H.G. Callaway
H.G. Callaway ,
Before I state my opinion, I want to say that the paper is brilliantly put for the time it was written. I am able to make some comments based on the hindsight that I have, and this should not belittle the contribution that they made in structuring the subject. Also, as you already know, I am basing my biased comments on my position as an outsider to neuroscience.
Comments
“There is also the problem of qualia” (p. 264). This, from my perspective, is completely true, since even in ANNs we see this phenomenon on a continuous scale, based on the number of epochs (iterations) of training and the weights acquired by the neurons in the ANN.
“Johnson-Laird1 ... that there is an operating system at the top of the hierarchy” (p. 265). I certainly think that this falls under what the authors wanted to avoid, namely the computer metaphor of “the pernicious influence of the paradigm of the von Neumann digital computer” (p. 254). Also, from my experience with ANNs, one can configure a system that is distributed without such an operating system, and merely activate the desired regions by the neurons that have the highest activation (“shout the most”).
“The problem at the neural level then becomes: 1. Where are these neurons in the brain? 2. Are they of any particular neuronal type? 3. What is special (if anything) about their connections? 4. What is special (if anything) about the way they are firing?” (p. 266). These are precisely the questions that are relevant to me when I speak of the “thinking of the neuron”. I do not mean it literally, but rather: how do the chemical impulses lead to a particular strength of activation, and what type of transmission takes place to activate a bundle of neurons? While most ANN models neglect the particular workings of the neuron, I think they are important. The reason is that, with such a complex system, the more we deviate from the actual physical description, the more error will accumulate. This error will need compensation, and this is where we start to drift away from the reality of the system.
“We suggest that convergence zones may mainly refer to the neurons (or a subset of them) that project 'backwards'” (p. 267). I would love more up-to-date information on this type of feedback, especially differential-equation modeling of such processing, so if anyone has references on this available, they will be welcome. I am familiar with recurrent NNs but would love other particular neuroscience examples of feedback loops (a toy sketch of what I mean follows at the end of these comments).
“The binding we are especially concerned with is a third type, being neither epigenetically determined nor overlearnt. It applies particularly to objects whose exact combination of features may be quite novel to us” (p. 269). I think that their focus is the most interesting one to pursue since, in my opinion, this constitutes real learning.
“Where this map might be located is unclear but parts of the thalamus, such as the pulvinar, might be involved” (p. 271). This is where, in my opinion, the operating-system metaphor and our boxing into modules interfere with progress. While some localisation may be possible, how we would find this overall operating system is what I personally doubt, though I think they are correct that “Once a particular salient location has been selected, probably by a winner-take-all mechanism”; I doubt, however, that it sits in a single location. To put my comment in perspective, one can use [1] to classify the level of granularity of the localisation. The problem lies in how much resolution can be achieved and discriminated from noise (going down the ladder of Fig. 1 on page 742).
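I cannot offer a neuroscience reference of my own, but as a toy sketch of the differential-equation style of feedback modeling mentioned above (the two-population structure and all parameters are invented for illustration), consider an "early" population E driving a "late" population L that projects "backwards" onto E:

```python
import math

def sigmoid(x):
    """Smooth firing-rate nonlinearity."""
    return 1 / (1 + math.exp(-x))

# Two coupled populations: 'early' E receives the stimulus;
# 'late' L is driven by E and projects back ('backwards') onto E.
tau_e, tau_l = 10.0, 20.0   # time constants (ms)
w_fwd, w_back = 2.0, 1.5    # forward and feedback weights
dt = 0.1                    # Euler integration step (ms)

e = l = 0.0
for step in range(5000):    # 500 ms of simulated time
    t = step * dt
    stimulus = 1.0 if 100.0 <= t < 300.0 else 0.0
    # dE/dt = (-E + f(stimulus + w_back * L - threshold)) / tau_E
    de = (-e + sigmoid(stimulus + w_back * l - 1.0)) / tau_e
    # dL/dt = (-L + f(w_fwd * E - threshold)) / tau_L
    dl = (-l + sigmoid(w_fwd * e - 1.0)) / tau_l
    e, l = e + dt * de, l + dt * dl
    if step % 1000 == 0:
        print(f"t = {t:5.1f} ms   E = {e:.3f}   L = {l:.3f}")
```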
To answer your question about illusions in AI: it is not common to do so, though a good data set is supposed to have a level of difficulty in order to be useful for testing. The metric, though, is not standardised.
There is some research on the subject such as:
Watanabe, E., Kitaoka, A., Sakamoto, K., Yasugi, M., & Tanaka, K. (2018). Illusory motion reproduced by deep neural networks trained for prediction. Frontiers in psychology, 9, 345.
https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00345/full?source=post_page-----dc303056171----------------------
but it is not standard
Hope this helps
[1] Churchland, P. S., & Sejnowski, T. J. (1988). Perspectives on cognitive neuroscience. Science, 242(4879), 741-745.
https://patriciachurchland.com/wp-content/uploads/2020/05/1988-Perspectives-On-Cognitive-Neuroscience.pdf
Philadelphia, PA
Dear Geigel & readers,
Thanks for your reply. I'm afraid I have to say, though, that what you say does not seem to bring me much further with our question!
I will go through the Crick and Koch paper again to see if that helps. Perhaps the problem is that I am an outsider to artificial intelligence?
More later.
H.G. Callaway
---you wrote---
Before I state my opinion, I want to say that the paper is brilliantly put for the time it was written. I am able to make some comments based on the hindsight that I have, and this should not belittle the contribution that they made in structuring the subject. Also, as you already know, I am basing my biased comments on my position as an outsider to neuroscience. ...
H.G. Callaway ,
So far I have been answering the more specific questions that you have raised on information and Koch's paper.
I will now try to address what I think is the more general question that you are after. Let us postulate, Lycan-style [1]:
(d) "My pain at t = the firing of my c-fibers at t"
Now, the first problem to figure out is how to tackle the equivalence relation from the:
1) Biological perspective
2) Philosophical perspective
Then you have to establish the following equivalences:
(a) An artificial consciousness feels pain at t = my pain at t
and
(b) The firing of an artificial consciousness's circuits at t = the firing of my c-fibers at t
Then you can finally say:
(c) An artificial consciousness feels pain at t = the firing of an artificial consciousness's circuits at t
From my perspective, I think that notions of information are not helpful in getting this question answered, nor are the more theoretical notions of computation. The question also lies not within abstract computational theory but in actual systems design and construction from a computer science perspective (I am not adjudicating between a systems view and the neural correlates of consciousness, merely equating both so that they are compatible). From this point of view, I think that (b) is doable from a computer science and hardware perspective (sensor design for feeling and perceiving).
Now there are two hurdles.
The first is granting (a). This is where my comments from my previous post, on the characteristics found in:
https://plato.stanford.edu/entries/consciousness/
come in, as to which characteristics are doable or not.
If we agree on this, then we can overcome (a) and also (c), since the system as a running component will be the proof. If not, then the problem remains.
The second is accepting (c), and that it as a whole equals (d).
I also think that this problem is different from generating emergent behavior that is complex enough to simulate the complex behavior of humans.
The problem can be complicated further from this point onward, once the above is agreed upon, in my opinion.
Hope this is more in line with what you are looking for
Regards
[1] See Consciousness by William Lycan for a good history of arguments behind the statement.
Dear H.G. Callaway ,
I think you asked about how information theory fits in with artificial and natural consciousness. The answer is that the same limits on the lossless compression of information sent across a network apply, no matter whether that network is the human brain or a telecommunications network. It would be interesting to know how efficient the human brain is in its communications (see the sketch below).
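As a rough illustration of that limit (a sketch with an invented source, not neural data), one can compare a general-purpose compressor against the entropy bound:

```python
import math
import random
import zlib

random.seed(0)

# A biased binary source: p(1) = 0.1, entropy ~ 0.469 bits per symbol.
p, n = 0.1, 100_000
symbols = bytes(1 if random.random() < p else 0 for _ in range(n))

h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
print(f"entropy bound: {h:.3f} bits/symbol -> at least {n * h / 8:.0f} bytes")

compressed = zlib.compress(symbols, 9)
print(f"zlib output:   {len(compressed)} bytes")  # approaches, never beats, the bound
```

No lossless scheme can beat the entropy bound on average; practical compressors only approach it.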
Best wishes,
Andrew
However, the artificial intelligence system is the highest form of technological development in the current era; despite its advantages, an administration's reliance on it in all of its activities, and the legal effects that follow, may be fraught with risks due to the errors that artificial intelligence may produce.
Dear Andrew Powell,
Yes, it is important to improve the lossless compression of information sent over the network. But in the context of the technological progress taking place over subsequent years, the improvement of artificial intelligence technology, and the improvement of data transfer systems, the question arises: under what conditions, and with what characteristics of new technologies, could artificial consciousness exist within the structures of a specific, previously created artificial intelligence improved in the future?
What's your opinion on this topic?
Please reply
Best regards,
Dariusz Prokopowicz
Dear Jasim Mohammed Hamzah Mahaweelee,
Yes, the development of artificial intelligence may generate new categories of technological operational risk. Therefore, it is necessary to improve the legal norms defining the possibilities of using artificial intelligence in manufacturing and production processes, in artificial intelligence replacing human work, and in the taxation of enterprises whose production processes are carried out by artificial intelligence.
Best wishes,
Dariusz Prokopowicz
Dear Arturo Geigel,
Thank you very much for the interesting, substantive explanation of your opinion on the mechanisms of action and processes forming the operation of consciousness.
Best wishes,
Dariusz Prokopowicz
Dear José Miguel Belisario Gavazut,
Thanks for the answer. I'm glad we think the same. Thank you for your answer to the question: Will it be possible to build artificial consciousness in the digitized structures of artificial intelligence?
Regards,
Dariusz Prokopowicz
Dear Dariusz,
By the way, energy, logic, microprocessors and memory storage drives are already there for AI consciousness to be developed.
Just a matter of some time.
Regards.