We have talked so much about the nature of thinking and language. Epistemologically, it seems to me we are beating around the bush. Let us put the question very bluntly: is it feasible or not? If yes, how? If no, why not?
Truly speaking, the aim is not so much creating cyborgs that feel, etc., in the very same sense humans do; no one would reasonably argue for that. The aim, on the contrary, is creating cyborgs, or whatever we choose to call them, capable of feeling, making choices, and acting on their own. The literature on this is large, and a careful reading supports my point.
Many bets, if not all, are on indeed. The real question is not whether or not that is feasible. The real question is when we shall attain that point in techno-scientific research.
It has been said that this singularity will presumably be reached some time in the next 20 to 40 years. I do not see any danger in that endeavor, unless, of course, one's ego is too big to accept "something" better, or different, or higher than we ourselves.
I like it when you said "The real question is when we shall attain that point in techno-scientific research." So we agree that it is feasible. Now let me ask a question which is in fact a corollary to your suggestion: why not now?
A second question, this time in response to your statement "No one would reasonably argue that": why not, again? Of course this cyborg will be a thousand times smarter than Homo sapiens, but why not make it in our image after all? Can we imagine things we do not really know?
Well, I think the bush will always be there when we want androids or cyborgs thinking and feeling like humans. Which humans do you mean? The biggest difference of all is between men and women, after all. OK, so we have two kinds of androids. Let them think. Let them follow laws (Asimov's, for example). Does this make them human in the sense that we humans follow laws? No, I don't think so. Humans usually try to use laws to gain some advantage, and if androids were to follow this path we humans would have a tough future to bear. Androids may think logically, but we don't. Studies have suggested that we do not even have free will, because the brain acts on senses and feelings before we consciously decide.
Feeling like humans is another challenge, one I cannot yet imagine solving. Why do psychologists have to carry out so many experiments on humans to find out what they feel and why? Because we are different, and we develop. Understanding what triggers all this development is a prerequisite if we want to build a surrogate human in the sense of the question.
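The law-following android described above can be read as a strict priority filter over candidate actions. A minimal toy sketch (all names and the action encoding are invented here) shows why such a machine cannot "use laws to gain some advantage": there is simply no goal term to gain anything for.

```python
# Toy sketch: applying Asimov-style laws as a strict priority filter over
# candidate actions. Every name and the action model are hypothetical.

# Each law is a predicate: True means "this law permits the action".
LAWS = [
    lambda a: not a.get("harms_human", False),  # 1st law: never harm a human
    lambda a: a.get("obeys_order", True),       # 2nd law: obey orders
    lambda a: not a.get("harms_self", False),   # 3rd law: protect itself
]

def permitted(action):
    """An action passes only if every law, in priority order, allows it."""
    return all(law(action) for law in LAWS)

candidates = [
    {"name": "push_human", "harms_human": True},
    {"name": "fetch_tool", "obeys_order": True},
    {"name": "self_destruct", "harms_self": True},
]

allowed = [a["name"] for a in candidates if permitted(a)]
print(allowed)  # ['fetch_tool']
```

A human, by contrast, would treat each law as a constraint to optimize around, which is exactly the asymmetry the post points at.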
About human feelings and our tendency "to use laws to gain some advantage": this is, to my mind, the easiest challenge. Let us consider for a second that we are logical machines with feelings that distort our vision of reality (right now I am thinking of Plato's cave metaphor). We lose, to various degrees of course, our capacity for logical judgment once we are in a context that triggers our emotions (love, hatred, envy, desire, etc.).
Seen from this perspective, we can consider that all emotions function like viruses that attack our system. But these viruses are dormant. They need a context, the right amount of pre-destabilizing background, and of course the right illusion to make the individual honestly think he or she is going through a unique and original experience.
So, at the end of every analysis, emotions are like dormant viruses, and they can be artificially recreated using suitable algorithms.
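The "dormant virus" picture can be sketched as a trivial state machine: an emotion stays inactive until a context signal crosses a trigger threshold, after which it distorts the agent's judgment. This is purely illustrative; the thresholds and the distortion model are invented for the example.

```python
# Toy model of the "dormant virus" view of emotion: an emotion lies
# inactive until the context crosses its trigger threshold, then it
# distorts judgment. All numbers here are invented for illustration.

class DormantEmotion:
    def __init__(self, name, threshold, distortion):
        self.name = name
        self.threshold = threshold    # how strong a context wakes it
        self.distortion = distortion  # how much it skews judgment once awake
        self.active = False

    def expose(self, context_intensity):
        """Activate the emotion if the context is strong enough."""
        if context_intensity >= self.threshold:
            self.active = True

    def judge(self, rational_estimate):
        """Return a judgment, distorted only while the emotion is active."""
        if self.active:
            return rational_estimate * (1 - self.distortion)
        return rational_estimate

envy = DormantEmotion("envy", threshold=0.7, distortion=0.5)
print(envy.judge(1.0))   # dormant: undistorted -> 1.0
envy.expose(0.9)         # a strong enough context wakes the "virus"
print(envy.judge(1.0))   # active: judgment shrinks -> 0.5
```

The point of the sketch is only that "dormant until triggered" is algorithmically trivial; whether the triggered state resembles a felt emotion is the open philosophical question.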
Joachim,
I like your contribution because I know you know your field of research, and I truly admire your technical expertise. Had I half your skill, I would be able to create the first self-sufficient homo-cyborg. But I'm just a philosopher and a theoretician.
You pointed out a very important fact: "The problem is power supply: accumulators don't have the necessary energy density yet that is needed to let them walk around a couple of hours."
Let's go back to the essentials. In order to survive, humans need air, water, and food. Notice that at least water and air are electromagnetic wave transmitters. So, to my mind, the key to a self-sustained power supply lies in this very simple fact: we have to create accumulators that can receive electromagnetic waves (radio waves, for example) and transform those waves into energy. This is, to my mind, the best way to deal with this serious challenge.
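Whether ambient radio waves can power a walking machine is a question of numbers, and the free-space received power is easy to estimate with the standard Friis transmission equation. The transmitter scenario below (power, frequency, distance) is an illustrative assumption, not taken from the discussion.

```python
import math

# Estimate the power harvested from a broadcast transmitter using the
# Friis free-space equation: P_r = P_t * G_t * G_r * (lambda / (4*pi*d))**2
# The scenario numbers below are illustrative assumptions.

C = 3.0e8  # speed of light, m/s

def friis_received_power(p_t, g_t, g_r, freq_hz, distance_m):
    """Received power in watts under ideal free-space propagation."""
    wavelength = C / freq_hz
    return p_t * g_t * g_r * (wavelength / (4 * math.pi * distance_m)) ** 2

# A 100 kW FM broadcast at 100 MHz, received 10 km away, unity-gain antennas:
p_r = friis_received_power(p_t=100e3, g_t=1.0, g_r=1.0,
                           freq_hz=100e6, distance_m=10e3)
print(f"received power: {p_r * 1e6:.0f} microwatts")  # on the order of tens of uW
```

Tens of microwatts will run a sensor, but it is many orders of magnitude below what legged locomotion draws, which is why RF harvesting is usually proposed for trickle-charging rather than as a primary supply.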
Another issue arises and should be resolved, and it concerns the differences in each robot's identity. It is a paradigmatic distinction in terms of the software technology used: we have to consider the method the robot will use to decide about unknown issues. If we install self-evolving mechanisms in the robot's kernel reasoners, we have to face the current limitations [learning, computation, and prediction]; if, on the contrary, the robot is a "point" of a meshed framework, the risk of repetitive identities [the old paradigm of "terminals"] is high and oriented toward "service systems", an ecosystem more repetitive than we would like to have.
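The trade-off just described, a self-evolving kernel on board versus a "terminal" of a meshed framework, can be sketched as two implementations of one decision interface. Both classes below are hypothetical illustrations, not real systems.

```python
import random

# Sketch of the two architectures contrasted above. Both classes are
# hypothetical illustrations of the trade-off, not real systems.

class OnBoardLearner:
    """Self-evolving kernel: local experience drifts each robot's policy,
    so identities diverge (at the price of on-board learning limits)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.threshold = 0.5                 # local state that drifts

    def decide(self, issue_difficulty):
        self.threshold += self.rng.uniform(-0.1, 0.1)  # local adaptation
        return issue_difficulty < self.threshold

class MeshTerminal:
    """Mesh 'point': every robot defers to one shared model, so all
    terminals answer identically -- the 'repetitive identities' risk."""
    SHARED_THRESHOLD = 0.5                   # one model for the whole mesh

    def decide(self, issue_difficulty):
        return issue_difficulty < self.SHARED_THRESHOLD

a, b = OnBoardLearner(seed=1), OnBoardLearner(seed=2)
t1, t2 = MeshTerminal(), MeshTerminal()
print(t1.decide(0.4) == t2.decide(0.4))  # True: terminals never disagree
print(a.decide(0.5), b.decide(0.5))      # learners can drift apart
```

The sketch makes the dilemma concrete: divergence buys individuality at the cost of on-board limits, while the mesh buys capability at the cost of identical "identities".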
All questions related to identity, creativity, decision-making, and human-like language are theoretically answerable. The only thing I still do not have an answer for, and here I need your help and the help of all the interested colleagues, is how we guarantee that this homo-cyborg will not take over. What can we do to know for sure that we will remain in control? Remember, if we give him/her free will, with all the knowledge stored in his/her brain, all the capacity for reasoning, and the ability to shut down emotions in times of need, it is very likely that we will be his/her slaves. I'm thinking of creating a central obedience virus inside the point of the meshed framework of the homo-cyborg. Another way is to create the illusion that we are gods and he/she has to follow our will. But to be honest with you, I can't find any satisfying answer yet.
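The "central obedience virus" amounts to a non-negotiable veto layer sitting between the cyborg's reasoning and its actuators. A minimal sketch of that control pattern follows; all names are hypothetical, and whether such a layer could survive a self-modifying intelligence is precisely the open question.

```python
# Sketch of an "obedience layer": a veto check between the reasoner and
# the actuators. All names are hypothetical; whether such a layer is
# tamper-proof against a self-modifying reasoner is the unresolved issue.

FORBIDDEN = {"harm_overseer", "disable_override"}

def obedience_filter(planned_action):
    """Veto any action on the forbidden list before it reaches actuators."""
    if planned_action in FORBIDDEN:
        return "refused"
    return "executed"

print(obedience_filter("fetch_water"))       # executed
print(obedience_filter("disable_override"))  # refused
```

The sketch also exposes the weakness the post worries about: nothing in it prevents a sufficiently capable reasoner from rewriting `FORBIDDEN` itself.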
In terms of pure computation [thus, apart from any matter of identity, spirit of self, and even more of gnosis], some problems linked to decision-making, conflict situations, and altruism have already been shown in numerous experiments with "communities" of robots, that is to say, groups of autonomous robots placed in ensembles that share common interests and functions. Please have a look at the relevant literature published by the Laboratory of Intelligent Systems at Lausanne; you will be surprised to learn that, under some conditions, robots showed proto-autonomous decision-making strategies, tending to override the interests of others.
By the way, we should consider decision-making as just one part of the faculty of judgement. In terms of neuro-psychiatry, we also have to consider that a relevant difference in the mind-map of the human being is recognized and identified, for example, in cases of neuropathology. The specialization of functions between our two hemispheres reveals that the matters of identity and judgment are not a mere question of computational effort.
I assure you, all these concerns are theoretically answerable, and therefore the answers are potentially transferable into algorithms. But you have not answered my question yet.
I do not have the experience. All I know is that if questions are theoretically answerable, then they are feasible in reality. I'm good at seeing flaws and imagining solutions, so feel free to point out all the challenges you can think of. For the sake of methodology: what is your immediate concern right now?
I do not pretend I am smart. Actually, I know for sure that all the people, without exception, who participated in this theoretical debate are smarter than me. What really almost breaks my heart is that philosophy is so degraded in this positivist world in which we live. Guess what? I'm a philosopher, and if I had a group of smart people like Joachim, Marika, Michael, and Giuseppe, I know for sure that within five years I could create a homo-cyborg. My statement admits two possibilities: either I'm crazy, or in fact I'm telling the truth.
I like Marika when she says that she does not want to lose time. Me too: I do not want to waste anybody's time or energy. I only need somebody to answer the question I already posted. Bear with my stupidity and try to give me a satisfying answer, because I claim that I have answers for all the concerns presented so far.
Brahim, there are several open questions, but my personal view is currently focused on quantum reasoning, to close the gap (or get closer) between computational rationality, thus the class of judgments, and irrational behaviours, and again their coherence with judgments. If the ratio between rationality and "noise" were treated through quantum math and exceeded a point of equilibrium (call it "natural", or human-like), the robot's capacity to decide could override any algorithmic control or device. As mentioned, the human boundaries that fix the equilibrium at a normal level (ethical, sustainable, efficient) are given by our gnosis (γνῶσις), a synthesis of thousands of years of culture, and that is something very difficult to define in batches.
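Setting the quantum machinery aside, the ratio between rationality and "noise" has a simple classical analogue: a softmax choice rule with a temperature parameter. Below the equilibrium temperature the agent's choices track its judgments; past it they become effectively unpredictable. This is purely an illustrative analogue, not the quantum treatment the post calls for.

```python
import math

# Classical analogue of the rationality/"noise" ratio: softmax choice with
# temperature T. Low T -> choices follow the utilities ("rational");
# high T -> choices approach uniform randomness ("irrational").
# Illustrative only; not the quantum math discussed in the thread.

def choice_probabilities(utilities, temperature):
    """Softmax distribution over options; temperature plays the role of noise."""
    weights = [math.exp(u / temperature) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

utilities = [1.0, 0.5, 0.0]  # the agent's judgments of three options

rational = choice_probabilities(utilities, temperature=0.1)
noisy = choice_probabilities(utilities, temperature=10.0)
print(f"T=0.1: best option chosen with p={rational[0]:.3f}")  # near 1.0
print(f"T=10:  best option chosen with p={noisy[0]:.3f}")     # near 1/3
```

In this analogue, the "point of equilibrium" is the temperature beyond which the judgment term no longer governs the choice, which is the sense in which behaviour escapes algorithmic control.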
The problem is inherent in the approach. Decision-making is not part of the class of judgment; both of them belong to a deeper class: self-preservation. Irrational behavior is rational after all if we link it to this overarching episteme. The robot has to be built on self-preservation as a matrix, and then, with different contextual computational rationalities, we will be able to create unpredictable rational and irrational behavior.
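The proposal above, self-preservation as the base matrix with contextual rationalities layered on top, can be sketched as a utility with a fixed survival term plus context-dependent weights: behaviour that looks irrational in one context is just the survival term dominating. All the weights below are invented for illustration.

```python
# Sketch of "self-preservation as matrix": every action is scored by a
# fixed survival term plus context-dependent terms. Behaviour that looks
# irrational (fleeing a reward) is simply the survival term dominating.
# All numbers are invented for illustration.

SURVIVAL_WEIGHT = 10.0  # the overarching, context-independent term

def score(action, context):
    survival = SURVIVAL_WEIGHT * action["safety"]             # always present
    contextual = context["reward_weight"] * action["reward"]  # varies by context
    return survival + contextual

actions = [
    {"name": "grab_reward", "safety": 0.1, "reward": 1.0},
    {"name": "retreat",     "safety": 0.9, "reward": 0.0},
]

calm = {"reward_weight": 2.0}     # context where survival dominates
greedy = {"reward_weight": 100.0} # context where the reward term dominates

best_calm = max(actions, key=lambda a: score(a, calm))["name"]
best_greedy = max(actions, key=lambda a: score(a, greedy))["name"]
print(best_calm, best_greedy)  # retreat grab_reward
```

Swapping the context swaps which behaviour looks "rational", while the fixed survival term stays in every score, which is the structure the post argues for.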