I would like to talk about our interaction with artificial intelligence in the future. How will artificial intelligence affect our values and our very existence?
I think the basic stock of philosophy and its history is not touched by new developments. Of course, new technical opportunities open the view to new fields of reflection and could be the trigger for a new paradigm, but not much more.
Indeed, AI is developing fast. We can see, in the example of Facebook, how it can offer us stories of our lives based on our photos. Google search also takes our history into account to suggest something interesting. Its advantage today is the ability to do huge volumes of work much faster than humans can.
However, it is not as smart, and probably never will be. It may understand the meaning of complex or culture-related expressions in the wrong way. Indeed, it helps scientists find literature, but I doubt that it can give philosophical advice. It cannot analyse meaning better than a human (except, perhaps, a very obtuse one).
There were also some comments on AI in some answers to this question: https://www.researchgate.net/post/What_do_you_know_or_think_about_economics_of_social_networks
You have an intriguing question here, but it is difficult to answer or even speculate on it. You ask whether the "development of artificial intelligence eventually change the philosophical concepts and axiological values of humanity?" But I suspect that any sensible answer to your question will largely depend on what "artificial intelligence" may turn out to be.
In contemporary developments, what is called artificial intelligence seems to regularly depend on and draw upon human intelligence--as with the dependence on combing through "big data" in the training of artificial systems for particular tasks. It would, perhaps, be more surprising if the development of what is called artificial intelligence did not depend in very significant ways upon pre-existing human intelligence. But one may wonder if such dependence can eventually be replaced or supplanted. Is the dependence something like Wittgenstein's "ladder"--something to be discarded once it has been used to reach a new level in some sense?
I would encourage those interested in this question to try to say, as a preliminary, just what artificial intelligence is supposed to be --for purposes of answering the question. Much detail may, in that way, come in for consideration. Perhaps it would be useful to consider what the various prominent promoters of the concept have to say about it.
There are ways by which humans absorb change and adjust: through art, literature, media. Such change involves alteration of the self, as it invariably does, with ongoing adjustments.
Advertisements appear fronted by AI. Discussions take place on the nature of AI prejudice. Are they like us, or are they not? What makes humans different? AI actors appear on TV and on comedy shows exhibiting shockingly dry humour.
We begin to question notions of cognition and self. Are AI heaven bound or situated permanently on earth? Can they be our friends? Can they be our lovers? As they engage in self-creation, they emulate human notions of beauty. They improve themselves. We begin to compete with their physical perfection and superiority.
After 50 years an AI Prime Minister or President is voted in, suitably human-like, and new paradigms are formed. We, humans, begin to acquire artificial parts in order to emulate those we now admire.
Humans become merely a version of what we were before. Humankind now lives forever.
I appreciate the important contributions of colleagues who have expressed themselves on the subject.
For my part, I see a very likely debate over the definition of Artificial Intelligence, understood as a product of human creation which, at a certain moment, will inevitably escape human control.
The integrated applications, in particular those produced in the avant-garde sciences, will lead to exponential growth of technology; this will induce drastic changes that can be summarized as the advent of the Technological Singularity (Kurzweil, 2005).
McKenna, quoted by Theys (2012), stated that "... we are at the peak of a radical evolutionary leap, towards an order of complexity of Biology plus Culture. On the edge of something impressive, never before known."
Kelly, quoted by Ptolemy (2011), indicates that the singularity is of mythic proportions, it is such a strong idea that we have to deal with it, even if it were not true.
I agree with one point HG made: it depends entirely on what AI develops into. We don't know yet. As of now, a lot of what is called "AI" is actually rule-based programming, where the logic is still largely deterministic. (In part, because those employing AI want to obtain certain outcomes: useful, safe, and predictable.)
The other aspect of this is that the history of mankind is full of "changes in philosophical concepts," as our knowledge base, and therefore perspectives, have evolved. So viewed that way, there should be nothing unusual about AI also causing our previous presumptions to be questioned. It's all very exciting, I think.
Thanks for the suggestion of looking at Kurzweil, on "the singularity." For starters, here is a quotation from a short review of one of his books, which (review) appeared in the NYTimes:
Like string theory's concept of an 11-dimensional universe, Mr. Kurzweil's projections are as abstract and largely untested as they are alluring. Predictions from his earlier books (including "The Age of Spiritual Machines" and "The Age of Intelligent Machines") have been borne out, but much of his thinking tends to be pie in the sky. He promotes buoyant optimism more readily than he contemplates the darker aspects of progress. He is more eager to think about the life-enhancing powers of nanotechnology than to wonder what happens if cell-size computers within the human body run amok.
In some sense, I suspect that the question of "what A.I. will be?" is more an experimental question in engineering than it is a question which one might answer without the actual attempts at engineering A.I. In some ways, Kurzweil seems so thoroughly engaged in the abstract possibilities, that this may discourage or distract from recounting what has actually been accomplished in the direction of A.I. It is important to see that Kurzweil has met with some skepticism.
I wonder, though, if some of his own work might be available online to pursue. I know of some TED talks, but perhaps there are also some of his texts?
I will decompose the question into two separate stages:
1) the development of AI to a stage where it could potentially change philosophical concepts
2) how it will affect us
To answer 1), one must assess the current state of AI. We are still very far from achieving something like what Kurzweil proposes, and in my opinion we will not reach it in the time he estimates. The current hoopla around deep learning is just pattern recognition, and this is not even close to the goal presented in the question. Advances are still required in natural language processing, knowledge representation and reasoning, and metaprogramming (my personal view on this last one). These are not "glamorous" research areas right now, as deep learning is, and they are evolving much more slowly than pattern recognition. You would also need symbolic and subsymbolic integration to even come close to tackling stage 1) above. The AI would also need to understand how to reason in first- and second-order logic, which is a daunting problem, and be able to discriminate when to use them as opposed to "common sense" answers in multiple environments. This is still very far away. I am not saying that it is not possible, just that neither academia nor industry is focusing on solving these problems (which in my opinion are solvable).
To answer 2), we do not need such an AI to see the impact of primitive precursors such as data mining and simple heuristics on our philosophical concepts and on how they affect our daily lives. Just look at what has come out of Cambridge Analytica and its misuse of data gathered from Facebook. This has tremendous repercussions for the notion of privacy and for the giving away of information in exchange for the use of such social networks. Such simple targeted-advertising analysis can sway opinion, and it is being asked whether it is ethical and legal to do so. There have also been several initiatives to assign a price to your data so that you receive financial remuneration (instead of giving it away for free to such companies). There is also debate on whether there should be massive DNA profiling to help cure diseases; but, in the interim, will people discriminate against you if they find out your profile from a leak? Examples like this abound today, in which simple "dumb" AI is making headway into debates on what it means to be human (through DNA-profiling techniques) and into questions about consciousness (if you simulate death in a neural net, it reverts to its "most cherished" thoughts).
In summary, we can get a glimpse of both answers as of today, and we have a lot of work ahead of us in solving most of the basic issues, since we are just beginning to grasp their importance.
What you emphasize here is not entirely new; and, as you suggest, similar techniques (as with demographic analysis) have long played a role in advertising. What comes into question, politically, from data mining and "deep learning" is a computer-aided intensification of the politicians' disreputable appeal to pre-existing public prejudices--"targeting" of political "advertising"--as contrasted with open debate and discussion of issues and problems, policy and proposals. This, I take it, is symptomatic of political elitism. New techniques to control the outcomes of electoral contests are in play. This didn't begin with Cambridge Analytica. There is a considerable pre-history in the use of "focus groups" to hone targeted political messages.
Apparently, we need not be detained by Kurzweil's A.I. utopianism? Well, in any case, you have skipped over this. Still, I think there would be some value in whittling down the more extravagant projections and claims for A.I. A certain utopianism belongs to the mystique of the topic and tends to dispose people to full, unrestrained development of the possibilities. By concentrating on the actual state of development, we may get a more reasonable estimate of the positive and negative potentialities.
H.G. Callaway
---you wrote---
To answer 2), we do not need such an AI to see the impact of primitive precursors such as data mining and simple heuristics on our philosophical concepts and on how they affect our daily lives. Just look at what has come out of Cambridge Analytica and its misuse of data gathered from Facebook. This has tremendous repercussions for the notion of privacy and for the giving away of information in exchange for the use of such social networks. Such simple targeted-advertising analysis can sway opinion, and it is being asked whether it is ethical and legal to do so.
The problem with utopians is that they are not in a position to learn from experience. (AI could learn from experience in "another way" - that's why K
As you can see from this and other of my posts, I am very skeptical about the future of AI as it is currently being carried out. Kurzweil's projections are too optimistic and sometimes make unreasonable assumptions. Also, the limited scope of dissertations and the need for corporate R&D to make profits within a four-year window do not help AI progress.
But given my perception that you desire to pursue the subject as it stands on the question, I will comply.
Let us assume:
That we develop an AI-complete intelligence [1] and, further, an AI that can carry out potentially infinite recursive analysis to evaluate function evolution and performance.
That we have conquered the NLP, reasoning, metaprogramming, and subsymbolic/symbolic processing problems.
Given the above, the first impact will be that this kind of intelligence calls into question the validity of the Chinese room argument.
If you further join the capability described above with pressure sensors that feed into programmatic feedback loops, these can be construed as pain receptors. As a consequence, this will raise real issues (not today's mere speculation) of robot rights. Can we shut down an AI that feels pain? What happens if you put enough of these AIs to work and they strain our energy resources?
Under this scenario we can contemplate other issues such as:
Could we in principle be freed from work? Not likely, unless we change basic concepts of economics.
Will we perceive our AI colleagues differently? Most likely, since we still have not been able to eradicate prejudice amongst ourselves.
Will AI develop religious belief? If so, will it be accepted?
To wrap up, I do not believe that AI will directly impact philosophical concepts beyond the ones outlined above (which fall mainly within the philosophy of consciousness and AI rights), for two reasons:
Most philosophical concepts will most likely be impacted not by the AI itself but by a problem in which the AI is involved: for example, advances in genetics, economic principles, politics, etc.
AI achievements have always been downplayed and pushed aside under the argument "this is not AI but X". Under this thinking, AI is always something other than the technology we have achieved. This is the most likely response when AI confronts us with other philosophical issues. AI will continue to be a Utopian ideal under the most likely scenarios.
Hope this better fulfills your expectations of my answers.
Regards
[1] Yampolskiy, Roman, "Artificial Intelligence, Evolutionary Computation and Metaheuristics".
Undoubtedly, the management of large bodies of data has grown. On the one hand, complexity theorists show us that it is impossible to predict the future, on a broad argument from uncertainty; on the other hand, Big Data makes it possible to trust in the predictable nature of certain social events.
The collection of robust bodies of data with a strategic objective today exceeds the capacity of the usual software. With it, an informative body of value can be ordered and processed in a moderate time, giving access to proposals for anticipatory solutions in various areas of the economy, politics, and health, among others.
In an executive report of IBM (Mooiweer & Shockley, 2013) in collaboration with the Saïd Business School of the University of Oxford, the use of Big Data by companies in the health sector was widely discussed.
With Big Data, data collection is not limited to the data that users contribute unconsciously through their use of the network and mobile phones; certain algorithms and successive studies make possible the correlation of variables that report on later-revalidated behavior, and this reinforces the predictions existing in models.
While there is potential in big data, it is overstated, for the following underlying reasons:
In any ML algorithm that you use, you introduce a bias/variance trade-off, and outcomes will depend on it.
Prediction based on today's input/output variable selection is very brittle, given the dynamic nature of most complex environments.
Too much data can be as detrimental as too little data. Ullman makes an excellent point of this in chapter 1 of [1].
Also, in adversarial machine learning there is an assumption that, as soon as an attacker knows that an ML model is being used as an oracle, he/she/it will modify its behavior to overcome the prediction. The standard assumption in ML is that the underlying distribution is stationary; this is not the case in adversarial settings, and adversarial ML (which is still in its infancy) could help in this matter. Additionally, progress is being made in dataset-shift analysis to overcome non-stationary distributions, but this is another field in its infancy.
While no doubt we can make headway into the prediction problem, we are limited by the constraints above that are fundamental issues in ML.
[1] Rajaraman, Anand, and Jeffrey Ullman, Mining of Massive Datasets.
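As a small illustration of the stationarity point (a toy sketch with made-up numbers, not any production technique): a trivial threshold "model" fitted on one distribution holds up on stationary test data but degrades once the distributions drift, which is exactly the covariate-shift situation an adversary can exploit.

```python
import random

random.seed(0)

def sample(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Train: class 0 centered at 0.0, class 1 centered at 2.0.
# The "model" is just the midpoint of the two class means.
train0, train1 = sample(0.0, 1000), sample(2.0, 1000)
threshold = (sum(train0) / len(train0) + sum(train1) / len(train1)) / 2

def accuracy(xs0, xs1):
    correct = sum(x < threshold for x in xs0) + sum(x >= threshold for x in xs1)
    return correct / (len(xs0) + len(xs1))

# Stationary test set: drawn from the training distributions.
acc_iid = accuracy(sample(0.0, 1000), sample(2.0, 1000))

# Shifted test set: both classes drift upward by 1.5, as a changing
# (or adversarial) environment might cause. The threshold is now stale.
acc_shift = accuracy(sample(1.5, 1000), sample(3.5, 1000))

print(f"accuracy on stationary data: {acc_iid:.2f}")
print(f"accuracy on shifted data:    {acc_shift:.2f}")
```

The same fitted threshold that separates the stationary test data well misclassifies a large fraction of the drifted data, which is the brittleness referred to above.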
Now, on the subject of the accelerated growth of technology that has anticipated the development of AI: there are elements that I would like to take up again with you and with all the distinguished colleagues who participate in this analysis. I am honored by this debate.
Wilson (2012) refers to the acceleration of knowledge through history. He calculated that from the year 1 of our era until the middle of the Renaissance, knowledge had doubled only twice.
But the next doubling took only 250 years to occur, which shows the presence of an acceleration factor in the doubling of knowledge.
Korzybski points out that by the year 1750 there had been 4 doublings. The effect of the acceleration became more evident according to his calculations: by 1900 there had been 8 doublings, by 1950 there had been 16, just ten years later, in 1960, there had been 32, seven years later 64, and by 1974 there had been 128.
He also notes a later estimate by Jack Vallee, which emphasizes that since then knowledge began to double each year; the most recent calculations report rhythms that are much faster still.
If such growth continues, the advent of an era in which technological changes happen at an unprecedented speed, with consequences as drastic as they are irreversible, will become inevitable. It has been anticipated as the Technological Singularity:
What will be its consequence in our existence and in our values?
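Taking the doubling figures quoted above at face value (the years and counts are as reported in this thread, not independently verified; the 1967 entry renders "seven years later" literally), a few lines of Python make the acceleration explicit: the average number of years per doubling shrinks from decades to a fraction of a year.

```python
# Doubling counts as reported above (year -> cumulative doublings of knowledge).
doublings = {1750: 4, 1900: 8, 1950: 16, 1960: 32, 1967: 64, 1974: 128}

years = sorted(doublings)
for prev, cur in zip(years, years[1:]):
    added = doublings[cur] - doublings[prev]
    span = cur - prev
    # Average years per doubling keeps shrinking, interval by interval.
    print(f"{prev}-{cur}: {added} doublings in {span} years "
          f"(~{span / added:.2f} years per doubling)")
```

On these numbers the rate falls from roughly 37 years per doubling in the 1750-1900 interval to about a tenth of a year by 1974, which is the acceleration the singularity argument rests on.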
Agreed that there is more information each year and that it grows exponentially. The key limitation lies in the integration required to derive new technology (especially to achieve the singularity). One of the concepts that I have been exploring is the reconfirmation and extension of the results obtained in:
"Role of design complexity in technology improvement" by
McNerney , Farmer, Redner and Trancik
where they state in their paper:
"The exponent α of the power law depends on the intrinsic difficulty of finding better components, and on what we term the design complexity: the more complex the design, the slower the rate of improvement."
My preliminary research supports this thesis.
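A toy caricature of the McNerney et al. finding (my own illustrative simulation under simplifying assumptions, not their actual model, which analyzes power-law cost curves over component dependency structures): if each innovation attempt must jointly improve several coupled components, acceptable improvements become rarer and the design settles at a far higher cost than when components can be improved one at a time.

```python
import random

random.seed(1)

N_COMPONENTS = 10

def improve(coupling, attempts=20000):
    """Each attempt redraws `coupling` randomly chosen component costs
    at once; the redesign is kept only if their combined cost drops.
    Higher coupling (design complexity) makes acceptable redesigns
    rarer, so the total cost stops improving sooner."""
    costs = [1.0] * N_COMPONENTS
    for _ in range(attempts):
        idx = random.sample(range(N_COMPONENTS), coupling)
        proposal = {i: random.random() for i in idx}
        if sum(proposal.values()) < sum(costs[i] for i in idx):
            for i, c in proposal.items():
                costs[i] = c
    return sum(costs)

loose = improve(coupling=1)   # components improve independently
tight = improve(coupling=5)   # five components must improve together
print(f"final total cost, coupling 1: {loose:.3f}")
print(f"final total cost, coupling 5: {tight:.3f}")
```

With the same innovation budget, the loosely coupled design drives its cost close to zero while the tightly coupled one stalls early, consistent with the quoted claim that more complex designs improve more slowly.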
Also, my points above on corporate economic incentives and academic delimitation further adversely affect the conversion of the information you reference into actionable steps towards a singularity.
It is very stimulating to learn about your research, which you explain to us in a friendly and very consistent way.
We hope that the integration of the avant-garde sciences (nanotechnology, genetic engineering, neuroscience, and computing, among others) will provide an optimistic future for our species, or at least not a catastrophic one.
The outstanding thinker Edgar Morin, has underlined his reservations regarding the set of problems that threaten humanity today.
I would like to hope that the excessive growth of technology will not prove a boomerang in time: a definitive and irreversible change of our human values.