One could argue that artificial intelligence offers a means of finally falsifying some substantial philosophical claims — for instance, claims about the nature of the mind-body problem.

More specifically, what I'm concerned with is whether the eventual success or failure of AI, in its most ambitious form of human-level intelligence and consciousness, will serve to prove or disprove any substantial philosophical thesis.

What is at stake for philosophy in the whole debate concerning the feasibility of AI?
