Assume that humans will be able to build a superintelligent machine that surpasses every aspect of human intelligence, reaching the point of singularity at which this superintelligent machine recursively self-improves in an uncontrollable and irreversible way that might pose a threat to human civilization.

Who will hold responsibility for the consequences of such a threat to human civilization? Will it be the humans who contributed to the development of this machine's intelligence until it underwent the state described as an intelligence explosion? The answer depends on whether the machine is conscious of its actions in the post-singularity era.

In the case where the machine is conscious of the impact of its actions, humans cannot be held accountable for the decisions it makes knowingly. On the other hand, if the machine is unconscious of such consequences, then it is merely a powerful tool developed by humans, who would then be responsible for the impact of its actions.

This leads to a situation where, for the same action taken by humans, there are two valid scenarios: in the first, they are responsible for the consequences of that action, and in the second, they are free from responsibility for those consequences. This situation poses a challenge to the theory of technological singularity.
