While many modern causal models no longer adhere to Laplace's demon (strict determinism), which treated error factors as merely unknown causes, they do not always address the issue of freedom and responsibility sufficiently either. It is acknowledged that the human element, as far as intervention is concerned, might involve an exogenous factor (perhaps a "transcendent cause," in Neoplatonic terms) that poses a problem for the equilibrium of an otherwise deterministic system; the models themselves might therefore seem more relevant for systems independent of human intervention, e.g. artificial intelligence. But that evokes ethical questions, especially whether the formalism of such models can entirely ignore the question of responsibility, or whether it should in fact resolve it. In more practical terms: can a machine be constructed on the basis of a causal model such that it correctly predicts and makes right moral decisions for humans?