Should there be ethical standards and regulations governing the algorithms used in Artificial Intelligence and Machine Learning?

Without legal standards, societally accepted boundaries, and a commitment to benefiting humanity, are we at risk of creating a monster: self-learning machines with no boundaries, no legal consequences, and no fear of penalties?

Is fear required to create a societal conscience for moral behavior?

Is there such a thing as moral behavior?

Is there a limit to machine learning beyond which the machine becomes a destructive force?

Questions to ponder.

— Kenneth Loebel