This question is based on an assertion held by some "strong AI" researchers: that "value" cannot exist in the universe without a value-assigning agent perceiving the universe and assigning value to things. This line of thinking concludes that creating a "value-assigning" entity that is permanently self-sustaining is the ultimate moral imperative, because without it, the universe risks being devoid of value. All scientific pursuits are undergirded by human, emotional motivations involving human values. I'm curious to know your thoughts on whether our human values, taken to the extreme, lead to the creation of the strong AI described above, or whether they lead to something else. If you answer, please do so under the hypothetical context of being a non-theist. Being a theist strips the question of its relevance, because my goal here is to understand the ethico-philosophical motivations of scientists who are non-theists.