In the pursuit of highly efficient and capable AI systems, if we develop an algorithm that consistently produces results we cannot interpret, should we continue to deploy and rely on it, even though its decision-making process remains a "black box" to human comprehension?