My AI algorithms can run on cheap consumer hardware, which means they could, in principle, be distributed across a large number of consumer devices to enable massively parallel machine learning and deep learning.
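To make the idea concrete, here is a minimal sketch of the kind of data-parallel scheme this implies: each simulated "device" computes a gradient on its local shard of the data, and a coordinator averages those gradients to update a shared model. Everything here is illustrative (the model, shard layout, and names are assumptions, not the author's actual algorithms).

```python
import random

# Hypothetical sketch: data-parallel training of a one-parameter linear
# model (y = w * x) across simulated consumer devices. Each "device"
# computes a gradient on its local shard; a coordinator averages them.
random.seed(0)

TRUE_W = 3.0
data = [(x, TRUE_W * x) for x in (random.uniform(-1, 1) for _ in range(400))]

NUM_DEVICES = 4
# Round-robin split of the dataset across devices.
shards = [data[i::NUM_DEVICES] for i in range(NUM_DEVICES)]

def local_gradient(w, shard):
    # Gradient of mean squared error on this device's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

w = 0.0
lr = 0.5
for step in range(50):
    grads = [local_gradient(w, s) for s in shards]   # runs on each device
    w -= lr * sum(grads) / NUM_DEVICES               # coordinator averages

print(round(w, 3))  # converges close to TRUE_W
```

Note that the devices only ever see their own shard of the raw data, which is exactly the exposure the security question below is about.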

Obviously, this would expose researchers' data to machines owned by ordinary people, which poses a security risk.

My question is whether researchers would accept this security risk in exchange for a lower price, together with a guarantee on the quality of execution (as opposed to a guarantee on security).
