I’ll tell you about something that exists right now that no group of the smartest people in the world can figure out.
Neural network algorithms are a type of artificial intelligence you interact with every day. Google uses them to rank your search results. YouTube uses them to recommend videos. Self-driving cars use insanely complex neural networks to recognize whether a human is in their path, so they can stop in time. Quora even uses them to suggest the 10 most interesting questions to you in your daily Digest. Neural networks are everywhere, and they're important.
However, nobody knows how they work.
Imagine a complex neural network into which a programmer feeds a billion pictures of humans, then a billion pictures of different cars (with the programmer telling the network, for each picture, whether it shows a human or a car).
If this example sounds familiar, that's because it probably is. Has Google recently asked you to identify pictures of road signs, cars, or streets for its reCAPTCHA, to make sure you're not a bot? If so, you've become the programmer telling the algorithm the answer for each image. Coincidentally, Google is developing its own self-driving car right now, and you've become free labor helping to train its algorithms. Fun, huh?
After being fed billions of pictures, the algorithm gets pretty good at predicting whether any new image shows a car or a human. But we have no idea what happens to the data after it enters the "black box".
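To make the "programmer explains the answer" step concrete, here is a minimal sketch of supervised learning in plain Python. The "images" are made-up two-number features and the learning rule is a simple perceptron update, not a real neural network training procedure; everything here (the data, the features, the 0/1 labels for human/car) is hypothetical, just to show the loop shape.

```python
import random

# Toy stand-in for the "car vs. human" setup: each "image" is reduced to
# two made-up numeric features, with label 1 (car) or 0 (human).
random.seed(0)
data = [([random.random(), random.random()], 1) for _ in range(50)] + \
       [([-random.random(), -random.random()], 0) for _ in range(50)]

w, b = [0.0, 0.0], 0.0            # the model's adjustable knobs
for epoch in range(20):           # show each labelled example many times
    for x, label in data:
        pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
        err = label - pred        # the "programmer explaining the answer" step
        w[0] += 0.1 * err * x[0]  # nudge the knobs toward the right answer
        w[1] += 0.1 * err * x[1]
        b    += 0.1 * err

correct = sum(1 for x, label in data
              if (1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) == label)
print(correct, "of", len(data), "correct")
```

With two weights you can still read off what the model learned. Scale this up to millions of weights across many layers and that readability is exactly what disappears.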
This is a representation of a neural network (it's called a neural network because it vaguely resembles the brain's neuron structure, another piece of biological 'technology' whose inner workings we don't understand).
The circles represent "nodes" and the lines follow the data as it is fed in, stretched, contorted, bent, and finally output to the user. While we can figure out what a single node does, and what small clusters do, we have absolutely no idea how the data goes in one end and comes out the other. Like, we just don't.
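A single node really is simple enough to understand completely. Here's a minimal sketch of one "circle" from the diagram: it takes a weighted sum of its inputs and squashes the result through an activation function (a sigmoid here; the weights and inputs are made-up numbers).

```python
import math

def node(inputs, weights, bias):
    """One node: a weighted sum of its inputs plus a bias,
    squashed through a sigmoid so the output lands in (0, 1)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))

# We can fully explain this one node's answer: 0.8*1.0 - 0.4*0.5 + 0.1 = 0.7,
# then sigmoid(0.7) ≈ 0.67.
print(node([1.0, 0.5], [0.8, -0.4], 0.1))
```

The mystery isn't any one of these circles; it's what millions of them, wired together in layers, collectively compute.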
Now, there's a catch. If you want high interpretability (i.e., you MUST know how the data is manipulated from start to finish), you'll have to stick to a simpler, less accurate model. It might correctly distinguish a car from a human only 60% of the time, but at least you'll know how it works.
On the other hand, you can have a super complex, super accurate (>99%) neural network, but you must sacrifice the knowledge of how your data is being handled. MIT professor Dimitris Bertsimas gave an interesting but lengthy talk on this if you care to take a look.
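The interpretable end of that trade-off can be as extreme as a single human-readable rule. This sketch is hypothetical (the brightness feature and 0.5 threshold are invented for illustration), but it shows what "knowing how it works from start to finish" actually buys you:

```python
def tiny_model(image_brightness):
    """A maximally interpretable 'classifier': one readable rule.
    We can state exactly why it answers: brighter than 0.5 means 'car'."""
    return "car" if image_brightness > 0.5 else "human"

print(tiny_model(0.9))  # 'car'  -- and we can say precisely why
print(tiny_model(0.1))  # 'human'
```

You can audit every decision this model makes, and it will be wrong a lot. A deep network flips that: far fewer mistakes, no audit trail a human can follow.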
So, to answer your question: it's already happened. We already have technology we don't understand, yet we use it constantly in almost every industry on earth (that should make you a little worried). And it's only being used more and more. Hurrah for progress, I guess?