Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “AIpocalypse” on the horizon right now is an overreliance on machine learning and expert systems by humans, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.
What currently passes for “artificial intelligence,” namely technologies such as expert systems and machine learning, is excellent for creating software that can help in contexts involving pattern recognition, automated decision making, and human-to-machine conversations. Both approaches have been around for decades. And both are only as good as the source information they are built on. For that reason, it’s unlikely that AI will replace human judgment any time soon on important tasks requiring decisions more complex than “yes or no.”
Expert systems, also known as rule-based or knowledge-based systems, are programs built from explicit rules written down by human experts. The computer can then apply those same rules, but much faster and around the clock, to reach the same conclusions the experts would. Imagine asking an oncologist how she diagnoses cancer, and then programming medical software to follow those same steps. For any particular diagnosis, the oncologist can examine which rules were activated to validate that the expert system is working correctly.
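The idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the rule names, thresholds, and recommendations below are invented for the example, not real medical criteria): each rule pairs a condition with a conclusion, and the engine records which rules fired so a human expert can audit the reasoning afterward.

```python
# A toy rule-based expert system. Each rule has a name (for the audit
# trail), a condition over the patient's findings, and a conclusion.
RULES = [
    ("tumor_size_cm > 2", lambda f: f["tumor_size_cm"] > 2, "order biopsy"),
    ("marker_elevated",   lambda f: f["marker_elevated"],   "order biopsy"),
    ("family_history",    lambda f: f["family_history"],    "schedule screening"),
]

def evaluate(findings):
    """Run every rule against the findings; return the recommended
    actions plus the names of the rules that activated."""
    fired = [name for name, cond, _ in RULES if cond(findings)]
    conclusions = sorted({concl for _, cond, concl in RULES if cond(findings)})
    return conclusions, fired

patient = {"tumor_size_cm": 3.1, "marker_elevated": False, "family_history": True}
conclusions, fired = evaluate(patient)
print(conclusions)  # the actions the rules recommend
print(fired)        # which rules activated, for expert review
```

The list of fired rules is the key feature: it is what lets the oncologist check the system's reasoning, step by step, against her own.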