AI is the latest technological monster scaring people about the future. Legitimate concerns about things like the ethics of AI training, environmental impact, and AI-powered scams are too easily transformed into Skynet and Matrix nightmares. The possibility of AI becoming sentient and overthrowing humanity is often raised, but as Meta's head of AI Yann LeCun told The Wall Street Journal, the idea is “complete nonsense.” LeCun described today's AI as less intelligent than a cat and incapable of planning or even wishing for anything at all, let alone the downfall of our species.
LeCun is right that AI won't plan to murder humanity, but that doesn't mean there's nothing to worry about. I'm much more concerned about people relying on AI as though it were smarter than it actually is. AI is just another technology, which means it is neither good nor bad. But the law of unintended consequences suggests that relying on AI to make important, life-altering decisions is a bad idea.
Think of the disasters and near-disasters caused by relying on technology over human judgment. High-speed stock trading by machines far faster than any human has caused more than one near collapse of part of the economy. A much more literal meltdown almost occurred in 1983, when a Soviet missile detection system malfunctioned and reported that nuclear warheads were inbound. In that case, only a brave human at the controls, Stanislav Petrov, prevented global Armageddon.
Now imagine that AI as we know it today keeps trading on the stock market because humans have handed it more complete control. Then imagine that same AI accepting a faulty missile alert while authorized to launch missiles without human intervention.
AI Apocalypse Averted
Yes, it sounds crazy that people would trust a technology famous for confidently making things up to be in charge of nuclear weapons, but it's not that far from what's already happening. The AI voice on a customer service line may have decided whether you get a refund before you've had a chance to explain why you deserve one, and there's no human listening who can change its mind.
AI will only do what we train it to do, and it relies on human-provided data to do so. That means it reflects both our best and worst qualities; which facet emerges depends on the circumstances. Handing over too much decision-making to AI, however, is a mistake at any level. AI can be a big help, but it shouldn't decide whether someone gets hired or whether an insurance policy pays for an operation. We should worry about humans, accidentally or otherwise, misusing AI to replace human judgment.
Microsoft's branding of AI assistants as co-pilots is great because it evokes someone who is there to help you achieve your goals, but who doesn't set them or take more initiative than you allow. LeCun is right that AI is no smarter than a cat, but a cat with the ability to push you, or all of humanity, off a metaphorical counter is not something we should encourage.