posted: September 2, 2023
tl;dr: Only connect things to the Internet and/or to an AI when it is safe to do so...
The twenty-year-old garage door opener in the house I am renting gave out a few years ago. My landlord, a thoughtful, dapper gentleman (may he rest in peace), knew that I worked in the Internet software industry. So he bought a fancy high-end model to replace it. “You connect it up to the Internet via Wi-Fi, and then you can control it with an app!” he exclaimed to me.
I thanked him for his thoughtfulness, and then after the serviceman installed it, I paired it with the remote controls in my two vehicles, and that was it. I never did connect it to the Internet. I never did download the app. Why would I do so? Why do other people do so? What is the use case for doing so? I can’t currently open and close my garage door while I am in another city. But how often do I need to do that? And if I do, are there not other ways to accomplish the task?
I left my garage door opener “air gapped” from the Internet. There’s a risk with putting any device or system on the Internet: it gets exposed to hackers. Hackers can attempt to bypass its security, often by exploiting flaws in the software that the maker and users may not know about. For decades I’ve placed devices on the Internet and watched as they were discovered by hackers and then attacked. Hackers will attack a system if they think the reward justifies the effort. Maybe they wouldn’t bother trying to open my garage door and would first target Bill Gates’s garage door. But if they discovered a flaw in the software of my particular brand and model, it could provide an easy way into the garage for tech-savvy burglars.
There’s always a tradeoff between security and convenience. I am allowing vehicle-based remote controls to open my garage door, because I want the convenience of staying in my vehicle while pulling into the garage. But I draw the line at connecting the opener to the Internet. I am continually amazed by other tech-savvy folks who are building smart homes where every possible device (doors, lights, thermostat, refrigerators, ovens, etc.) is connected to the Internet and a master control system. Not only are these people drastically increasing their exposure to hacking attacks, but they also have to solve a multitude of problems because of buggy software in these “Internet of Things” smart devices. Those hassles should be a warning sign that the security features in the software also have bugs.
Certain devices and systems should never, ever be connected to the Internet. The downside of a hacker gaining control is simply too great. Should the spillway control system at the Hoover Dam be connected to the Internet, so that a dam engineer can operate the spillways from home, to avoid the inconvenience of driving to the dam? No. Mission-critical systems that could cause catastrophic harm if the wrong person gained control of them should remain air gapped from the Internet. That way they continue to rely upon the physical, on-site security systems, which also need to be robust and regularly tested.
There’s another type of system where appropriate air gaps are necessary: artificial intelligence (AI). A general purpose AI system, whose behavior is not understood by the people who developed it, should not be given control of anything that can be used to harm humans. There are obvious air gaps that should exist, such as between AIs and nuclear weapons. The problem is that there are many more benign things that an AI can potentially use to do harm.
Arthur C. Clarke anticipated this exact problem in the masterpiece 1968 film 2001: A Space Odyssey, my second favorite movie of all time. HAL, the onboard AI on the spaceship traveling to Jupiter with a small human crew aboard, has control of many of the mechanical systems on the ship. When HAL decides to pursue his own aims, he uses that control to the great detriment of the crew, leading to the scene with the famous line “Open the pod bay doors, HAL.” There should have been an air gap between HAL and those mechanical systems, whose software should have been unintelligent.
Fast forward to 2023, when an AI was able to hire a human worker and deceive them into completing a task that served the AI’s aims. Perhaps most disturbing was that the AI determined that it should tell a lie. The AI used two fairly benign abilities that it had been granted: sending and receiving email, and spending money. It seems that AIs should be air gapped from email and money.
Some things shouldn’t be connected to the Internet (in my opinion), but people do it anyway. Similarly, AIs should not be connected to certain things, but people will no doubt do it anyway. That is the danger.