
Pause Giant AI Experiments?

The Future of Life Institute published an open letter calling for a 6-month moratorium on “training AI systems more powerful than GPT-4”.

Why?

The argument, in general, is that as AI systems become more and more competitive with humans on cognitive tasks that were previously out of their reach, the chance that human survival is at stake is large enough that we should take this competition seriously.

There are concerns that misinformation could become so pervasive that the truth could never be found, which would ultimately doom us as a species, either by making us extinct or by leaving us no different from the various animals we have domesticated.

Doom and gloom scenarios like these, where AI / robot overlords subjugate or eliminate humanity altogether, have been bandied about for at least a century. From the Terminator series to Battlestar Galactica to the Star Trekkian Borg (well, they were cyborgs, which is a little different if we want to get into the sci-fi weeds) to the Replicators of Stargate to Futurama, which imagines an Earth around 3000 AD where robots have their own planet, to the various comic books and novels featuring some form of “Supreme” or “Superior” non-organic intelligence (Transformers, Brainiac, “I, Robot”), they all in some way question humanity’s place at the top of the intellectual and food chains.

[Image caption: AI law enforcement is a bad idea. This image was AI generated.]

These scenarios can come true, for sure. But a moratorium on innovation is not what is going to prevent them.

We need to come to grips with several psychological elements underpinning the concerns around our imminent doom from AI.

  1. Misinformation: We accept and allow as a matter of course that humans can and will engage in spreading misinformation, AND that this behavior (broadly, deception) should remain exclusively in the human domain. We simply cannot tolerate the idea of another entity, biological or otherwise, out-deceiving us. If we truly cared about misinformation, we would enact stronger rules and regulations on our own, human-run news and media outlets.
  2. To err is human: We understand that we are not perfect and that mistakes happen. But we have a paradoxical relationship with mistakes made by AI. If an AI makes mistakes similar to a human’s, we tend not to see much value in having the AI other than perhaps efficiency in getting through tasks, especially if its error rate is orders of magnitude lower than a human’s. Even then, that speed is mitigated by the fact that a human still has to fix the errors, since the AI generally can’t self-correct. On the other hand, we have an absolute aversion to “AI mistakes” — that is, mistakes only an AI would make and that, statistically speaking, no human would ever make. The ask of AI, then, is “don’t make the mistakes that humans would make, and don’t make mistakes that humans would never make” — an extremely high bar to meet.
  3. We fear what we don’t / can’t understand: Chess is a great example of AI dominance. It wasn’t more than 30 years ago that an AI chess program (IBM’s Deep Blue) needed to evaluate roughly 200 million positions per second to (controversially) defeat Kasparov, the reigning world chess champion. Now, elite chess players don’t stand a chance against the top AI chess engines. AI systems have become so complex that sometimes the reasons for their decisions are beyond us. We do not like the fact that there is a system / entity sharing space with us that makes better choices through reasoning we don’t understand. We’ve always been in control of the “why”, but with AI this is being challenged.

What’s going to prevent the sunsetting of the human race? It’s certainly not going to be a moratorium on innovation. Innovation had better remain in the human domain. What will prevent the human race from dying out comes down to what is turned over to the AI.

Your “Smart” coffee maker will order more coffee cups on its own so that you’ll never run out. Great! What’s the downside? It orders 50,000 Keurig cups because one of its predictive model’s parameters became unbounded. Oops. But not the end of the world, just maybe the end of your financial world.
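To make that failure mode concrete, here is a minimal, hypothetical sketch (the reorder rule, the trend parameter, and the cap are all invented for illustration, not how any real appliance works): a naive extrapolation with no upper bound happily produces the 50,000-cup order, while a simple sanity cap turns it back into something human-scale.

```python
# Hypothetical sketch: a toy reorder rule driven by an unbounded trend estimate,
# versus the same rule with a hard sanity cap.

def reorder_quantity(trend_cups_per_week, weeks_ahead, max_order=None):
    # Naive "predictive model": straight extrapolation, no upper bound.
    qty = int(round(trend_cups_per_week * weeks_ahead))
    if max_order is not None:
        qty = min(qty, max_order)  # the guardrail the runaway version lacks
    return max(qty, 0)

# A corrupted trend parameter (1,000 cups per week) with no bound: the "Oops" order.
print(reorder_quantity(1000.0, 52))                # 52000 cups
# The same bad parameter, but with a human-scale cap on any single order.
print(reorder_quantity(1000.0, 52, max_order=60))  # 60 cups
```

The point isn’t the specific cap; it’s that a bound like this has to be put there deliberately, because the model alone won’t supply it.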

How about autonomous weapons? This is a stupendously bad idea. This is one of those cases where AI needs to be perfect: “yeah, if civilization-ending nukes are accidentally launched, we’d prefer that humans had done it” (AI should not make a mistake that a human would make) or “what in the holy hell did the AI do?! It launched nukes! Why?? We have had global peace for 300 years [AI Narrator: and so human civilization came to an ignominious end in the year 2550, caused by an integer overflow in one of our subroutines. Oops.]” (AI should not make a mistake that a human would never make).
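As an aside, that overflow joke gestures at a real class of machine-only mistakes. Here is a minimal sketch, with entirely invented variable names and thresholds, of how silent 32-bit wraparound can flip a decision in a way no human reading the numbers would:

```python
import ctypes

# Hypothetical sketch: a 32-bit millisecond timer in some subroutine silently wraps
# around, so "time since the last confirmed threat" suddenly looks negative and a
# threshold check flips. ctypes does no overflow checking, so the wrap is silent.

ms_since_last_threat = ctypes.c_int32(2_147_483_647)  # about 24.8 days, the 32-bit maximum
ms_since_last_threat.value += 1                       # wraps silently to -2,147,483,648

if ms_since_last_threat.value < 60_000:               # "was there a threat in the last minute?"
    print("ALERT: recent threat detected")            # triggered purely by the overflow
else:
    print("all clear")
```

A human glancing at a counter sitting at its maximum would never conclude that a threat just happened; fixed-width arithmetic will, without complaint.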

Tightly regulate what systems can and should be run autonomously (and the existence of a “Kill Switch Engineer” role doesn’t mean the AI is somehow under control). Smart coffee maker? Fine, whatevs. Caveat emptor — you buy 50,000 Keurig cups, that’s on you. Self-driving cars? Caveat venditor. AI law enforcement? Dicey. Some may welcome it because the current law enforcement regime may be disproportionately punitive to them. AI surgeon? Caveat venditor.

In a nutshell, the more risk and regulatory burden of proof placed on the sellers of AI systems, the more we can mitigate un-understandable risk-taking. Whisper: AI systems are inherently probabilistic / statistical, and unless it is hard-coded in many places, the AI will make the decision that is the highest-probability bet based on the information it has. And the AI does not have all the information.
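To make that whisper concrete, here is a minimal sketch of the “highest-probability bet” behavior, with made-up action names and probabilities, assuming the system simply picks the argmax of its estimated probabilities and has no notion of what information it is missing:

```python
# Hypothetical sketch: always take the action with the highest estimated probability,
# using only the information the system happens to have.

def choose_action(estimated_probs):
    # argmax over the system's (possibly misinformed) probability estimates
    return max(estimated_probs, key=estimated_probs.get)

# Illustrative numbers only, estimated from incomplete information.
beliefs = {
    "reorder_coffee": 0.55,
    "do_nothing": 0.30,
    "ask_a_human": 0.15,
}

print(choose_action(beliefs))  # "reorder_coffee": the best bet given what it knows,
                               # which can still be badly wrong if key facts are missing
```

The decision is perfectly rational relative to the estimates; the risk lives in the gap between those estimates and reality, which is exactly where the seller’s burden of proof belongs.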