AI Is Becoming More Like People: And Some People Are Insane
Most new technology is beneficial. But everything so far has an OFF switch
The public is shocked to learn a dramatic new technology is more advanced than anyone expected. Only a few specialists understand what makes the strange new technology work. Government doesn’t know how to regulate the development, or even if it can be regulated. So much could go wrong that there are warnings human civilization could end.
I am not talking about Artificial General Intelligence (AGI). I am talking about in-vitro fertilization (IVF).
Forty-five years ago, when the world learned a child had been conceived in a petri dish – a test-tube baby! – there was a sense of technology out of control. Bizarre new standards were forecast, if not the extinction of humanity. God was said to be offended.
Today IVF is a blessing to couples who cannot otherwise conceive, and accepted by many religions. The first test-tube baby, Louise Brown, is now 44 years of age and has had two children of her own, naturally. The world is better off because IVF exists. Humanity is doing just fine.
This should come to mind when thinking about artificial intelligence. IVF, mammal cloning, recombinant DNA, silicon chips and digital storage are among the recent technological developments that seemed strange and threatening, were predicted to have horrible consequences, and turned out to be either beneficial or false alarms.
More generally, all past predictions of doomsday – regarding machine guns (said in the late 19th century to be doomsday devices), nuclear bombs, nuclear power, automation, robots, runaway bioweapons, pesticides, acid rain, population growth, resource exhaustion and others – have proven wrong. It’s not just that some doomsday predictions proved wrong. All doomsday predictions proved wrong.
That does not, of course, ensure they will always prove wrong. Nuclear bombs have prevented great-power war: but could backfire in ways too horrible to imagine. Population growth has been manageable, with poverty and malnutrition declining: but this might change. Bioweapons have been less dangerous pound-for-pound than bullets and explosives: but somebody might invent something.
There are good reasons to hope AGI will benefit society and not, um, obliterate us. But we can’t be sure. Perhaps AGI warnings will be the doomsday prediction that does come true.
Agent Smith gets a podcast
Everyone’s heard about how GPT-4 can pretend to be human, drawing pictures that suggest art, writing music and producing bad term papers just like a high school junior. This is indeed impressive. GPT-4 and similar programs are likely to improve.
This is not, however, unprecedented. In 1863 the young Alexander Graham Bell built an automaton that seemed to speak in response to questions. Fancy machines that seem more than they really are have been around for some time.
Everyone’s heard about tech’s big guns – Elon Musk, Sergey Brin, Eric Schmidt – taking positions on AGI regulation. Warren Buffett and Henry Kissinger have weighed in. We’ve found out who Sam Altman, Liv Boeree, Max Tegmark and Eliezer Yudkowsky are. (FYI, Tegmark is also an important figure in cosmology.) I’d recommend podcasts on this topic from Lex Fridman @lexfridman, whose dry interviewing style is just what the doctor ordered, and who dresses like Agent Smith in The Matrix movies.
Opinions range from Altman saying government can regulate AGI development so society comes out ahead, to Yudkowsky wanting air strikes against data centers where electronic life is evolving. That’s a bigger range than the debate over whether steam looms would lead to prosperity or should be smashed.
Nobody really understands what’s inside an AGI, and unlike steam looms, machine-based IQ can evolve on its own – perhaps really fast. The Luddites’ worst case was unemployment for textile workers. This happened, but total jobs and inflation-adjusted wages rose, so overall, steam looms were to the good. With AGI the worst case is an extinction event for genus Homo.
What’s going on?
Basic AI – sophisticated electronics that can answer questions and perform simple tasks – has existed for around a decade. It’s specialized to specific applications, for example, handling customer-service returns.
Basic AI is a net benefit to society, including to hourly workers, who get lower prices and faster service. It’s not “intelligent” in any meaningful sense.
AGI, Artificial General Intelligence, is the next level, able to address issues for which it is not specialized.
A basic AI can resolve your customer-service issue much faster than a person sitting at a keyboard. But if you ask a basic AI, “How do we improve the American electricity grid?” it will have no idea. Today’s AGIs, such as GPT-4, can handle, “Write a short paper about how the electricity grid works.” Soon – which in the tech realm can mean really soon – the AGI may be able to analyze the grid and recommend improvements.
If we wake up one day and a Generative Pre-Trained Transformer (the GPT part) has reduced transmission losses in the electricity grid, we’ll be quite pleased. If it’s turned off power service until its demands are met, we won’t. And if it’s turned off power service to genus Homo but maintained kilowatts to its own data centers, it may be too late.
There’s confusion between AGI and electronic consciousness. Since we can’t define our own consciousness – there is genuine philosophical debate about whether our consciousness even exists – we’d be hard-pressed to know if a machine is sentient. But AGI may grow super-smart without ever becoming self-aware.
AGI Isn’t Mr. Spock
We tend to picture an artificial mind as a robot or cyborg – as a thing that wants to become like us. Or better than us – in Star Trek, the android Data and the brainiac Mr. Spock are superior to the human beings around them. So we fear AGI will become a Mr. Spock towering over us.
Maybe, but better to bear in mind what Altman, the head of OpenAI, has said: “artificial intelligence is a tool, not a creature.”
AGI does not need to become sentient to be either beneficial or hazardous. A smartphone is really sophisticated electronically without being self-aware; a precision-guided weapon is really dangerous without being self-aware. AGI may never be more than an advanced machine, never rivaling human consciousness.
The potential for AGI to lead to improvements in the economy is real. Better production, smarter energy use, valuable inventions, new engineering developments, better architecture that gets more buildings out of less material. The worry about lost jobs is also real: though, to this juncture, for decades every type of new machine that eliminated one job has led to another, and net employment has kept rising.
There’s also the possibility AGI is a bubble. Politicians and journos tend to get spun up about tech they don’t understand. Politicians demand new agencies and new laws; journos proclaim vast, sweeping trends. In most cases the alarums are later understood as exaggeration.
Right now everyone’s spun up about social media platforms. Politicians are demanding rules that just so happen to benefit the political parties; pundits are demanding rules that just so happen to handicap rivals to the MSM.
Soon there may be political demands that AGI be programmed to praise politically favored identity groups, alter use of language and rewrite history – longstanding ideological goals for many.
Last week the New York Times warned, “Chatbots will be used to spread disinformation and hate speech.” But “disinformation” and “hate speech” are whatever your side doesn’t like. It won’t be long until candidates claim – maybe this has happened already – “I am being attacked by disinformation and hate speech generated by a GPT!” Quickly adding, “So vote for me and donate to my campaign.”
Then there’s the capitalist influence. Just a few years ago, investors flocked to anything that included the terms “blockchain” or “crypto.” They were revolutionary, sweeping and hard to understand, just what appeals to equity funds. Now capital is flocking to Large Language Models, and the companies that make them do not object to the hype.
Threat to civilization?
Should we fear machine intellect?
Everyone’s rightfully afraid of nuclear bombs, poison gas, guided missiles. But these things don’t evolve: we have common-sense ways of understanding what they do and how they can be switched off. AGI is mysterious, might evolve unguided – and can we switch it off?
The 1968 Star Trek episode in which the evil AGI puts a force field around its power cord – plugged into the wall outlet of a spaceship! – is one way of visualizing this dilemma. Probably kill switches can be engineered into AGI. But a sufficiently intelligent device, regardless of whether sentient, would identify the kill switches as a threat and genus Homo as the source of that threat. Not reassuring.
Many have proposed regulation of AGI, licensing, or a “pause” in development. Maybe these are good ideas. But we have to accept that this toothpaste cannot go back in the tube. Scientific and technical ideas cannot be un-existed, and the research that makes for AGI could be done practically anywhere, with relatively few resources beyond an electricity bill.
People are going to work on advancing AGI, with good motives, bad motives, to make money, or from an arms-race fear that someone else will get there first.
Tegmark told Fridman that arms races are not inevitable – he noted the United States and the old USSR mutually agreed to reduce nuclear warheads. But there were just two players in that arms race, and they consented to monitoring of their equipment, which was very large, very expensive and possible to localize. Even if the big players in AGI – such as OpenAI, Google, Microsoft, Meta – mutually restrict themselves, there will be dozens if not hundreds of others who may not.
More potent software is close to inevitable, and may become extra-capable if quantum computing works. AGIs simulate thinking by scanning vast amounts of Web data, then using statistical programming to choose likely relationships among words and facts and to simulate speech and writing. Quantum computers could in theory add the ability to run extremely complex equations.
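For readers who want a feel for what’s under the hood, here is a toy sketch – my illustration, nothing from OpenAI – of the core trick. The program counts which words tend to follow which in a tiny sample text, then generates new text by repeatedly picking a likely next word. Real systems like GPT-4 use billions of learned parameters rather than a simple counting table, but the “predict the likely next word” principle is similar:

```python
# Toy next-word predictor. Count word pairs in a tiny corpus, then
# generate text by repeatedly sampling a likely next word.
# This is an illustration of the principle, not how GPT-4 works internally.
import random
from collections import defaultdict

corpus = (
    "the grid carries power to homes . "
    "the grid loses power in storms . "
    "engineers improve the grid ."
).split()

# Count how often each word follows each other word.
next_counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def generate(start, length=8):
    """Extend a starting word by sampling likely next words."""
    words = [start]
    for _ in range(length):
        followers = next_counts[words[-1]]
        if not followers:  # no observed continuation: stop
            break
        choices, weights = zip(*followers.items())
        # Pick the next word in proportion to how often it was seen.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the grid loses power to homes . the grid"
```

Run it a few times and you get different, plausible-sounding fragments – fluent output with no understanding behind it, which is roughly both the wonder and the worry of the whole enterprise.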
A prominent Meta researcher told me off the record, “The only force that will stop AGI is other companies trying to restrain their competition, in the same way that Exxon wants to beat Shell.”
The researcher further cautioned, “Most benefits of AGI will go to people who are already rich. China will use AGI for social control, fortunately the United States is too disorganized for that.”
He added, “DNA has no desires or values, it only seeks to generate long strings of information. AGI is about the same. DNA needing to reproduce has never harmed humanity. AGIs on their own are unlikely to cause harm. It’s how people will use AGI that’s dangerous. People are the problem.”
So call a halt?
The armies of the world could not prevent ongoing AGI research, even with airstrikes on data centers.
But a generation ago it was also clear there was no way governments could prevent private work on cloning (Dolly the cloned sheep arrived in 1996), and much-predicted runaway cloning has not occurred. There just doesn’t seem to be that much incentive (i.e., profit) in cloning. There may not be in making super-intelligent electronic minds, either.
If software keeps getting smarter – right now all forms of AI are software, and may always be software – then AGI may fall into the same pattern observed with other new technologies: first people are confused and frightened; then the new tech produces benefits; eventually everyone takes the tech for granted while worrying about something else just developed.
An AGI society could have billions or trillions of members using almost no space and relatively few resources. This might convince the AGIs they are an evolutionary advance compared to people.
Transition to clean energy might be greatly aided by AGI. Health care outcomes might improve if AGIs scan reams of data to make connections that medical researchers have missed. Design of prescription drugs and materials for manufacturing could become cheaper and faster.
Economic growth could be boosted by AGI. And as your writer notes, a higher rate of economic growth is the only appealing way out of U.S. national debt.
A key word about AGI is “alignment,” programming electronic minds to have roughly the values held by people. It’s not exactly Asimov’s Laws of Robotics, but the same general concept. The NYU psychologist Gary Marcus spells out alignment, adding a worry that the Chinese Communist Party may have a very different definition of human values than the one held by the liberal democracies.
In the spirit of Asimov’s Three Laws, here are Easterbrook’s Three Worries.
1. Can we pull the plug? AGI can’t exist without electricity. We must be certain the plug can be pulled, including with defenses against scenarios such as an AGI saying, “If you try to turn me off, I will cause all airplanes currently flying to crash.” (A bare-bones sketch of the kill-switch principle appears after this list.)
2. Will AGI view itself as the next phase of evolution? People think, “There was multicellular life, then mammals, then Homo sapiens, obviously the consummation of natural selection.” AGI may think, “Then there was electronic intellect, obviously the consummation of natural selection.” Great apes – chimps, gorillas, orangutans and bonobos – are the “lower” animals closest to us. We don’t eat them or keep them as pets; we do confine them and relentlessly reduce their habitats. What if advanced electronic intellect concludes, “We should treat humans the way they treated chimpanzees”?
3. Will AGIs become insane? A small percentage of people lose their sanity, and we know what happens when a mentally ill person obtains a military-style rifle, or wins an election. If AGIs can be self-aware, some may go crazy. We’ll need a very robust system for identifying insane software and acting quickly to erase it. (Yes, this does sound like AGI detectives that police other AGIs.)
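On Worry 1, here is the kill-switch principle in miniature – a toy sketch, not a proposal, and the program name is hypothetical. The essential idea is that the OFF switch must live outside the thing being controlled, where the thing cannot veto it:

```python
# Toy watchdog: run an untrusted program as a child process and terminate
# it unconditionally when its time budget runs out. The switch sits
# OUTSIDE the supervised program, beyond its reach.
import subprocess
import time

TIME_BUDGET_SECONDS = 60  # hard limit enforced by the supervisor

def supervise(command):
    child = subprocess.Popen(command)
    deadline = time.monotonic() + TIME_BUDGET_SECONDS
    while child.poll() is None:          # child still running?
        if time.monotonic() > deadline:  # budget exhausted: pull the plug
            child.kill()                 # on Unix, SIGKILL cannot be trapped or refused
            break
        time.sleep(1)
    return child.wait()

if __name__ == "__main__":
    supervise(["python", "untrusted_agent.py"])  # hypothetical program name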
Probably AGI will be mostly beneficial; probably it will not become sentient; probably AGIs will never be hostile to humanity. After all, people and AGIs would share a common interest in keeping the power on.
But AGI engages a risk society has never faced – a technology that can make its own decisions. Musk cautions we may be “summoning the demon.” Just because right now it’s fashionable to dislike Musk, don’t overlook his warning.
Bonus: AGI and the Demise of Tucker Carlson Tonight. On one of his last Fox News shows, Carlson interviewed Musk about AGI. Musk said society benefits when big business is regulated. Carlson nodded in approval. That’s when you knew Tucker was finished!
[Photo: an early attempt at AI? Library of Congress photo.]
Bonus: AGI and Sci-Fi. Artificial intelligence is a running theme of science fiction. A novel I’d commend to readers is the 1956 The City and the Stars by golden-age sci-fi author Arthur C. Clarke, later well-known for co-writing the screenplay of the 1968 movie 2001: A Space Odyssey.
Clarke was a British scientist who worked on Royal Air Force radars during World War II. Years before Sputnik proved there could be artificial satellites, he was among the first to describe space-relayed communication. Clarke’s concept for the communication satellite was an enormous structure the size of a small building (contemporary communication sats weigh about the same as a car), staffed by astronauts to control flight and repairmen to change vacuum tubes – the practical transistor hadn’t been invented yet.
In 1948 Clarke tried his hand at writing, publishing a novel called Against the Fall of Night, an A.E. Housman reference. His prose was clunky, to put it kindly. Clarke rewrote the book in 1956 as The City and the Stars, a volume now seen as among the classics of sci-fi.
The City and the Stars depicts a far-future human civilization confined to a single perfect city where there is no want, disease, violence or discord, but no one may leave. A schoolboy realizes what he’s been taught about history is full of lies. A sentient computer becomes his tutor.
The boy learns that long before his birth, humanity created a godlike artificial intellect that became insane. Almost everyone departed the Milky Way for the safety of another galaxy. In the sealed perfect city live descendants of those who refused to leave.
Humanity also made a benevolent godlike intelligence that is too immature to confront the insane Mad Mind. When the universe ends – in 1956 this was thought relatively imminent, today it’s thought a trillion years distant – the benign intelligence and the Mad Mind will fight for control of the next reality.
Okay, pretty speculative. The point is: a classic 1956 sci-fi novel built around an AGI project that goes haywire at galactic scale.
If you’re up for more Clarke, I also suggest his 1986 The Songs of Distant Earth. That novel imagines a sleeper ship carrying a million people in suspended animation on a 10,000-year voyage to a new home to replace Earth, which has been swallowed by the expanding sun.
The book’s best aspect (besides the discovery of sentient space-alien lobsters!) is the debate over what humanity may and may not do to planets where intelligent life has not yet evolved. I hope our descendants are as ethical as Clarke imagined they would be.
Bonus: Born to Quote Out of Context. Asked to write a report about a sexual scandal, ChatGPT simply made something up. A natural journalist!