
Fear and trembling: Why do AI leaders frighten humanity?

Machine translation

Leaders of AI development compare AI to pandemics and nuclear war. Are they genuinely mistaken, or do they stand to gain from it?

On May 30, the Center for AI Safety published an open letter. It consists of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The letter has already been signed by almost all the leaders of the AI field (with some important exceptions: Yann LeCun, one of the fathers of modern AI, has not signed it). Is AI really that dangerous, and can it really be compared to threats such as pandemics and nuclear war?

A brave new world

It all started on March 14, when OpenAI released ChatGPT 4. As the version number suggests, there had already been a 2, a 3, and a 3.5 (that last version was released in November 2022), but somehow none of them caused such panic or such hype. And right away, before people had time to properly enjoy ChatGPT 4 (only a week had passed), on March 22 Elon Musk and colleagues published an open letter on the website of the Future of Life Institute, in which respectable people called for a pause in the development of large language models (LLMs) such as GPT-4.

From there, things escalated. On May 11, the European Union approved a draft of the AI Act, a law regulating and restricting the use of AI. The G7 summit also discussed AI. AI leaders have become as popular as rock stars. Sam Altman (CEO of OpenAI) has become a recognizable figure. He and other AI developers and representatives of major IT companies were called to the White House for a meeting with the vice president. Then Altman testified in the Senate and said he was going on a tour of cities and countries to explain what AI is and how dangerous it is.

For the last month, Altman has been giving non-stop interviews, saying everywhere that AI should be regulated, preferably at the international level, and that something like the IAEA should be set up, only for AI rather than nuclear energy. (He was among the first to sign the letter I started with.) On May 22, Altman and two other leading OpenAI employees and co-founders published a column on the company's website in which they write that within 10 years AI will surpass humans in most areas of expertise. It will be not just AI, and not even AGI (the usual term for AI comparable to human intelligence), but superintelligence.

On June 6, Sam Altman made it to Abu Dhabi, where he attended another conference on AI control. The conference was opened by Andrew Jackson, who is associated with Group 42, a subsidiary of DarkMatter. DarkMatter is, in full accordance with its name, a rather “dark” company, both in its goals and in its methods. Essentially all of its orders come from the not-so-democratic government of the UAE. The CIA is very interested in DarkMatter's work: the company is suspected of electronic surveillance and hacking, and it hires former CIA and NSA (National Security Agency) employees for good money, and even, unusually for an Arab country, an Israeli contingent. The AP quotes Andrew Jackson as saying: “We are a political force, and we will play a central role in regulating AI.” Who is “we”? DarkMatter? The UAE government? Well, why not? If an international regulatory body, something like the IAEA, is being created, then everyone will have to cooperate and share information, including with DarkMatter. That's how it is. Except that Altman, it seems, was looking for other allies.

Several questions arise.

Why are AI developers so eager to fall into the arms of the state?

The longer the hype lasts, the better for AI developers. No one, least of all the shareholders, asks the companies and technologies at the peak of the hype what exactly they are doing and when it will pay off. And as of today, the payoff is not obvious. By various estimates, a single ChatGPT request costs something like 30-40 cents, which is 500 to 1,000 times the cost of a standard Google query. That costs its sugar daddy, that is, Microsoft (the main investor in OpenAI), something like a million dollars a day. The more requests, the greater the direct loss. The more requests, the more data-center capacity is needed, and that means very expensive NVIDIA chips. But Microsoft wouldn't be Microsoft if it didn't count the money. There are different ways to monetize the hype, from developing your own chips to actively earning money on cloud services.
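A back-of-the-envelope sketch of that arithmetic (the daily request volume and the Google per-query figure below are illustrative assumptions, not reported numbers):

```python
# Rough, illustrative arithmetic for the serving-cost claim above.
# All inputs are assumptions chosen for the sketch, not reported figures.

cost_per_chatgpt_request = 0.35   # dollars; midpoint of the 30-40 cent estimates cited in the text
cost_per_google_query = 0.0005    # dollars; assumed, placing it in the 500-1,000x cheaper range
requests_per_day = 3_000_000      # assumed daily request volume

daily_cost = cost_per_chatgpt_request * requests_per_day
ratio = cost_per_chatgpt_request / cost_per_google_query

print(f"Estimated daily serving cost: ~${daily_cost:,.0f}")   # ~ $1,050,000 per day
print(f"ChatGPT request vs. Google query: ~{ratio:.0f}x")     # ~ 700x
```

With assumptions in that range, the "million dollars a day" figure falls out almost automatically.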

Sam Altman says that OpenAI is the most “investable” startup in history. It needs 100 billion (that's right, billions, not millions) to get going. Where do you find that kind of money? The government is a very good donor. Let it invest heavily in AI, but not just any AI, only the “safe” kind. We all love safety and are willing to pay for it. So far, the U.S. government has stepped into the industry with a modest $140 million. But that is only for now.

Sam Altman says: the companies that don't use AI will lose. So companies, afraid of being too late, start using AI without really understanding what they are doing or why they need it. As a result, prices for cloud services are going up, for example for Microsoft Azure. Once the hype subsides, cloud prices will drop. ChatGPT requests will become routine, but they won't become 1,000 times cheaper. Azure services will get cheaper. That is a crisis. Probably not as steep as the dot-com crash of 2000, but at least as steep as the metaverse crisis. Do we even remember what that was?

The metaverses are the illegitimate child of the pandemic. Lockdowns made people's lives barely livable. And people need people, and Zoom is not enough. Then the pandemic ended, people came out of their rooms, and the metaverses were over. At least for now.

For a technology of the caliber its creators describe to take off and not come crashing back down, it must meet some basic human need. Cell phones brought people closer together. Social networks brought them closer still; as a result, the telephonization of Africa happened literally in no time. Electricity made it possible to transmit energy to wherever it was needed. James Clerk Maxwell alone paid off the investment in all of basic science for a thousand years ahead. He changed our lives.

What basic human need does artificial intelligence meet? The developers answer: AI will change everything. It will abolish poverty, solve the problems of climate, energy, and health care, and everyone will be happy (this interview, for example, paints it all very vividly). That is, to put it mildly, a bit vague. It strongly resembles either the philosopher's stone or the elixir of immortality, and it is hard to believe for the simplest of reasons: humanity has never encountered anything of the kind in its entire history. So far, on the whole, the promised goodies do not look very convincing.

I know one basic human need that AI is already helping to meet, but it is unlikely to interest the masses much: the need for cognition. AI, and above all multilayer neural networks, has given us another way of processing data, one much closer to human thinking than traditional programming. But that is not the need for communication or the need for warmth. People need to live first; only then do some of them feel the urge to think. At leisure. Aristotle says, in essence, that science is the child of leisure. And rightly so: science requires freedom of the mind.

If it is not clear what AI will give, then let's tell everyone what it will take away (unless, of course, it is used correctly, which means handing the 100 billion dollars to OpenAI, who alone knows exactly how to do it right). Then it is simple: if you don't agree, you get something halfway between a pandemic and a nuclear war. And that, of course, is where the regulators have to be called in. Not only because they will give money, but also because they will share the responsibility if something goes wrong. And the developers will sigh and moan: how could you not keep watch, we did warn you. After all, their own hands are shaking a little. For real.

What dangers of AI can be seen right now?

Millions of unemployed? The only mass AI application today is ChatGPT, and the only industry where ChatGPT is used in earnest is programming. Programmers have been working with it since ChatGPT 3, that is, since 2022, which by today's standards is a long time ago. (I don't mean those who make the tool, who have been working on AI for about 70 years now, but those who use it.) Somehow I have not heard about mass layoffs in IT. On the contrary, there are plenty of vacancies: even the humblest beginner with a basic knowledge of Python gets snapped up instantly. So no, cuts are not expected, although the automation of workplaces that Goldman Sachs experts write about is obvious, and programmers' efficiency is growing before our eyes. But here is the reaction of bioinformatician Xijin Ge, who has taken a close look at ChatGPT's capabilities specifically in writing code. Ge puts it rather dismissively: treat this AI as an intern, hard-working, eager to please, but inexperienced and often making mistakes.
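A minimal sketch of what that "intern" workflow looks like in practice (the function, its bug, and the test are invented for illustration): the model drafts the code, and a human reviews and tests it the way they would review an intern's patch.

```python
# Hypothetical example of reviewing "intern"-quality AI output (pytest style).
# Suppose the model was asked for a function returning the last n lines of a file
# and produced the draft below, which mishandles the n == 0 case.

def tail(path: str, n: int) -> list[str]:
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    return lines[-n:]  # bug: for n == 0 this slice returns the whole file

# The reviewer's tests play the role of the supervising engineer.
def test_tail(tmp_path):
    p = tmp_path / "log.txt"
    p.write_text("a\nb\nc\n")
    assert tail(str(p), 2) == ["b", "c"]
    assert tail(str(p), 0) == []  # fails against the draft above and forces a fix
```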

What has AI given programming? Hope. Hope that a 70-year-old dream will come true. Deep learning algorithms may work well or poorly, but either way they keep working. A classical program, by contrast, either works or it doesn't; it is too fragile. And that is the kind of technology we built modern civilization on. As Gerald Weinberg said, if builders built houses the way programmers write programs, the first woodpecker to come along would destroy civilization. Weinberg just neglected to add that it is not the programmers' fault. They did the best they could; it simply is not enough. Something more reliable is needed, and that is exactly what AI promises. Why should humanity disappear as a result?

Deepfakes? Now this is really serious. Not even because everything will drown in fakes, as Geoffrey Hinton, one of the founding fathers of modern AI, says, but because already today we are all uniformly afraid of them. And fear is a very bad adviser.

Tell me honestly: do you really believe everything they write on the... I mean, on the Internet? I don't. Not what is written, not what is drawn, not what is on video. We don't believe what is said, we believe whoever is saying it. For example, our neighbor Uncle Kolya (or our friend feed). Or, as Mrs. Hudson used to say, “That's what the Times says.” Or Reuters, or AFP, or Nature, or Science, or... Are they ever wrong? Yes, they are. Do they ever lie? They can. But for them the cost of being wrong is high. Where all these brands risk their reputations, Uncle Kolya risks getting punched in the cheekbone, which is also, on the whole, a painful price for a fake. In other words, information that has been thoroughly shaken out and double-checked can be taken seriously. But if something was written by someone on Facebook with an empty profile, then even if he speaks the absolute truth, we will not believe him and will not check his information, because life is short and there is no strength to refute every chatterbox.

Recently someone posted on Twitter a “picture of the Pentagon exploding.” The picture appears to have been generated by an AI. A hundred thousand fools reposted it, although a quick glance at this “photo” is enough to see that it is a fake, and a crude one at that. But a hundred thousand fools are a force. A Pentagon spokesman came out and said nothing of the kind had happened, and the Arlington fire inspector came out and said nothing of the kind had happened. Everybody somehow calmed down and forgot about it. The interesting thing is that amid all this noise the stock index dipped slightly. Not by much, 0.29%, but it was down. Not for long: then it rebounded. What happened? Traders' machines follow the news. They caught that noise. If they had taken the news seriously, the indices would have plummeted. But the machines quickly figured things out and brought the indexes back on track. They turned out to be ready for such a crude fake.

Before we step onto the shaky ground of “what happens if...,” where serious people, the AI developers themselves, drag us literally by the hair, two words about reality. One bomb falling on a peaceful city, one bullet stopping a living heart, has caused incomparably more damage than all the AI we have today.

And I would not throw around comparisons of AI to nuclear war and pandemics. When Andrew Jackson, associated with DarkMatter, spoke at the Abu Dhabi conference, it somehow became clear why no IAEA analogue for AI control would work. The IAEA deals with large and heavy (in the most literal sense) facilities: uranium mines, plants producing weapons-grade plutonium, rivers that cool nuclear power plants... All of that is fairly easy to see, even from space. But how do you see an AI system? Especially at the early stage, when it does not yet need much computing power. And it turns out that the “dark” companies are already prepared to take part in the “control,” but who will control them? So far even the CIA has not been very convincing at that. As for pandemics, the WHO estimates that at least 20 million people have died from this one, not counting those who fell ill or live with long COVID. That is so much pain and grief that AI does not come anywhere close. And what the developers themselves say about AI consists of predictions, more or less likely.

So what would happen if the “Pentagon explosion” were not a crude fake but a subtle, deep system of disinformation, calculated so that the trading machines would not recognize it and the indices would fall in earnest? Such a fake would be very expensive. It cannot be knocked together in five minutes. It would have to fool at least Reuters and AFP. Exactly “and”: the information must come from several independent sources, each with sources of its own (let us assume they do not rewrite each other's news, or at least they shouldn't). So those sources would have to be persuaded. And they actually know how to check things the old-fashioned way: pick up the phone and call the Arlington fire department, and, just to be safe, over an analog channel. (Oddly enough, wired analog phones still exist, and analog encryption systems are taken seriously by serious people. Quantum cryptography, for example, is analog.)

If a piece of information is easy to produce (molded on the fly, on one's knee), then it is cheap and easy to replicate. But it proves nothing, no matter how many naive users repost it. If AI floods everything with such cheap deepfakes, they will simply stop being perceived as meaningful information and will look like trivial email spam. Does spam bother you much today? It doesn't bother me. Twenty years ago it was a serious problem. To convince anyone of anything at all, deepfakes will have to be invested in seriously, and there is no guarantee that they will work. And then everything will be as it always is: a high risk of losing the investment. Someone will take that risk. But the price will only go up. As it does today. As it always has.

But the problem of deepfakes is, of course, very complicated, and it has many subtle points. Already today we can see that, as with spam or computer viruses, both the sword (the AI fake generator) and the shield (the AI detector) will keep developing. And it is this problem that is the main one today. The risk here is quite concrete, and so are the solutions, but that is a separate conversation.

Why are the masses so sluggish in their response to this horror-horror-horror?

And why, in fact, are the people silent? We see no demonstrations against AI. Well, except that the Hollywood screenwriters have spoken up: they are very protective of their intellectual property and are afraid that AI will take away their piece of bread (they call it the “plagiarism” problem). Maybe they are afraid for good reason.

Why did people smash 5G towers, and why did the videos about Bill Gates chipping everybody get millions of views? There was, in fact, a whole anti-vaxxer movement: not a hundred people, not a thousand, but millions around the world. So where are the concerned citizens marching against AI? Isn't AI, according to OpenAI itself, far more dangerous than vaccines? Why is it only the developers themselves, government officials and congressmen, and Hollywood screenwriters who favor controlling AI today?

With chipping and vaccines, everything was clear. Not because anyone understood how any of it worked or how 5G differs from 4G, but because it was genuinely frightening. The whole brew bubbled on the black fire of the pandemic. The virus is an invisible killer: how do you fight it? But if you tear down a tower, you feel better. At least you did something useful. And whether it was 5G or 4G, who's counting?

With AI, everything is unclear. Those who warn of its danger today have not offered a single convincing scenario of how exactly it will destroy humanity. Skynet, perhaps, or Legion? It's all right, folks, everything will be fine: Schwarzenegger will protect us.

Why, from the point of view of the masses, is Schwarzenegger a sufficient defense against AI? Because all the global dangers that developers and officials alike talk about today ultimately boil down to a thought experiment: what happens if... That is, to a kind of science fiction. Like the gray goo, which people were also afraid of not so long ago. Somehow that fear dissipated.

Why did this particular conversation about the dangers of AI arise today?

So what happened? Why now, and with such force? Why did no one say anything of the kind until a couple of months ago? Actually, they did, and often. It's just that no one listened to that muttering. But on March 14, ChatGPT 4 came out. The progress between ChatGPT 3.5 and ChatGPT 4.0 was so great that many people gasped, and about 100 million users noticed the jump. A hypothetical scenario appeared: if it keeps progressing like this, we will lose control over it, and when we do, it will be too late. Too late for what? Control over what, exactly? It doesn't matter. Then came the letter from Musk and comrades. Then the other letters I mentioned. Then came the bureaucrats.

Geoffrey Hinton said in an interview with the NYT: “There's a possibility that what's going on in these systems far exceeds the complexity of processes in the human brain. Look at what was happening five years ago and what's happening now. Imagine the speed of change in the future. It's frightening.”

There is a second reason for fear here, besides the speed of change: not only do we not understand how it works, but chances are we never will, because a simpler system (the biological brain) simply cannot make sense of a more complex one (such as a superintelligence).

New and old scripts of the end of the world by means of AI have begun to gain traction. How plausible are they? Perhaps that is worth discussing. So far, Bill Gates' view is the closest to mine. He wrote simply: once upon a time the graphical interface came along, and it completely changed the way computers and people interact. Now there are large language models, GPT for example, and they are going to change human-computer interaction too. That's great and cool. What end of the world? Or, conversely, what philosopher's stone? What are you talking about?

Could it be that Gates is missing something? That he underestimates the global nature of the change? No, he sees everything. It is just that he has been directly involved in so many computer revolutions that he sees not only the entrance to the new wave but also the exit. And I trust his vast experience, because I myself see no catastrophic scenarios, not ones meant to tease the public, but serious ones.

But the world is unpredictable, and we are constantly reminded of that. Meanwhile, GPT 4.5 is promised for September, and GPT 5.0 by January at the latest. A lot will become clearer then.

  10.06.2023
