AI Progress

Letter from Billionaires: Stop Artificial Intelligence!

Machine translation

Tesla and SpaceX chief Elon Musk, Apple co-founder Steve Wozniak, and more than a thousand experts have signed an open letter calling for a halt to the development of «giant artificial intelligence». The text of the letter was published by the Future of Life Institute.

The authors of the letter urged AI developers to immediately suspend, for at least six months, the training of systems more powerful than GPT-4 from the company OpenAI. The experts and industry leaders who signed the letter believe that training should be halted until common security protocols are in place. If the pause cannot be enacted quickly, they said, authorities «should step in and impose a moratorium».

T-Invariant asked both ChatGPT-4 itself and human experts to comment on the open letter.

Elena Bunina, professor of mathematics at Bar-Ilan University

On principle, I do not believe in initiatives to freeze or halt any kind of progress. Such decisions mean that only the negative, unregulated side of the field keeps developing, clandestinely. And that development will certainly not be applied for the benefit of humanity, but to its detriment. What is striking is that artificial intelligence has developed along a natural path. There were so many predictions about what would happen, and now it has happened! There were so many predictions that artificial intelligence would grow exponentially, and it did. The leap has happened, and people are still not ready for it. So the right initiative would seem to be to look for genuinely useful applications of artificial intelligence while agreeing, at the same time, on the very AI ethics the whole world has been discussing for so long. If we stop training AI systems now, everyone will also stop agreeing on ethical principles.

Anton Kuznetsov, Center for Consciousness Research, Faculty of Philosophy, Lomonosov Moscow State University

The situation around the famous letter combines comedy and drama, owing to its absurdist content. Something astonishing in form seems to be happening: in alarmist fervor, the «concerned public» falls down before Leviathan. This is where the story takes not so much a turn as a somersault. And once again we are convinced that people learn from history that they do not learn from history. And most importantly, who are these people? Who are these lovers of «antiquity» and «traditional values», and who leads this procession of technological «Cossacks»? Musk and Wozniak. It is proposed to regulate what cannot be regulated and, worse, to regulate such a «delicate» field by the most barbaric methods. It is clear as day that research will not stop, neither for six months nor for five minutes, and there is no power that can make it stop here and now. So even if some of the signatories genuinely believed what they were signing, I do not accept the idea that they are the majority. The content of the letter is clearly beside the point; I would even say it is uninteresting. What matters in this story is the very fact of the letter, or rather, the act. The act is meant to convince the general audience (and through it, investors) that something revolutionary, something very significant, is happening in the field of AI right now, something that must not be missed. It is meant to convince us that nothing is more important right now than AI development, and that the people and companies involved in this field are steering the ship that is taking humanity into the future. This letter is not the Leviathan worship it might be taken for. To continue the nautical theme, it is a boarding: a power grab and an attention grab. It is also a good illustration for the dictionary entry «post-truth».

Vladimir Gubailovsky, author of «Artificial Intelligence and the Human Brain»

The letter begins with a reference to a well-known document, the Asilomar AI Principles (Asilomar 2017). That document was developed in February 2017 by the Future of Life Institute, the same organization that launched this latest letter. Asilomar 2017 is basically blah-blah-blah in favor of all things good. Its first principle: the goal of AI research should be to create useful intelligence, and useful not to anyone in particular, but to all of humanity. That the usefulness of a global project is very difficult (impossible?) to judge at its start can be seen in the development of the nuclear industry and of the Internet. When the nuclear project began, few people were thinking about nuclear power. When the Internet began, everyone talked about online freedom, and no one thought about the digital prison. But Asilomar 2017 was signed by just about everyone, including Ilya Sutskever, co-founder of OpenAI. Few people knew about OpenAI's work then; today it is the developer of ChatGPT. Stephen Hawking and Elon Musk were among the signatories.

And now Musk is the first to sign the new letter. These are no longer general words, as in Asilomar 2017, but a specific call to suspend work on LLMs, large language models: ChatGPT and a bunch of other systems that Google, Baidu, and many other IT giants are building as well. In 2017, AI development was a very promising but still marginal area. Today, many billions are already invested in LLMs. Things have changed. Sutskever, of course, did not sign the new letter; there are no fools willing to chop off their own heads.

Why Musk needs this new letter is more or less understandable. He is a great entrepreneur, but IT is not his strong suit; you can see that in the history of the Twitter acquisition. The sharp turn toward LLMs is changing the investment landscape, and Tesla, SpaceX, and other engineering ventures are drifting a bit to the side. It is a shame for Musk: he called mankind to Mars, but people still want to play games. But yes, humans are interested in humans and generally not interested in anything else. LLMs today are a reflection of man, and it is hard to look away from this crooked mirror.

The signatories of the new letter are concerned about the future of humanity. Is humanity really threatened by something? Bill Gates writes of ChatGPT with delight. He compares its emergence to the appearance of the graphical interface in the 1980s: if the GUI changed the relationship between human and computer, so will ChatGPT. Nothing but good. Gates did not sign the letter: Microsoft is the main investor in OpenAI, and Gates has no desire to chop off his own head either.

Of course, the relationship between computers and humans will change; it has already changed. So far, however, it has been felt mostly by programmers: when it comes to writing code, ChatGPT already works, and works quite well. In the short term the benefits are obvious and the harms are not; the harm is basically a vague premonition. Can the development of large language models and other areas of AI be stopped? It cannot, no matter how many letters are written or who signs them. There is going to be a lot of interesting stuff. And this is where I would advise getting a shovel, not popcorn, and starting to tidy up your own little garden bed. The AI learns from all of us. Let it learn something good.

Andrey Sebrant, marketing director for Yandex services and professor at the National Research University Higher School of Economics (a fragment of the author's Telegram channel is quoted with his kind permission. Ed. T-invariant)

To analyze the text of such letters without analyzing the list of signatories and the figure of the initiator is a dead-end job; it means simply swallowing the bait thrown to us. One must not fall for such manipulation in the age of information wars :)
And people are only just beginning to dig into the initiators and signatories; the first results are amusing, to say the least.
The Future of Life Institute, which initiated the letter and organized the collection of signatures, is a peculiar organization in the Valley, with the Musk Foundation (!) among its main donors and, previously, the notorious Sam Bankman-Fried, who now sits without internet access.
The activities of this institute resemble the cultivation of a cult.
At the same time, that same Musk is the main signatory to the letter.
And then Musk's long-standing conflict with OpenAI crawls onto the scene. At one point he wanted more control there, but he was not given it, and he stopped funding the nonprofit. OpenAI turned to Microsoft for money, and that is where the first moves toward commercialization, and Musk's big grudges, began.
And Musk, who has repeatedly said that his humanoid robot will become a bigger business than self-driving cars, was left without control over the key technology for such a robot: LLMs are the foundation of its software.
In short: behind this story, it seems, stand first of all not the interests of humanity but the interests of one particular person, a master of trolling, hype, and manipulating the attention of social media readers and the general public :) And then there are the many business interests of the companies and startups put at risk by ChatGPT.

ChatGPT-4, a large language model by OpenAI.

T-i: What do you think about the open letter «Pause Giant AI Experiments: An Open Letter», which calls on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4?

ChatGPT-4: The open letter you mentioned was written by a group of AI researchers and experts who are concerned about the potential risks of developing more powerful AI systems. They argue that the recent advancements in AI, especially in the area of large-scale language models, have brought new risks and challenges that require immediate attention.

The authors of the letter propose a six-month pause on the development of AI systems that are more powerful than GPT-4, which is a hypothetical model that doesn't currently exist. The purpose of this pause is to allow researchers and policymakers to reflect on the ethical, social, and environmental implications of developing such advanced AI systems.

The letter raises important questions about the responsibility of AI researchers and the need to prioritize the safety and well-being of society over technological advancement. However, not all experts in the AI community agree with this proposal. Some argue that a pause on AI research could slow down progress and harm innovation.

In summary, the open letter «Pause Giant AI Experiments» has sparked a debate in the AI community about the potential risks and benefits of developing more powerful AI systems. It highlights the need for responsible AI development that considers the potential impact of these technologies on society.

  4.04.2023