AI Standpoint

Fear of the Stochastic Parrot

Machine translation

The open letter calling for a halt to the training of large AI models appeared publicly on the night of March 29, but similar suggestions have been made repeatedly by various people, especially in the months since ChatGPT's release in November 2022. The idea is not new; it has been voiced regularly, and Gary Marcus, a well-known critic of large language models (LLMs), has been repeating "Carthage must be destroyed" almost daily. He was, of course, among the first to sign the letter.

There are some big names among the signatories, though not many, as well as serious experts in their fields. But the call for a moratorium, like its rationale, is unlikely to convince those to whom the letter is addressed: the heads of AI labs.

What makes the idea dubious? Stopping work doesn't solve any of the problems. Yes, everyone agrees that LLMs are opaque and unreliable, that they are prone to hallucination, and that we have no clear understanding of how they reach their conclusions. Nor do we know what properties will emerge in models more powerful than GPT-4. But we cannot find that out from general considerations and experiments with smaller models, whether the pause lasts six months or longer. This is an important feature of LLMs: their behavior depends nontrivially on the size of the model, on the quality and quantity of the training data, and on the training methods.

Only working with AI models will let us understand at least something about them, see the risks, and test the possibilities. As the philosopher Artem Besedin aptly noted, "the authors of this letter suggest that we learn to swim without entering the water." Alas, it does not work that way.

And how will China, for example, respond to the moratorium? Judging by the news coming out of the country, the CCP is betting heavily on the development of artificial intelligence: China's official goal is to become the world leader in the field by 2030. For now it is playing catch-up, and a stall in its U.S. competitors' projects would be a great gift to Chinese researchers. They are unlikely to stop of their own accord.

Nor is OpenAI, the company that created GPT-4, likely to take a break. The next version is already in training, and GPT-4.5 and then GPT-5 have been announced for the fall. Given that Microsoft is investing $10 billion in these models, there will be no stopping. Yann LeCun, vice president of Meta, did not endorse the letter either. Perhaps its authors are simply too late: the genie is already out of the bottle.

Moreover, the threats of AI are outlined too vaguely to be feared seriously. Against the backdrop of the COVID pandemic and the tragedies now unfolding before our eyes, the risk of proliferating fakes and the coming changes in the labor market are not impressive enough to make us "drop everything." And there are no realistic scenarios for how LLMs would enslave humanity. While some researchers find "sparks of general intelligence" in GPT-4, others call language models "stochastic parrots" 🦜, denying them even rudimentary thought. Should we really be afraid of a parrot, even a talking one?

The question of LLMs' intelligence is far from settled, and this is precisely where experiments are needed. But everyone agrees that they are a powerful tool that can be very useful. They expand our capabilities, and giving them up has a price too. These capabilities are impressive, and in the appearance of such manifestos we see a typical case of "future shock," in the sense of the sociologist Alvin Toffler, who wrote about the psychological effects of rapid technological change back in the 1960s.

Models are improving too quickly, getting "smarter" before our eyes. The limit of that growth in intelligence is unclear, and this frightens people with rich imaginations. But moratoriums and bans will only give our psyches a respite; they will not help eliminate future risks. In biology, for example, some hypotheses can be tested on animals or cell cultures. In AI there is no comparable substitute for large models; their behavior is unique.

In a more general sense, we are creating a supercomplex object of a new type, one for which we have no ready-made theories. An understanding of its properties can only be gained by interacting with it. That is why the call to discuss the key problems of AI during a pause, without research, sounds utopian. None of this, of course, removes the need to take seriously the reliability, transparency, and safety of LLMs, along with their training and regulation, but a six-month break in experiments will hardly bring us closer to the answers.

Text: DENIS TULINOV, author of the Telegram channel "Vagus nerve."

The illustration for this text was generated by Midjourney.