AI Progress

Letter from Billionaires: Stop Artificial Intelligence!

Machine translation

Tesla and SpaceX chief Elon Musk, Apple co-founder Steve Wozniak, and more than a thousand experts have signed an open letter calling for a pause in «giant AI experiments». The text of the letter was published by the Future of Life Institute.

The authors of the letter urge AI developers to immediately suspend, for at least six months, the training of systems more powerful than OpenAI's GPT-4. The experts and industry leaders who signed it believe training should be halted until shared safety protocols are in place. If the pause cannot be enacted quickly, they write, governments «should step in and institute a moratorium».

T-Invariant asked ChatGPT-4 itself and human experts to comment on the open letter.

Elena Bunina, professor of mathematics at Bar-Ilan University

I do not believe, as a matter of principle, in initiatives to freeze or halt any kind of progress. Such decisions mean that only the negative, extralegal side of the field will keep developing, clandestinely, and it will certainly be applied not for the benefit of humanity but to its detriment. What is striking is that the development of artificial intelligence has followed a natural path. There were so many predictions about what would happen, and now it has happened! There were so many forecasts that artificial intelligence would grow exponentially, and it did. The leap has happened, and people are still not ready for it. So the right initiative on artificial intelligence, it seems to me, is to look for extremely useful applications for it and, at the same time, to agree on the ethics of artificial intelligence that the whole world has been discussing for so long. If we stop training AI systems now, everyone will stop agreeing on ethical principles, too.

Anton Kuznetsov, Center for Consciousness Research, Faculty of Philosophy, Lomonosov Moscow State University, Telegram 

The situation around the famous letter contains both comedy and drama, thanks to its absurdist content. Something astonishing in its very form seems to be happening: in alarmist fervor, the «concerned public» prostrates itself before Leviathan. Here the story takes not so much a turn as a somersault, and once again we are convinced that people learn from history that they do not learn from history. And most importantly, who are these people? Who are these lovers of «antiquity» and «traditional values», who leads this procession of technological «Cossacks»? Musk and Wozniak. It is proposed to regulate what cannot be regulated and, worse, to regulate such a «delicate» field by the most barbaric methods. It is clear as day that research will not stop, not for six months and not even for five minutes; there is no power that could stop it here and now. So even if some of the signatories seriously believed what they were signing, I do not accept that they are the majority. Clearly, the content of the letter is of little interest; I would even say it is uninteresting. What matters in this story is the fact of the letter itself, or rather, the act. The act is meant to convince the general audience (and through it, investors) that something revolutionary, something very significant, is happening in AI right now, something that must not be missed on any account. It is meant to convince us that nothing is more important at this moment than the field of AI, and that the people and companies working in it are steering the ship that carries humanity into the future. This letter is not the worship of Leviathan for which it might be taken. To continue the nautical theme: it is a boarding action, a seizure of power and of attention. It is also a good illustration for the dictionary entry on «post-truth».

Vladimir Gubailovsky, author of «Artificial Intelligence and the Human Brain»

The letter opens with a reference to a well-known document, the Asilomar AI Principles (Asilomar 2017). That document was drawn up in February 2017 by the Future of Life Institute, the same organization that launched this latest letter. Asilomar 2017 is, basically, blah-blah-blah in favor of all things good. Its first principle: the goal of AI research should be to create beneficial intelligence, beneficial not to anyone in particular but to humanity as a whole. How hard (impossible?) it is to judge usefulness at the start of a global project can be seen in the history of the nuclear industry and of the Internet. When the nuclear project began, few people were thinking about nuclear power. When the Internet began, everyone talked about online freedom, and no one thought about the digital prison. Yet Asilomar 2017 was signed by more or less everyone, including Ilya Sutskever, co-founder of OpenAI. Few people knew about OpenAI's work then; today it is the developer of ChatGPT. Stephen Hawking and Elon Musk were among the signatories as well.

And now Musk is the first to sign a new letter. These are no longer general words, as in Asilomar 2017, but a specific call to suspend work on LLMs. LLMs are large language models: ChatGPT, and a host of other systems that Google, Baidu, and many other IT giants are building as well. In 2017, this line of AI development was very promising but still marginal. Today, many billions have already been invested in LLMs. Things have changed. Sutskever, of course, did not sign the new letter. Nobody is fool enough to chop off his own head.

Why Musk needs this new letter is more or less clear. He is a great entrepreneur, but IT is not his strong suit; the history of the Twitter acquisition makes that plain. The sharp turn toward LLMs is changing the investment landscape, and Tesla, SpaceX, and other engineering ventures are being pushed somewhat to the side. It must be galling for Musk: he called mankind to Mars, but people still just want to play games. But yes, humans are interested in humans and largely uninterested in everything else. LLMs today are a reflection of man, and it is hard to look away from this crooked mirror.

The signatories of the new letter are concerned about the future of humanity. But is humanity really threatened by anything? Bill Gates writes about ChatGPT with delight, comparing its emergence to the appearance of the graphical user interface in the 1980s: if the GUI changed the relationship between human and computer, so will ChatGPT. Nothing but good. Gates did not sign the letter; Microsoft is the main investor in OpenAI, and Gates has no desire to chop off his own head either.

Of course, the relationship between computers and humans will change. It has already changed. So far, though, it has been felt mostly by programmers: the code ChatGPT writes already works, and works quite well. In the short term the benefits are obvious and the harms are not; the harm, for now, is a vague foreboding. Can the development of large language models and other areas of AI be stopped? It cannot, no matter how many letters are written and no matter who signs them. A lot of interesting things lie ahead. And here I would advise reaching for a shovel, not for popcorn, and starting to tidy up your own garden bed. The AI learns from all of us. Let it learn something good.
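(To make the programmers' experience Gubailovsky describes concrete, here is a minimal sketch of that workflow: asking the model to generate code through OpenAI's API. It assumes the openai Python package as it existed in spring 2023 and an API key in the OPENAI_API_KEY environment variable; the prompt and model choice are purely illustrative. Ed. T-invariant)

```python
# Minimal sketch: asking a GPT model to write code via the OpenAI API.
# Assumes the `openai` package (pre-1.0 interface, current in spring 2023)
# and an API key exported as OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative; any available chat model works
    messages=[
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": "Write a function that checks whether "
                                    "a string is a palindrome, with tests."},
    ],
)

# The generated code arrives as plain text; a human still reviews and runs it.
print(response["choices"][0]["message"]["content"])
```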

Andrey Sebrant, marketing director of Yandex services and professor at the National Research University Higher School of Economics (A fragment from the author's Telegram channel is quoted with his kind permission. Ed. T-invariant)

Analyzing the text of such letters without analyzing the list of signatories and the figure of the initiator is a dead-end job; it means simply swallowing the bait thrown to us. One must not fall for such manipulation in this age of information wars :)
People are only just beginning to dig into the initiators and signatories, and the first results are curious, to say the least.
The Future of Life Institute, which initiated the letter and organized the collection of signatures, is a peculiar Valley organization whose main donors include the Musk Foundation (!) and, earlier, the notorious Sam Bankman-Fried, who now sits without internet access.
The activities of this institute resemble the cultivation of a cult.
And at the same time, that very Musk is the letter's main signatory.
Then Musk's long-standing conflict with OpenAI crawls into view. At one point he wanted more control there, was refused, and stopped funding the nonprofit. OpenAI turned to Microsoft for money, and that is where the first steps toward commercialization, and Musk's big grudges, began.
And Musk, who has repeatedly said that his humanoid robot will become a bigger business than self-driving cars, has been left without control over the key technology for such a robot: LLMs are the foundation of its software.
In short: behind this story stand, it seems, not the interests of humanity but first of all the interests of one particular person, a master of trolling, hype, and manipulating the attention of social network audiences and the general public :) And then there are the business interests of all the companies and startups that ChatGPT puts at risk.

ChatGPT-4, a large language model by OpenAI

T-i: What do you think about the open letter «Pause Giant AI Experiments», which calls on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4?

ChatGPT-4: The open letter you mentioned was written by a group of AI researchers and experts who are concerned about the potential risks of developing more powerful AI systems. They argue that the recent advancements in AI, especially in the area of large-scale language models, have brought new risks and challenges that require immediate attention.

The authors of the letter propose a six-month pause on the development of AI systems that are more powerful than GPT-4, which is a hypothetical model that doesn’t currently exist. The purpose of this pause is to allow researchers and policymakers to reflect on the ethical, social, and environmental implications of developing such advanced AI systems.

The letter raises important questions about the responsibility of AI researchers and the need to prioritize the safety and well-being of society over technological advancement. However, not all experts in the AI community agree with this proposal. Some argue that a pause on AI research could slow down progress and harm innovation.

In summary, the open letter on the topic «Pause Giant AI Experiments» has sparked a debate in the AI community about the potential risks and benefits of developing more powerful AI systems. It highlights the need for responsible AI development that considers the potential impact of these technologies on society.

4.04.2023