
Artificial Intelligence as a Threat to Humanity

30.03.2023

https://www.bbc.com/russian/news-65117669

Leading figures in AI development, including Elon Musk and Apple co-founder Steve Wozniak, are calling for a pause in AI development until robust safety protocols are developed and implemented.

Their call is contained in an open letter published by the non-profit research organization Future of Life Institute and signed by more than a thousand leading entrepreneurs and experts.

The letter, published just two weeks after the American research laboratory OpenAI revealed details of GPT-4, its newest and most advanced chatbot to date, proposes a six-month moratorium on training systems more powerful than GPT-4. Google has also opened its own chatbot, Bard, to testing.

The letter cites a line from a blog post by OpenAI founder Sam Altman, who suggested that "at some point, before starting to train new systems, it will be important to obtain an independent assessment."

"We agree with this," the authors of the open letter write, "and that moment has now arrived. Powerful AI systems should be developed only once we are convinced that the consequences of their use will be positive and the associated risks manageable."

The letter details these potential risks to society and civilization as a whole. According to the authors, the danger lies in the fact that AI-based systems can compete with people, which could lead to economic and political upheaval.

The appeal contains four main questions that, according to the authors, humanity should ask itself:

"Should we allow machines to flood our information channels with propaganda and lies?"
"Should we let machines do all the work, including that which brings people pleasure?"
"Should we develop a mind of non-human origin, which in the future may surpass us in numbers and intellectual abilities, make us inferior and replace us?"
"Should we risk losing control of our civilization?"

Experts and representatives of the AI industry believe that six months should be enough to develop and implement reliable safety protocols. But the authors of the letter call on governments around the world to intervene if any companies fail to comply with the moratorium.

The authors of the letter are not the only ones who have recently expressed concerns about the uncontrolled development of AI technologies.

Concerns about the ethical and legal implications of cutting-edge technologies like ChatGPT were raised on Monday by the European Union's law-enforcement agency, Europol, which warned that such systems could be exploited for a range of online abuses, from phishing scams to cybercrime and the spread of disinformation.

"I think it is already clear to everyone that artificial intelligence opens up huge opportunities and can make a significant contribution to a better future for all of us, but we must not forget the risks that come with it: bias, disinformation and even manipulation. That is why we need to bring these technologies within rules and under control," privacy specialist Ivana Bartoletti said in an interview with the BBC. "We need to create an atmosphere of trust, so that users understand that they are talking to a machine and that these machines can make mistakes."

Supporters of artificial intelligence technologies say they can simplify many routine tasks and make information searches more accurate.

