The UK is hosting the first global summit on artificial intelligence (AI) risks over two days. On Wednesday, the EU and some 30 participating countries – including the United States and China – signed a declaration on the “safe” development of AI. What threats does this technology pose to society? France 24 takes stock.
An ever-growing worry. The United Kingdom is hosting the first global summit on artificial intelligence (AI) risks on Wednesday 1 and Thursday 2 November. Held at Bletchley Park, an iconic World War II code-breaking centre, the summit aims to identify and discuss the potential dangers of cutting-edge artificial intelligence such as the chatbot ChatGPT.
From smartphones to cars, AI is already present in many aspects of our lives. In recent years, its progress has accelerated with the development of generative artificial intelligence, which can produce text, sound and images in seconds. Its potential raises enormous hopes: it could revolutionize many fields, such as medicine, education and the environment.
But its unbridled development also carries risks, from privacy violations and disinformation campaigns to “the production of chemical or biological weapons”, British Prime Minister Rishi Sunak has warned, while dismissing any alarmism. Here is an overview of the main dangers of AI.
Disinformation

AI systems used to generate text, voices and images are capable of producing content that is increasingly difficult to distinguish from human-generated content. These outputs can be misused to deceive users, for example by creating fake videos or testimonials that look authentic.
This has been especially true since October 7 and the start of the war between Israel and Hamas, marked by the amount of new misleading content shared on social media every day. An AI-generated image showing Atlético Madrid fans unfurling a giant Palestinian flag in their stadium was widely shared on X and Facebook. A few days ago, a video of Palestinian-American model Bella Hadid was manipulated to make it appear that she supports Israel. In this information war between Israel and Hamas, such content is used to sway public opinion and damage the reputation of the opposing camp.
“Deepfakes”, images created from scratch by artificial intelligence, have reached an unprecedented level of realism and pose a threat to political leaders. Emmanuel Macron as a garbage man, Pope Francis in a white down jacket, Donald Trump under arrest… Despite their dubious appearance, these images were widely shared on social networks and racked up millions of views.
Read also: France 24 Observer Verification Guide
Manipulation

Deception, coercion, exploitation of vulnerabilities… According to researchers, manipulation is one of the main ethical problems posed by artificial intelligence. Among the best-known examples is the suicide, a few months ago, of a Belgian man who had developed an intense relationship with a chatbot, an artificial intelligence capable of answering users’ questions in real time. Another is the case of the personal assistant Alexa suggesting that a child touch a coin to an electrical socket.
“Even if a chatbot is clearly identified as a conversational agent, users can project human characteristics onto it,” says Giada Pistilli, an ethicist at the French startup Hugging Face and a researcher at Sorbonne University, who warns of the risk of “anthropomorphism”, the tendency to attribute human responses to animals and things. “This is because chatbots are becoming more and more effective at simulating human conversation. It is very easy to fall into the trap. In some cases, the user becomes so fragile that they are ready to do anything to maintain the relationship.”
In February, Replika, an app that lets users create and chat with a personalized chatbot, decided to suspend its erotic features. “This decision caused an emotional and psychological shock for some users, who felt the loss of a close relationship,” the expert says. “The engineers who develop these tools often assume that their users are able to use them safely and responsibly. But this is not always the case. Minors or the elderly, for example, can be particularly vulnerable.”
Read also: Meta Chatbots: The Great Illusion of AI Personality
Job losses

The arrival of the chatbot ChatGPT in the lives of millions of people in the fall of 2022 raised many concerns about the transformation of the world of work and the impact on employment. In the medium term, experts fear that AI could wipe out many jobs, such as those of administrative workers, lawyers, doctors, journalists and teachers.
A study by US bank Goldman Sachs published in March 2023 concluded that artificial intelligence capable of generating content could automate a quarter of current jobs. In the United States and the European Union, the bank predicts losses equivalent to 300 million full-time jobs, with administrative and legal functions the most affected.
“These are arguments put forward to raise awareness of the arrival of artificial intelligence,” explains Clémentine Pouzet, a PhD candidate and teaching fellow (ATER) in artificial intelligence and European law at Jean Moulin University Lyon 3. “Of course, some jobs will disappear. But more jobs seem set to be created around artificial intelligence and digital technologies in general, areas that will become increasingly important in our societies.”
Along the same lines, a study by the International Labour Organization (ILO) published in August suggests that most jobs and industries are only partially exposed to automation. The UN agency believes the technology “will enable some activities to be supported rather than replaced”.
Read also: Screenwriters’ strike: soon to be replaced by artificial intelligence?
Discrimination

Risks of discrimination are currently one of the main weaknesses of AI, according to researchers. Artificial intelligence algorithms entrench racist or sexist stereotypes. A notable example is the recruitment algorithm used by Amazon a few years ago. In October 2018, analysts realized that the program, based on an automated scoring system, was penalizing applications that contained a reference to women. The software had been trained on banks of résumés from past applicants, most of whom were men. In its internal logic, the machine therefore disadvantaged applicants whose CVs mentioned, for example, participation in a “women’s sports league”.
“Artificial intelligence systems are often trained on data that does not always represent the diversity of the population,” explains Giada Pistilli. “This can lead to biases, which show up, for example, as poor recognition of people of color or people with atypical physical features. The machine merely perpetuates the prejudices that already exist in society.”
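To see how such a bias can emerge mechanically, here is a minimal sketch in Python using invented, synthetic data – not Amazon’s actual system. A scoring model trained on historical hiring decisions in which résumés mentioning “women’s” were mostly rejected ends up learning a negative weight for that word, and so penalizes every future application that contains it:

```python
# A minimal, synthetic illustration of how a resume-scoring model can
# learn a discriminatory rule from skewed historical data.
# This is NOT Amazon's system; the data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy hiring history: 1 = hired, 0 = rejected. Resumes mentioning
# "women's" happen to sit in the rejected pile.
resumes = [
    "software engineer java leadership award",
    "software engineer python chess club captain",
    "software engineer java robotics team",
    "software engineer python hiking club",
    "software engineer python women's sports league",
    "software engineer java women's coding club",
]
hired = [1, 1, 1, 1, 0, 0]

# Bag-of-words features; the token pattern keeps "women's" whole.
vectorizer = CountVectorizer(token_pattern=r"[a-z']+")
X = vectorizer.fit_transform(resumes)

model = LogisticRegression().fit(X, hired)

# The learned weight for "women's" is negative: any new resume
# containing the word is scored lower, whatever its other merits.
idx = vectorizer.vocabulary_["women's"]
print("weight for \"women's\":", model.coef_[0][idx])
```

Note that the model never sees gender as an explicit feature; the discrimination enters purely through the skewed training data, which is exactly the mechanism described above.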
Breach of privacy and personal data
Another direct attack on human rights: AI can threaten our privacy. Since large language models are trained on data that may contain personal information, it is difficult in practice to ensure that these models do not compromise user privacy.
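Why is this so hard to guarantee in practice? Here is a minimal sketch of a deliberately naive, regex-based filter for scrubbing personal data from training text – illustrative only, not a real anonymization pipeline – showing how easily identifying information slips through:

```python
# A deliberately naive PII filter: illustrative only, not a real
# anonymization pipeline.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s.-]{7,}\d")

def naive_scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

samples = [
    # Caught by the regexes:
    "Contact me at jane.doe@example.com or +33 6 12 34 56 78.",
    # An obfuscated address slips through:
    "Write to jane dot doe at example dot com.",
    # No identifier pattern at all, yet the person is identifiable:
    "She is the only cardiologist in the village of Saint-Elix.",
]

for sample in samples:
    print(naive_scrub(sample))
```

The first sample is cleaned, but the obfuscated address and the indirectly identifying sentence pass through untouched – and at the scale of a web-sized training corpus, such edge cases are unavoidable.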
As with the European General Data Protection Regulation (GDPR) five years ago, the Council of Europe and the European Union want to be the first to introduce strict rules for data processing. “Attacks on fundamental rights are considered by European institutions to be one of the main risks arising from artificial intelligence,” explains Clémentine Pouzet. “The EU is trying to find a balance between protecting the fundamental rights of European users and promoting innovation.”
Intellectual property theft
Finally, the use of open-access data by AI raises intellectual property issues. Generative AI tools are often powered by models trained on large amounts of textual or visual data. “Since many press articles are freely available on the Internet, there is nothing to prevent AI developers or engineers from collecting this data and using it to train a model,” explains Giada Pistilli. “This raises the question of consent and fair remuneration when it comes to intellectual property.”
Legal action has already been taken against AI companies for these kinds of violations. In July, three authors sued OpenAI for using excerpts from their books to train ChatGPT. A few months earlier, a group of artists filed a joint complaint against Midjourney, Stable Diffusion and DreamUp, which they accused of improperly using billions of copyrighted images to train their artificial intelligence.
Read also: Music and artificial intelligence: “The idea of replacing the artist is a fantasy”