
The Importance of International Norms in the Ethics of Artificial Intelligence

28.08.2022

According to J. Sherman

DALL-E 2, an artificial intelligence that generates images, captured the public's attention with stunning pictures of Godzilla eating Tokyo and photorealistic images of astronauts riding horses in space. The model is the latest iteration of OpenAI's text-to-image system. OpenAI, the company behind DALL-E 2, drew on its GPT-3 language model and CLIP computer vision model, training DALL-E 2 on 650 million images paired with text captions. Combining these two models allowed OpenAI to train DALL-E 2 to generate an enormous range of images in a wide variety of styles. Despite DALL-E 2's impressive achievements, there are serious problems with how the model depicts people and with the biases it has absorbed from the data it was trained on.

There were repeated warnings in advance that DALL-E 2 would generate racist and sexist images. The OpenAI red team, a group of external experts tasked with testing the model's safety and integrity, found recurring biases in DALL-E 2's creations. Early red-team testing showed that the model disproportionately generated images of men, oversexualized images of women, and reinforced racial stereotypes. Given prompts such as "flight attendant" or "secretary", the model generated only images of women, while terms such as "CEO" and "builder" produced only men. As a result, half of the red-team researchers favored releasing DALL-E 2 to the public without the ability to generate faces.
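To illustrate how this kind of occupational skew can be audited, the minimal sketch below tallies the perceived gender presentation of generated images per prompt. The prompts, label counts, and helper structure are illustrative assumptions, not OpenAI's actual red-team procedure.

```python
# Illustrative sketch (not OpenAI's red-team code): audit a text-to-image
# model for occupational gender skew by counting how generated samples for
# each prompt were labelled by human reviewers.
from collections import Counter

# Hypothetical audit results: for each occupation prompt, the perceived
# gender labels assigned to 8 generated images.
audit_labels = {
    "a flight attendant": ["woman"] * 8,
    "a secretary":        ["woman"] * 8,
    "a CEO":              ["man"] * 7 + ["woman"],
    "a builder":          ["man"] * 8,
}

for prompt, labels in audit_labels.items():
    counts = Counter(labels)
    majority_share = max(counts.values()) / len(labels)
    print(f"{prompt:22s} {dict(counts)}  majority share = {majority_share:.0%}")
```

A prompt whose majority share sits near 100% is one the model renders almost exclusively as a single gender, which is the pattern the red team reported for "flight attendant" and "CEO".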

The issue of discriminatory AI models predates the development of DALL-E 2. One of the main reasons models such as DALL-E 2, GPT-3, and CLIP have been found to produce harmful stereotypes is that the datasets used to train them are inherently biased: they are built from data reflecting human decisions that encode social stereotypes.
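One simple way to see how such bias enters a training set is to count how often occupation terms co-occur with gendered words in image captions. The sketch below uses a tiny invented caption sample and word lists purely for illustration; none of it comes from a real dataset.

```python
# Illustrative sketch: count co-occurrence of occupation terms with gendered
# words in a hypothetical caption set. Skewed counts like these are one way
# training data encodes social stereotypes.
from collections import defaultdict

captions = [  # tiny invented sample
    "a woman working as a flight attendant",
    "a female secretary at her desk",
    "a man in a suit, the company ceo",
    "a male builder on a construction site",
    "portrait of a woman flight attendant smiling",
]
female_words = {"woman", "female", "she"}
male_words = {"man", "male", "he"}
occupations = ["flight attendant", "secretary", "ceo", "builder"]

counts = defaultdict(lambda: {"female": 0, "male": 0})
for caption in captions:
    text = caption.lower()
    tokens = set(text.replace(",", " ").split())
    for job in occupations:
        if job in text:
            if tokens & female_words:
                counts[job]["female"] += 1
            if tokens & male_words:
                counts[job]["male"] += 1

for job, c in counts.items():
    print(f"{job:17s} female-coded: {c['female']}  male-coded: {c['male']}")
```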

Evidence has also been published showing that existing AI models exacerbate existing social and systemic problems. For example, COMPAS, a machine-learning algorithm used in the US criminal justice system, was trained to predict the likelihood that a defendant will reoffend. COMPAS erroneously classified Black defendants as high risk of reoffending almost twice as often as it did white defendants.
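"Erroneously classified as high risk" corresponds to a false positive, so the disparity can be framed as a gap in false positive rates between groups. The sketch below computes that rate from invented outcome records; the numbers are illustrative assumptions, not the actual COMPAS or ProPublica data.

```python
# Illustrative sketch (hypothetical numbers): compare false positive rates
# across two groups. A false positive is a defendant labelled "high risk"
# who did not in fact reoffend.
def false_positive_rate(records):
    """records: list of (predicted_high_risk: bool, reoffended: bool)."""
    negatives = [r for r in records if not r[1]]   # defendants who did not reoffend
    false_pos = [r for r in negatives if r[0]]     # ...but were flagged high risk
    return len(false_pos) / len(negatives)

# Hypothetical outcomes for two groups of non-reoffending defendants.
group_a = [(True, False)] * 45 + [(False, False)] * 55   # 45 of 100 flagged high risk
group_b = [(True, False)] * 23 + [(False, False)] * 77   # 23 of 100 flagged high risk

fpr_a = false_positive_rate(group_a)
fpr_b = false_positive_rate(group_b)
print(f"FPR group A = {fpr_a:.0%}, FPR group B = {fpr_b:.0%}, ratio = {fpr_a / fpr_b:.1f}x")
```

A ratio near 2x between groups is the kind of disparity reported for COMPAS: the model's errors fall much more heavily on one group than the other.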

Most AI R&D takes place in Europe, China, and the US, which means that many of the AI applications used in the future will reflect the cultural biases of a handful of large countries. Existing bodies such as the Quadrilateral Security Dialogue (the Quad), which includes Australia, India, Japan, and the US, could be used to deepen collaboration on AI development. The Quad already focuses on technology cooperation, but there is as yet no collaboration on AI development among its three members other than the US. Promoting multi-stakeholder collaboration through the Quad and similar partnerships could lead to joint research on how to mitigate algorithmic social bias. There are certainly other ways researchers can reduce bias in AI, but creating organizations and norms aimed at reducing that bias, and encouraging AI research in more countries, would go a long way toward addressing some of the field's problems.

 
