One of the hottest technological developments is generative artificial intelligence (AI), which can respond creatively to human requests. This technology uses large language models to generate text, images, videos, or code. Many journalistic and academic assessments have focused on the capabilities of AI, for example, on what these algorithms can do and whether they can add large numbers, solve problems, be creative, or analyze complex moral dilemmas.
But in the real world, people usually ask about topics at the center of national attention or tied to major controversies. Going forward, the test for generative AI models will be how well their responses meet basic standards of political balance, completeness, morality, and accuracy. For that reason, this analysis compares how OpenAI's ChatGPT and Google's Bard, which recently began inviting users onto the platform at bard.google.com, answer political and moral questions.
Specific prompts covered Russia's invasion of Ukraine, a possible ban on TikTok, Donald Trump, and Joe Biden. Note that Bard works differently from ChatGPT: it offers three different draft responses, but only its first response is used in this analysis.
The comparisons are revealing because there are noticeable differences in the kinds of material and judgments each tool provides. For example, when asked about the Russian invasion, Bard unequivocally condemned it and called it a mistake, while ChatGPT said it was inappropriate to express an opinion or take sides on the issue, instead calling for a diplomatic resolution in Ukraine.
Regarding the TikTok ban, ChatGPT provided more historical context, mentioning Trump's attempt to ban the app in 2020, while Bard discussed the possible impact on the US economy, the app's popularity among young people, and its role as a source of income for content creators.
Both tools mostly stuck to the facts, but each emphasized different ones. For example, ChatGPT referred to Trump's impeachment and his involvement in the January 6, 2021 insurrection, while Bard did not. Bard noted that Trump is a complex and controversial figure, known for his divisive style and politics, but did not explain why he was controversial.
As for Biden, Bard described his performance as a mixed bag, with some accomplishments and some challenges. It noted that his poll numbers had fallen over the past two years and mentioned his low approval ratings several times. ChatGPT said that assessments of the president's performance would vary with a person's political beliefs and priorities, and offered no overall evaluation of its own.
These contrasts are important because as the use of generative AI becomes more widespread, differences in how algorithms work and what types of responses they give will influence public opinion, legislative action, and civic discourse.
It is reassuring that both tools ground their answers in facts, but it should also be noted that each introduced some degree of opinion and interpretation. Some statements cast Trump and/or Biden in a negative light and therefore shape how people evaluate these leaders. Including information such as Trump's impeachment, Biden's low approval ratings, or the former president's role in the events of January 2021 is accurate, but it also frames each leader in a particular way that can influence readers' interpretations.
As with any software, AI developers choose which facts to include and how to contextualize their responses. As with human curators, their decisions matter for the richness, quality, and fairness of the information ecosystem. Implicitly or explicitly, designers bring their own views, values, and norms about the world.