From the daily newspaper “THE GUARDIAN”: Technology and the ‘truth’: how the culture wars found a new battleground in AI


Nick Robins-Early, New York · 22 Aug 2023

‘I think [ChatGPT] was too biased and will always be. There will be no one version of GPT that everyone agrees is unbiased’
When Elon Musk introduced the team behind his new artificial intelligence company xAI last month, the billionaire entrepreneur took a question from the rightwing media activist Alex Lorusso. ChatGPT had begun “editorialising the truth” by giving “weird answers like that there are more than two genders”, Lorusso posited. Was that a driver behind Musk’s decision to launch xAI, he asked.
“I do think there is significant danger in training AI to be politically correct, or in other words training AI to not say what it actually thinks is true,” Musk replied. He had earlier told the event that his own company’s AI would be “maximally true”.
It is a common refrain from Musk, the world’s richest person, the CEO of Tesla and the owner of the platform formerly known as Twitter. “The danger of training AI to be woke – in other words, lie – is deadly,” he tweeted last December in a reply to Sam Altman, OpenAI’s co-founder and chief executive.
Musk’s relationship with AI is complicated. He has warned about the existential threat it poses for approximately a decade, and recently signed an open letter airing concerns it would destroy humanity, though he has simultaneously worked to advance the technology’s development. He was an early investor and board member of OpenAI, and has said xAI’s goal is “to understand the true nature of the universe”.
But his criticism of currently dominant AI models as “too woke” has added to a larger rightwing rallying cry that has emerged amid the boom in publicly available generative AI tools that began with the launch of ChatGPT last November. As billions of dollars pour into the arms race to create ever more advanced artificial intelligence, generative AI has also become one of the latest battlefronts in the culture war, threatening to shape how the technology is operated and regulated at a critical time in its development.
Republican politicians have railed against large AI companies in Congress and on the campaign trail. At a campaign rally in Iowa last month, Ron DeSantis, the Republican presidential candidate and Florida governor, warned that big AI companies used training data that was “more woke” and pushed a political agenda.
Conservative activists such as Christopher Rufo – who is generally credited with stirring the right’s moral panic around critical race theory being taught in schools – have warned their followers on social media that “woke AI” is an urgent threat. Major conservative outlets such as Fox News and National Review have amplified those fears, with the latter arguing that ChatGPT had succumbed to “woke ideology”.
The rightwing backlash echoes its pushback against content moderation policies on social media platforms. Much like those policies, many of the safeguards in place on AI models such as ChatGPT are intended to prevent use of the technology for the promotion of hate speech, disinformation or political propaganda. But the right has framed those content moderation decisions as a plot by big tech and liberal activists to silence conservatives.
Meanwhile, experts say, the right’s critiques of AI attribute too much agency to generative AI models, assuming the services can hold viewpoints as if they were sentient beings.
“All generative AI does is remix and regurgitate stuff in its source material,” said Meredith Broussard, a professor at New York University and the author of the book More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech. “It’s not magic.”
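Broussard’s point can be illustrated with a toy example. Below is a minimal sketch in Python (an illustration of the general statistical principle, not any production model’s actual code) of a bigram language model: it can only ever recombine word pairs it has seen in its training text, so its “views” are just the statistics of its source material.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training
# text, then generate by sampling from those observed pairs. The model
# can only recombine material from its corpus; it holds no views.
corpus = ("the model repeats what the corpus says "
          "and the corpus shapes what the model says").split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(10):
    if word not in followers:
        break
    word = random.choice(followers[word])  # sample a continuation seen in training
    output.append(word)

print(" ".join(output))
```

Real large language models operate on subword tokens with billions of learned weights rather than a lookup table, but the underlying dynamic is the same: what comes out mirrors the distribution of what went in.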
Lost in the discussions among rightwing critics, experts say, is the way AI systems tend to exacerbate inequalities and harm marginalised groups. Text-to-image models like Stable Diffusion create images that tend to amplify stereotypes around race and gender.
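Audits of this tendency typically work by generating many images from a deliberately neutral prompt and reviewing who the model depicts. Here is a minimal sketch of that procedure, using the open-source Hugging Face diffusers library (the checkpoint, prompt and sample size are illustrative assumptions, not details from the article):

```python
# Sketch of a stereotype audit: generate a batch of images from a prompt
# that specifies no race or gender, then inspect the results for skew.
# Requires a CUDA GPU; the checkpoint and prompt are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of a CEO"  # deliberately neutral wording
for i in range(20):
    image = pipe(prompt).images[0]
    image.save(f"ceo_{i:02d}.png")  # review the saved images by hand
```

Audits along these lines are how researchers distinguish a model that merely reflects stereotypes in its training data from one that amplifies them: if the outputs are more homogeneous than the underlying population, the model is exaggerating the skew.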
Still, the rightwing criticisms of AI leaders are having consequences. Coming amid a push by Republicans against academics who monitor disinformation, and a lawsuit by Musk against the anti-hate-speech organisation the Center for Countering Digital Hate, whose work the billionaire says has resulted in tens of millions of dollars in lost revenue on Twitter, the “anti-woke AI” campaign is putting pressure on companies in the sector to appear politically neutral.
After the initial backlash, OpenAI published a blogpost in February, apparently aimed at appeasing critics across the political spectrum, vowing to invest resources to “reduce both glaring and subtle biases in how ChatGPT responds to different inputs”.
Speaking in March with the podcast host Lex Fridman, who has become popular among anti-woke cultural crusaders such as Jordan Peterson and tech entrepreneurs such as Musk, Altman said of ChatGPT: “I think it was too biased and will always be. There will be no one version of GPT that everyone agrees is unbiased.”
Rightwing activists have also made several rudimentary attempts at launching their own “anti-woke” AI. The CEO of Gab, a social media platform favoured by white nationalists and other members of the far right, announced that his site was launching its own AI service. “Christians must enter the AI arms race,” Andrew Torba said, accusing existing models of having a “satanic worldview”. A chatbot on the platform Discord called “BasedGPT” was built on LLaMA, Meta’s leaked large language model, but its output was often factually inaccurate or nonsensical.
These attempts have so far failed to gain mainstream traction.
It remains to be seen where xAI is heading. The company has recruited an all-male staff of researchers from companies such as OpenAI and institutions such as the University of Toronto, but it is unclear what ethical principles it will operate under beyond Musk’s “maximally true” claim. It has signed up the researcher Dan Hendrycks of the Center for AI Safety as an adviser; Hendrycks has warned about the long-term risks AI poses to humanity, a fear Musk shares.
Musk, xAI, Hendrycks and the Center for AI Safety could not be reached for comment.
Despite futuristic speculation from figures such as Musk about AI models learning biases as they become sentient, potentially omnipotent forces, some researchers believe the simpler explanation for why these models don’t behave as people want is that they are glitchy, prone to error and reflective of current political polarisation.
“The internet is very wonderful and also very toxic,” Broussard said. “Nobody’s happy with the stuff that’s out there on the internet, so I don’t know why they’d be happy with the stuff coming out of generative AI.”