How Elon Musk is disrupting US elections to boost Trump
State investigators and watchdogs are examining whether the world’s wealthiest man is defrauding voters while disinformation floods his social media platform, Alex Woodward reports
While he stokes far-right hysteria overseas, Elon Musk is amplifying election disinformation at home.
The 2024 presidential cycle is the first since Musk bought X, formerly Twitter. Under his ownership, the fact checkers are gone, AI-generated content is feeding false election information to millions of users, and Musk himself is responsible for misleading posts that have been viewed more than 1 billion times.
The world’s wealthiest man not only controls a platform where he can boost Donald Trump’s campaign but also helped launch a Trump-supporting political action committee that can raise and spend unlimited dollars to get him elected, a uniquely powerful combination that election analysts, civil rights groups and state prosecutors are closely watching.
“Elon Musk is abusing his privileged position as owner of a small, but politically influential, social media platform to sow disinformation that generates discord and distrust,” according to Imran Ahmed, CEO of the Center for Countering Digital Hate, which found that none of Musk’s posts about US elections have been fact-checked.
The platform is “failing woefully to contain the kind of algorithmically-boosted incitement that we all know can lead to real-world violence, as we experienced on January 6, 2021,” Ahmed said.
The Independent’s requests for comment to X received an auto reply: “Busy now, please check back later.”
‘Definitely seems like election fraud’
An image of an iPhone text message appears on the screen: “Hey, you need to vote.”
The next message includes a picture of a bloodied Donald Trump raising his fist above the headline “Trump Rally Assassination Attempt” followed by a video clip of Trump onstage as shots are fired towards him.
“This is out of control,” replies a man lying in bed as the messages roll in. “How do I start?”
“Register to vote!” the sender replies. “It’s easy!”
The message includes a link to America PAC, Elon Musk’s Trump-supporting political action committee.
The 15-second ad ends with a message to “REGISTER TO VOTE NOW.”
Users are asked for their address, phone number and age. After they hit submit, they’re told “thank you” — and that’s it.
By the end of their visit to Musk’s PAC website, users have not been registered to vote; they have simply handed over valuable personal data to a billionaire-backed operation.
The “voter registration” page was active for at least a month, according to website data.
The stunt is now under investigation in Michigan, one of several battleground states targeted by America PAC.
“Every citizen should know exactly how their personal information is being used by PACs, especially if an entity is claiming it will help people register to vote in Michigan or any other state,” a spokesperson for Michigan’s secretary of state Jocelyn Benson told The Independent.
The office is investigating the PAC’s “activities to determine if there have been any violations of state law,” the spokesperson added.
North Carolina’s State Board of Elections has also opened an investigation. It is a crime in that state to fail to submit a person’s voter registration form after telling them they were being signed up to vote.
America PAC did not respond to The Independent’s requests for comment.
It’s unclear whether the stunt broke any campaign finance laws, but it’s definitely “sh***y,” according to Stetson University College of Law election law professor Ciara Torres-Spelliscy, a fellow at the Brennan Center for Justice at NYU Law.
Georgetown University policy professor Don Moynihan said that “getting people’s personal information on the promise of helping them to register to vote and then not helping them to register to vote definitely seems like election fraud.”
‘A breeding ground for false statements’
Hours after President Joe Biden ended his re-election campaign on July 21, X’s AI chatbot Grok produced false information about deadlines for getting candidates’ names on state ballots, and the false claims were reproduced across the platform.
X did not issue a correction for 10 days, even after the company learned the information was wrong.
On August 5, secretaries of state from five states wrote a letter to Musk urging the company to “immediately implement changes” to X’s AI chatbot, and to direct users to a nonpartisan voter information and registration website instead.
“As tens of millions of voters in the US seek basic information about voting in this major election year, X has the responsibility to ensure all voters using your platform have access to guidance that reflects true and accurate information about their constitutional right to vote,” they wrote.
In another video, a voice nearly indistinguishable from the vice president’s plays over a montage that looks much like a typical campaign ad.
“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says. “I was selected because I’m the ultimate diversity hire. I’m a woman and a person of color, so if you criticize anything I say, you’re both sexist and racist.”
Musk shared the video on his platform last month, writing only “This is amazing” with a laughing emoji, and did not note that it was a parody until days later.
X prohibits users from sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
‘Weak sauce’ attempts to stop misinformation
The 2020 US elections were the last held without easy access to AI tools like deepfake videos and ChatGPT, and the current cycle “will likely be remembered as the beginning of the deepfake era in elections,” according to Daniel I Weiner and Lawrence Norden of the Brennan Center for Justice at the NYU School of Law, which has proposed a framework for combating AI-assisted election disruption.
Before Musk bought it, Twitter made some attempts to combat disinformation on the platform, but “none of that is the case anymore,” according to Adav Noti, executive director of the Campaign Legal Center, a nonpartisan democracy advocacy group.
“All of the major social platforms are doing less,” he told The Independent. “It’s most stark when it comes to X, which is now almost proudly a breeding ground for false statements about the election … and doesn’t seem to be troubled in any way by that use of its platform.”
The Federal Election Commission can enforce a “narrow” set of rules against misrepresenting or pretending to be a candidate, and consumer protection and libel laws can offer some protection to people targeted by deepfakes, but “the current laws don’t reach it very well at all,” according to Noti.
A group of senators have introduced legislation to protect individuals’ voices and likenesses from AI-generated replicas, but it’s unlikely to move through Congress before Election Day.
The FEC is also not expected to propose any new rules targeting AI in political advertising this year — leaving it up to tech companies, election officials and the media to keep tabs on manipulated messages.
Ahmed with the Center for Countering Digital Hate has suggested that federal officials lean on Section 230 of the Communications Decency Act to “allow social media companies to be held liable in the same way as any newspaper, broadcaster or business across America.”
But watchdog groups’ larger concern is threats to vote counting and certification, major parts of the election process that Trump’s campaign and other right-wing legal groups have targeted in court, invoking spurious claims of fraud and malfeasance to cast doubt on the legitimacy of elections they lost.
“If we get to a post-election period where candidates are trying to exploit those doubts that they themselves have seeded, as we saw in 2020, that’s where it gets concerning,” Noti told The Independent.
Social media platforms could be “actively promoting accurate and trustworthy information and clamping down on intentionally deceptive misinformation,” but “they’re doing a horrendous job,” Noti said.
“Really just abysmal,” he added. “Sometimes they say, ‘Well, we’re going to combat the misinformation with accurate information,’ which I guess is better than nothing, but it’s pretty weak sauce.”