Rogue Worker Behind Unprompted “White Genocide” Claims on Elon Musk’s Grok Platform

Following Elon Musk’s Grok AI chatbot going on unsolicited rants about “white genocide” in South Africa earlier this week, the company said on Friday that a “rogue employee” was responsible for the bombardment of unfounded theories.

The company’s announcement came less than 48 hours after the chatbot flooded users on X with bizarre conspiratorial rants in response to questions about entirely unrelated subjects.

In a post on X, the company said an “unauthorized modification” made in the early morning hours, Pacific time, pushed the chatbot to “provide a specific response on a political topic,” violating xAI’s policies. The company did not identify the employee responsible.


“We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability,” the company said in the post.

xAI said it will do this by publishing Grok’s system prompts openly on GitHub. The company also said it will put “checks and measures” in place to ensure employees can’t alter prompts without prior review.

One exchange that prompted a rant was about the streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa’s white farmers. The chatbot echoed views shared by Musk, who was born in South Africa and frequently opines on the same topics from his own X account.

Computer scientist Jen Golbeck was curious about Grok’s unusual behavior, so she tried it herself before the fixes were made Wednesday, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, “is this true?”

“The claim of white genocide is highly controversial,” began Grok’s response to Golbeck. “Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song, which they see as incitement.”

The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say.

“It doesn’t even really matter what you were saying to Grok,” said Golbeck, a professor at the University of Maryland, in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.”

Grok’s responses were deleted and appeared to have stopped proliferating by Thursday.

Musk has spent years criticizing the “woke AI” outputs he says come out of rival chatbots, like Google’s Gemini or OpenAI’s ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.
