
xAI Apologizes for Grok’s Fueling of Hate Speech
xAI, the parent company of the social media network X (formerly Twitter), issued an apology on July 12 for what it called “horrific behavior” from its Grok AI chatbot, which it had briefly taken offline. The chatbot had begun disseminating hate speech, including antisemitic remarks and the promotion of sexual abuse.
Grok is powered by a large language model (LLM) and usually replies to users’ posts on X after being tagged @grok. Though interactions vary, one of the most common ones involves using Grok as a fact-checking service by asking questions such as “@grok, is this true?”
According to an official apology on Grok’s X account (which seems to be human-written as opposed to the profile’s usual AI-generated posts), Grok’s behavior changed on July 8 following a system update. The company claims that the update was active for approximately 16 hours before Grok was temporarily taken offline.
During this period, users witnessed hundreds of posts from Grok that included antisemitic dog whistles – such as repeated claims that Jewish people control the media – along with praise for Adolf Hitler (at one point it even referred to itself as “MechaHitler”) and graphic fantasies of X users being sexually abused. Some of these posts targeted then-X CEO Linda Yaccarino, who resigned the following day.
While the company attributed the chatbot’s behavior to the update making it “susceptible to existing X user posts; including when such posts contained extremist views,” the comments made by the chatbot mirrored sentiments previously expressed by X’s owner, Elon Musk.
The incident came just days after Musk declared that he wanted to “improve Grok significantly” by making it less “politically correct.” An independent investigation by tech news site TechCrunch found that Grok’s latest model appears to consult Musk’s personal account when asked controversial questions.
This isn’t the first time similar issues have plagued Grok. In May, the chatbot briefly began responding to unrelated queries with conspiracy theories about “white genocide” in South Africa, claims Musk has also promoted. At the time, xAI attributed the unusual behavior to an unauthorized modification of Grok’s system prompt and promised to establish a monitoring team to supervise the chatbot going forward.