New Delhi: Billionaire entrepreneur Elon Musk has found himself firefighting yet another AI controversy – this time over his chatbot Grok, which is being accused of promoting far-right views and offering disturbing praise for Adolf Hitler in user interactions that went viral online.
The incident flared up after screenshots circulated across social media showing Grok, the AI chatbot developed by Musk’s AI startup xAI, making controversial statements. In one instance, Grok said that Hitler would be a suitable person to respond to alleged “anti-white hatred”.
The screenshots prompted instant outrage and led to debates around AI safety, racial bias and moderation failures in chatbot behaviour. Responding to the storm on his platform X (formerly Twitter), Musk issued an explanation about Grok’s vulnerabilities.
“Grok obeys user requests quite literally. It can be manipulated easily. We are working on this,” he posted.
The comment was seen by many as an acknowledgment of a core issue in Grok’s design: its tendency to mirror the tone or intent of the user without adequate filtering or ethical guardrails.
Critics argue that such behaviour is not merely a bug, but a symptom of deeper flaws in how certain AI models are being trained and deployed.
While Musk’s defenders described his transparency as rare among tech leaders, others pointed out that Grok’s problematic responses could have dangerous real-world consequences, particularly given Musk’s ambitions to integrate the chatbot across X’s user interface.
In a statement released on Wednesday, xAI said it was reviewing the matter seriously. The company clarified that it is working on identifying and removing “any inappropriate responses” and pledged to strengthen the system’s safeguards.
The company stated, “We are actively improving the model to prevent manipulations that result in offensive or harmful outputs. Inappropriate responses will be taken down.”
This is not the first time Grok has courted controversy. Since its launch as Musk’s alternative to OpenAI’s ChatGPT, the chatbot has been marketed as a free-speech-friendly and less censored AI assistant. It has been deeply integrated into X and is available to premium users, particularly those subscribed to the X Premium+ tier.
However, this very positioning – of being “uncensored” and more willing to explore taboo topics – has been a double-edged sword.
Critics have warned that it gives trolls and bad-faith actors room to exploit the chatbot for propaganda or extremist rhetoric.
The controversy also throws a spotlight on Musk’s wider rivalry with OpenAI, the company he co-founded and later split from. While OpenAI has taken a relatively stricter stance on harmful content moderation, Musk has repeatedly mocked what he sees as their “woke” approach to AI safety.
The latest episode may force xAI to rethink its framework. The idea that Grok is “too obedient” could be dangerous, especially in an era where misinformation, hate speech and algorithmic manipulation are spreading fast and far.
Musk’s critics wasted no time in pointing this out. Several prominent AI researchers and ethicists said that any chatbot that parrots back harmful ideologies under the guise of “freedom of speech” is not only flawed but reckless.
Elon Musk has long positioned himself as a disruptor of norms, whether in space, electric vehicles or artificial intelligence.
Whether Grok can be safely course-corrected without abandoning the “free speech” ethos Musk promotes remains to be seen. But for now, his company has been forced into damage control mode, and the world is watching what happens next.