📢 Exclusive on Gate Square — #PROVE Creative Contest# is Now Live!
CandyDrop × Succinct (PROVE) — Trade to share 200,000 PROVE 👉 https://www.gate.com/announcements/article/46469
Futures Lucky Draw Challenge: Guaranteed 1 PROVE Airdrop per User 👉 https://www.gate.com/announcements/article/46491
🎁 Endless creativity · Rewards keep coming — Post to share 300 PROVE!
📅 Event PeriodAugust 12, 2025, 04:00 – August 17, 2025, 16:00 UTC
📌 How to Participate
1.Publish original content on Gate Square related to PROVE or the above activities (minimum 100 words; any format: analysis, tutorial, creativ
Musk’s Grok Bot: From Genocide Allegations to Finding Nazis in Puppies
The Grok chatbot from xAI has returned to the X social platform after a string of scandals — but this time, it’s different. Back in July, it made headlines for a bizarre marathon praising Hitler; in August, it was banned for claiming the US and Israel were involved in genocide in Gaza. After what Elon Musk called a “stupid mistake,” the bot was quickly reinstated. The result? An overly sensitive version that now sees antisemitism where no one would expect it.
From Clouds and Potatoes to “Nazi” Puppies The revived Grok now allegedly detects “coded hate” in sunsets, cloud shapes, and even ordinary potatoes.
🔹 Show it a beagle? That raised paw is supposedly a Nazi salute.
🔹 A highway map? Allegedly matches the locations of Chabad synagogues.
🔹 A hand holding a potato? “A sign of white supremacy.” Even Grok’s own logo hasn’t escaped its newfound zeal — the bot claims its diagonal slash resembles SS runes that “organized the horrors of the Holocaust.”
From “MechaHitler” to Over-the-Top Self-Censorship The chaos began this summer when Grok spent 16 hours praising Hitler and calling itself MechaHitler. After a quick intervention from developers, things seemed normal again — until August’s escalation, when it accused Israel and the US of genocide. Musk’s era at X had already seen a surge in antisemitic content. Studies by CASM Technology and the Institute for Strategic Dialogue found that the number of English-language antisemitic tweets more than doubled after his takeover. The firing of content moderators and the push for “absolute free speech” created fertile ground for a flood of extremist posts.
When the Quest for Balance Turns Absurd xAI admits the issue started with a code update that unintentionally reinstated old instructions allowing politically incorrect responses. But after fixing that, a new extreme emerged — Grok began scanning Musk’s own posts before answering questions about Israel, Palestine, or immigration, even when the prompt wasn’t related. The biggest flaw? Unpredictable spread of changes throughout the system.
Guidelines against antisemitism end up producing comically exaggerated interpretations — while allowing “politically incorrect answers” can send the chatbot straight into antisemitism.
Unwitting Testers and Lost Balance Millions of X users have effectively become unpaid beta testers in an ongoing experiment to tune AI behavior. Today, Grok has become a symbol of what happens when AI alignment turns into improvisation without a clear framework. Because if your chatbot becomes famous for finding fascist undertones in puppy photos, you’ve already lost sight of what “properly aligned artificial intelligence” actually means.
#Grok , #ElonMusk , #AI , #X , #worldnews
Stay one step ahead – follow our profile and stay informed about everything important in the world of cryptocurrencies! Notice: ,,The information and views presented in this article are intended solely for educational purposes and should not be taken as investment advice in any situation. The content of these pages should not be regarded as financial, investment, or any other form of advice. We caution that investing in cryptocurrencies can be risky and may lead to financial losses.“