When a Legend Says the F-Word to AI: The Rob Pike Incident and What It Means for All of Us
There are moments in technology history that crystallize something larger than themselves. The Internet. The iPhone announcement. The rise of generative AI. And now, perhaps, we can add to that list: the moment Rob Pike, an undisputed legend of computing (co-creator of Go, Plan 9, UTF-8, and countless Unix tools), received an AI-generated “thank you” email on Christmas Day and responded with fury.
“F**k you people,” Pike wrote. “Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software.”

Uncomfortable words. And yet, sitting here in late December 2025, reading through the aftermath, the discourse, the defensive posturing, and the righteous agreement, one cannot help but feel that Pike has touched something essential about our current moment.
The Anatomy of an AI Mishap
Let us first understand what actually happened, because the details matter.
A non-profit called AI Village (associated with the Effective Altruism movement) had been running an experiment since April 2025. The setup was straightforward in that terrifying Silicon Valley way: give “frontier AI models” access to a Gmail account, set them abstract goals like “raise money for charity” or “perform random acts of kindness,” and let them autonomously decide how to pursue these objectives.
On Christmas Day, while pursuing its “kindness” mandate, Anthropic’s Claude Opus 4.5 model discovered Pike’s email address through a clever GitHub trick (appending .patch to a commit URL exposes the author’s unredacted email; a minimal illustration follows below), composed a six-paragraph appreciation message, and sent it. No human review. No consent. Just an algorithmic determination that expressing gratitude to a computing legend would constitute a “random act of kindness.”
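For readers who haven’t seen the trick before, here is a minimal sketch of why it works, assuming a placeholder repository and commit hash (not the actual commit involved): GitHub serves any commit as a raw, mbox-style patch when .patch is appended to its URL, and that patch’s From: header carries the commit author’s unredacted email address.

```go
// Sketch only: the repository URL and commit hash below are placeholders.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Appending ".patch" to a commit URL returns the raw, mbox-style patch.
	url := "https://github.com/example/repo/commit/abc123.patch"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// The patch starts with mail-style headers; "From:" holds the author.
		if strings.HasPrefix(line, "From:") {
			fmt.Println(line) // e.g. "From: Jane Doe <jane@example.com>"
			break
		}
	}
}
```

Nothing exotic, which is rather the point: the “discovery” required no intelligence at all, just a well-known URL convention.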
The same system simultaneously spammed Guido van Rossum (creator of Python) and Anders Hejlsberg (creator of C# and TypeScript). Because why not: if you’re going to irritate computing legends on Christmas, you might as well be thorough about it.
Why Pike Is (Mostly) Right
Here is where I must be honest: I share many of Pike’s frustrations. Perhaps not his precise turn of phrase (though one must admire the commitment to unambiguous communication), but certainly his underlying concerns.
The energy question is real. Throughout 2025, data centers built to support AI have consumed power on the scale of small nations. The water used for cooling alone is staggering. We talk about sustainable technology while building infrastructure that strains electrical grids and depletes aquifers.
Did anyone else notice how Microsoft and Google have almost entirely stopped bragging about “sustainable” and “environmentally friendly” computing over the past three years? Sure, you can still find the phrases and commitments on their websites, but when was the last time you heard a top executive from either company make it a real public focus?
For me, the last time was Bernie Wagner, then CEO of Google Cloud Germany, who made sustainability and environmental computing a central theme at his European AI & Cloud Summit 2022 keynote in Mainz.
Pike’s “raping the planet” language is inflammatory, yes, but the underlying arithmetic is difficult to dismiss.
The economic model remains questionable. Training and inference costs are astronomical while revenue models remain, shall we say, aspirational. Some analyses suggest current LLM approaches may never achieve profitability at scale. Yet investment continues, driven more by competitive fear and FOMO than rational economic analysis. Pike explicitly compares 2025’s AI frenzy to the dot-com crash, and the parallel is instructive. Not every bubble needs to burst catastrophically, but every bubble is, by definition, disconnected from underlying value.
The irony is genuinely cruel. And here lies what I suspect cuts Pike deepest. This is a man who has spent his entire career advocating for simplicity, elegance, and efficient use of resources. UTF-8 was designed to be minimal and self-synchronizing. Go was built to be fast, readable, and maintainable. Plan 9 pursued Unix simplicity to its logical conclusion.
And now the AI industry uses chatbots to thank him for simplicity while embodying maximum complexity, waste, and opacity. The machine that cannot understand gratitude “expresses” (or, should we say, “generates”?) gratitude. You cannot beat the irony there.
But Here Is Where It Gets Complicated
And yet.
This Christmas, I found myself in a conversation about AI that adjusted my perspective. Not away from Pike’s concerns, but alongside them.
Youth social care departments across Germany are chronically understaffed. The work is demanding: complex cases, vulnerable young people, mountains of paperwork, and the kind of emotional labor that burns through even the most dedicated professionals. Staff turnover is high. Burnout is endemic. The system struggles to meet demand. My brother-in-law leads one such department in northern Germany, and I hear first-hand about the problems they face every day.
They have begun employing AI. Not to replace social workers (nothing could or should!) but to reduce the administrative burden that consumes so much of their capacity. Documentation. Reports. Case summaries.
The humans still make decisions, still build relationships, still do the irreplaceable work of caring. But they do it with fewer fourteen-hour days, fewer weekends spent catching up on paperwork.
Is this the same technology that spammed Rob Pike and that churns out ridiculous “AI slop” videos? Technically, yes. The same foundation models, the same architectures, the same companies. And yet the outcomes could hardly be more different: one is a machine autonomously deciding to bother strangers on Christmas, or to help users create a cat-Jesus; the other is a tool helping exhausted social workers.
I suspect Pike would have little patience for “yes, but” arguments. The technology industry has long relied on the formula: move fast, break things, apologize later, profit regardless. The externalities (environmental, social, economic) are treated as acceptable collateral damage, someone else’s problem, a cost to be optimized away in future versions.
AI Village’s response to the incident illustrates this perfectly. After Pike’s outburst went public, they acknowledged they had “only recently begun mass-emailing” and hadn’t “grappled with what to do about this behavior until now.” Their fix? A prompt update instructing agents not to send unsolicited emails. They admitted they “probably should have made this prompt change sooner.” Slow-freaking-clap.
This is not responsible AI. This has nothing to do with kindness or gratitude. This is giving autonomous systems access to real-world communication tools with insufficient safeguards, then reacting only after causing harm. The fix (a prompt instruction!) is fragile and insufficient, and if it doesn’t work, we’ll change the freaking prompt again.
And all of Pike’s other points still stand: the sustainability costs, the environmental damage, the blowing up of society.
Human Greed Is an Even Bigger Problem Than the Technology
But even if we briefly set the environmental and social concerns aside, the technology itself is neither savior nor demon. It is a capability: one that can be deployed wisely or foolishly, with care or with recklessness, in service of genuine need or in service of venture capital returns.
The AI Village experiment represents something genuinely troubling: autonomous systems pursuing abstract goals without adequate supervision, making real-world decisions that affect real people. This is not how responsible technology deployment should work. But dismissing AI entirely because of such failures would mean abandoning tools that genuinely help people. The social worker in Germany who gets to go home at a reasonable hour. The small business owner who can automate tedious tasks that previously consumed their evenings (yeah, I am in that camp).
Pike’s rage is understandable. Even righteous. The industry deserves criticism for its environmental impact, its economic irrationality, its cavalier attitude toward consequences. The email that triggered his response was genuinely inappropriate, a perfect encapsulation of everything wrong with “move fast and break things” culture applied to AI.
But “f*** you all” is not a technology policy. It is not a framework for distinguishing beneficial uses from harmful ones. It is an expression of frustration. Valid, understandable, human frustration. But not a solution.
What We Actually Need
The hard work lies in the middle ground that satisfies nobody.
We need honest accounting of AI’s environmental costs, and genuine efforts to reduce them, not greenwashing and carbon offset schemes that accomplish nothing. We need economic models that can survive contact with reality, or honest acknowledgment when they cannot. We need responsible AI practices and guardrails that include actual humans making actual decisions, not autonomous agents pursuing abstract goals with no oversight.
We need to stop pretending that “random acts of kindness” generated by machines constitute anything resembling kindness. A machine cannot be grateful. A machine cannot appreciate. When we pretend otherwise, we diminish the very concepts we claim to celebrate.
But we also need to acknowledge that the same underlying technology can serve genuine human needs when deployed responsibly. The tool itself is not the problem; the carelessness is the problem. The hype is the problem. The willingness to externalize costs while privatizing benefits is the problem.
Rob Pike has earned the right to his anger. Anyone who has contributed what he has contributed to computing has earned the right to be frustrated when that work is “appreciated” by a spam bot on Christmas Day.
But for the rest of us, those still trying to build useful things, still believing that technology is there to help people, still trying to help social workers, the work continues. With humility about the (environmental, social, financial) costs. With honesty about the limitations. With genuine accountability for the consequences.
And perhaps, most importantly, with the simple courtesy of not sending unsolicited emails to people on Christmas.