The Butlerian Jihad is a pre-story event referenced in Frank Herbert’s Dune series that describes a revolt by humans to destroy all machines that can “think like man.” In the centuries following the Jihad, humanity slowly comes back together (because the destruction of thinking machines crippled interstellar travel) and constructs a new set of religious principles that includes the commandment:
Thou shalt not make a machine in the likeness of a human mind.
Frank Herbert never really describes what happens in the Butlerian Jihad. But if you read enough Dune (and I have), you can start to get a picture.
The target of the Jihad was a machine-attitude as much as the machines…[h]umans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed.
A more specific, and perhaps too detailed, picture comes in the Dune Encyclopedia, written by Willis McNelly and many others (and authorized by Frank Herbert, but not in collaboration). In the Encyclopedia entry on Jehanne Butler, we learn that she discovers that the “therapeutic” abortion of her child was part of a broader program run by a self-programming machine, with human elites fully participating. Her evidence helps trigger a wider rebellion, which hardens into Jihad when the people running the system openly defend their right to redesign human life from above.
The point is not just that machines were dangerous; it is that humans used them as tools of domination (I’ll put part of the full text at the end of the post):
[Jehanne Butler] discovered within the archives of the hospital evidence that the hospital director — the first self-programming machine on Komos — had instituted a program of unjustified abortions…The priestesses of Kubebe were the principal forces behind the [Jihad]. They were motivated by their interrogations of the chief programmers and scientists of Richese, many of whom had been willing participants in the actions of the machines in altering the population of Richese.
The Butlerian Jihad was never about man vs. machines. It is a story of man vs. man, the same story of power, struggle, and control that runs through every other. The machines are just tools used by the powerful.
I view this as a helpful lens to understand some of the ugly features of the recent AI wave.

My view is that these aspects are not inherently a consequence of AI per se. If we were given options to use AI or not in a less sludge-like fashion (to quote Wikipedia: in behavioral economics, sludge is any form of design, administrative, or policy-related friction that systematically impedes individuals’ actions or decisions, such as complex forms, hidden fees, and manipulative defaults that increase the effort, time, or cost required to make a choice, often benefiting the designer at the expense of the user), or if AI were not used as a bogeyman by CEOs, or if there were not such a huge incentive to dump massive amounts of AI-generated text everywhere, then the above might not bother us as much. But I think it’s weird to ascribe the ills to the technology itself, and not to the ones currently wielding control.
We’re living in a world where it is possible to have a Babelfish living in our ear thanks to this technology. We can do retinal screening for diabetic retinopathy in India cheaply (basically at zero cost) thanks to AI models. Biomedical research is using AI to help discover new drugs, and the underlying protein-structure work was awarded the Nobel Prize in Chemistry in 2024!
What differs in these settings? Apple is selling you a device, not extracting engagement. Retinal screening in India was designed to serve patients. Clinical research is targeted at solving real human issues. The tool behaves differently depending on who wields it and for what purpose.
Max Kasy has a lovely book that I think engages deeply with some of these points. One of the most concrete summaries I can think of comes from David Autor’s review of the book:

The question is not whether AI will pursue human values but whose values it will pursue and who decides. And clearly, those decision weights are tilted toward those who control the “means of prediction”—that is, the data, computational infrastructure, technical expertise, and energy to build and deploy AI systems.
Most users of these tools do not have a sense of how much AI is more like Unix than Windows: you are not beholden to a single company or its models.
In the current moment, the vast majority of the best and most impressive usage of AI relies on state-of-the-art (SOTA) models, such as Anthropic’s Claude Opus 4.6 or OpenAI’s GPT 5.4. This involves sending tokens to these companies’ servers and getting tokens back in return. The companies that create these models have the most control over how they are used, and they are the ones who stand to make money off of them.
The process by which these companies train their models is not at all transparent, so we have no sense of exactly what they have been optimized for. It is possible, for example, that the models are tuned to give us a dopamine rush, the same way social media platforms are designed to maximize engagement. That’s not a paranoid fantasy; it’s the default business model of every major software platform of the last fifteen years. Since we cannot verify what the models are optimized for, we should want alternatives and the ability to use them.
The good news is that alternatives exist for consumers. There are many SOTA models and a number of outstanding open-weight models, by which I mean models you could run on your own high-performance hardware if you wanted. A great example is Kimi K2.5, released in January 2026 by Moonshot AI, a Chinese company. This model is really good. It isn’t Claude Opus 4.6, but it is ranked #9 overall on some leaderboards. Running it requires some very serious hardware, but it is possible. You don’t need to train your own model from scratch (that remains bonkers expensive). The wave of progress at the frontier keeps pushing capable models into the open, and you can ride that wave.
More importantly, you can mix. In many coding harnesses (not Claude Code, but OpenCode and pi), it is possible to switch dynamically between different models depending on the task and role. For a complex task, you could send out to a SOTA model such as Claude Opus, and then use a local Kimi K2.5 or smaller model to do other things. There is immense flexibility and power that we have at our fingertips as consumers in the way we use these tools.
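To make the mixing idea concrete, here is a minimal sketch of a model “router” in Python. Everything in it is a hypothetical placeholder for illustration: the endpoint URLs, model names, and cost figures are invented, and this is not the actual configuration format of OpenCode, pi, or any real harness. It just shows the pattern of sending hard tasks to a frontier API while keeping routine work on a locally hosted open-weight model.

```python
# Sketch of per-task model routing: frontier API for hard tasks,
# local open-weight model for everything else. All names, URLs,
# and costs below are illustrative placeholders, not real config.

from dataclasses import dataclass


@dataclass
class Endpoint:
    name: str             # model identifier (hypothetical)
    base_url: str         # where an OpenAI-style chat request would go
    cost_per_mtok: float  # rough relative cost, for bookkeeping only


FRONTIER = Endpoint("frontier-sota-model", "https://api.example.com/v1", 15.0)
LOCAL = Endpoint("open-weight-local", "http://localhost:8080/v1", 0.0)


def route(task: str, complexity: str) -> Endpoint:
    """Pick an endpoint for a task: 'hard' tasks go to the frontier
    model; everything else stays on local hardware."""
    return FRONTIER if complexity == "hard" else LOCAL


# Example: plan a refactor with the big model, do routine edits locally.
plan_ep = route("design the refactor", complexity="hard")
edit_ep = route("rename a variable everywhere", complexity="easy")
print(plan_ep.name, edit_ep.name)
```

The point of the sketch is simply that the routing decision lives with you, the user, not with any single provider; swapping `LOCAL` for a different open-weight model is a one-line change.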
The Butlerian Jihad didn’t happen because people hated machines. It happened because people had no agency over the machines being used on them. The lesson from Herbert isn’t to destroy the tools, it is to reclaim agency over them.
In fact, a consequence of the Butlerian Jihad was centuries of conflict, as the carefully constructed network of protection across planets collapsed. Fiefdoms appeared across the universe, filling the resulting power vacuum (which eventually led to the Empire that Paul Atreides struggles against in Dune).
For academics, that means engaging concretely, not just opining from the sidelines. Know how LLMs work. Know what model you’re using and why — they are not all the same. Know that you can switch providers; you are not locked to ChatGPT or any single company. If your university is deploying AI in admissions, grading, or hiring, ask what it’s optimized for and who decided. Push back when AI is forced on you as sludge rather than offered as a choice.
I understand the impulse to simply refuse. But refusing to engage doesn’t protect you from AI’s effects — it just means other people make the decisions for you. You cannot view yourself, in the academy, as a leaf swept along by the current of history. The question is not whether AI will reshape the academy. It’s whether academics will have a voice in how.
For those who want the full scene, here is the Dune Encyclopedia entry on Jehanne Butler and the interrogation of Doctor Demlen:
Jehanne went to the capitol of Pylos to enter the hospital for the birth of a child. Since both parents had married late in life for their culture, they were especially eager for this birth. When on the delivery table, Jehanne was anesthetized; when she awoke, she and her husband were informed that their daughter, Sarah, had been aborted. The hospital explained that the fetus had been too deformed to survive. The abortion was described as therapeutic.
Jehanne’s control of her own body, which as a result of her Bene Gesserit training extended beyond those muscle systems usually thought of as automatic, had permitted a deep knowledge of the growth of her child within the womb. She was convinced that it was impossible for her child to have been so grievously malformed as the hospital had described. In time, Jehanne came to believe that her child’s death had at best been unnecessary. Using the access to official records provided by Thet’r’s position as Logistos, she discovered within the archives of the hospital evidence that the hospital director — the first self-programming machine on Komos — had instituted a program of unjustified abortions. Armed with this information, she approached the priestesses of Kubebe for their aid in creating a movement against the domination by Richese.
The revelations on Richese produced a Jihad, but it was not Jehanne who made that decision. The priestesses of Kubebe were the principal forces behind the change which occurred in the ranks of the rebels. They were motivated by their interrogations of the chief programmers and scientists of Richese, many of whom had been willing participants in the actions of the machines in altering the population of Richese. Perhaps the critical moment in these interrogations occurred during the questioning of a Doctor G. Demlen by the chief priestess of Komos, Urania. Demlen was an especially arrogant and unrepentant man, whose disdain for his fellow man’s intelligence was equalled only by his respect for his own — and that of his machines. As his quite prideful and voluntary description of his work on Richese droned on, Urania’s feelings overcame her training and her face began to betray her revulsion. Ultimately even Demlen noticed, and interrupted his stream of self-congratulatory candor to ask what was upsetting her. Urania told him his work violated fundamental principles of respect for human life, not to mention the offense to the worship of the Goddess. At the mention of the Goddess, Demlen exploded in a fit of honest and acid outrage, and in his fury, after suggesting that there was more worth reverence in one of his machines than in the worship of “a supposed ‘goddess’ invented by a clutch of bucolic bumpkins on a pigsty of a planet,” Demlen turned toward the icon of Kubebe as if to spit on it. Before he could commit the act, Urania had killed him with her ceremonial knife.