AI Malaise Isn’t the Problem—It’s the Signal
I used to joke that writing is my favorite thing I hate to do. It feels great when something comes together, but then there are times when I’m staring at a blank document, pit in my stomach, unsure how I ever wrote coherent sentences before. That cycle of dread and relief defined my creative process for years. Then, almost overnight, AI altered the rhythm.
As I started to integrate AI into my process, I felt a sense of wonder. It could draft my emails, sharpen my writing, spark fresh content ideas, and even sketch strategic options for framing and audience. Tasks that used to stretch me—finding the right phrasing or figuring out what angle would resonate—became almost effortless. My to-do list shrank, my output grew, and for a moment it felt like I’d unlocked a new way of working. Who wouldn’t want a “make my life easier” button?
But after a few weeks, an undercurrent of unease began to settle in. I found myself less interested in doing the tasks I used to enjoy. The satisfaction of wrestling with a difficult paragraph or finishing a demanding project had faded. The onslaught of near-infinite possibilities for rephrasing felt dizzying. The pleasurable, slow pace of thinking and ideating sped up. The style of content I generated (and often encountered) felt lifeless.
I don’t think my unease stemmed from the usual suspects: fear of displacement or even resistance to change. Intellectually, I welcomed AI into my life and accepted the necessary shift from writer to orchestrator. Emotionally, I felt something close to what I call “AI malaise.”
Diagnosing the Issue
AI malaise is the emotional and psychological friction of living in a world saturated with AI. The feeling has many sources—one of which is a strange paradox: I can appear more capable while feeling less competent.
By competence, I don’t just mean the ability to complete a task. I mean the deeper sense that I am sufficient, able to take on the world and meet its problems, and have something of value to offer. AI undeniably helps me get more done and increases the potential of what I can accomplish. But the experience through which I gained the feeling of competence gets compressed. If I use AI without discretion, my intellectual and creative muscles get less exercise. And without any resistance, something in my sense of engagement starts to erode.
It’s almost a truism that process matters more than product. We all know that a home-cooked meal tastes better than a frozen dinner, even when the latter is easier. Convenience always exacts a cost—a quiet “joy tax” we pay for speed and ease. With AI, though, that cost seems greater. It isn’t only the loss of pleasure, but the addition of a new, corrosive friction: the twinge of guilt for choosing a slower path when the specter of efficiency always lingers in the background. This makes the very grounds of purpose and engagement feel as though they’re shifting, because enjoying the process now feels like needlessly wasting time.
A Collective Reckoning
The more I talk with other people—the more I sheepishly admit my feelings about AI, half-worried it outs me as a foot-dragging Luddite—the more I realize I’m not alone. Almost everyone feels weird about AI.
We’re all grappling with some version of the same questions: How do we use it without losing ourselves? What’s the right balance between human and machine in our tasks? If AI speeds everything up, does that mean we’re expected to do more? Will using it make me better… or just busier?
There are some people who eagerly use AI in everything they do. They’re the same sort of people who might choose to sleep only one hour a day if possible or replace their meals with nutrient-dense shakes. By this I mean: some genuinely love efficiency and don’t miss the “process” of working as others might.
But not everyone is in this camp, and imposing AI onto people who aren’t eager to speed up everything risks alienating or disengaging them entirely. This matters not just for individuals, but for companies and organizations trying to integrate AI into their teams.
AI hype makes it seem as though these tools are a magical Easy-Bake Oven for instant output and efficiency gains. But the reality is proving more complex. As organizations see AI pilots stall and deliver less-than-transformative results, they’re learning a crucial lesson: the tools themselves don’t create value. Value is created by the humans who wield them.
Herein lies the tension. For AI to be useful, humans have to use it wisely, but many of us don’t know how to feel about it, let alone how to use it in a way that preserves meaning. That’s where the malaise sets in: we feel bored, uneasy, overwhelmed. We want to work with the machine, but not become it.
Finding a Human Pace
There’s no quick antidote to this unease, but I’ve found that naming it is a start. When the feeling goes unspoken, it’s easy to misinterpret it as laziness, resistance, or some personal flaw. But giving it a name—“AI malaise”—softens its grip. It reminds me that this discomfort isn’t just mine; it’s a natural response to a massive shift in how work gets done.
Recognizing this feeling is what allows me to create space to work with AI without losing myself. For me, that has meant being selective and intentional. I could have generated this essay with a single prompt and finished in half the time, but that would have missed the point altogether. I wanted a true reflection grounded in lived experience, because that’s how I connect with my audience: as real people, not just data points.
At the same time, I don’t cut AI out entirely. I drafted this piece myself, then used AI to smooth sentences and review flow. I took what suggestions helped and abandoned the rest. This approach follows what MIT researcher Nataliya Kos'myna found more effective for cognitive engagement: start with your own brain, then bring in AI as a collaborator.
Following this collaborative model works on a personal level. But my experience also highlights a bigger truth: even the best individual strategies can falter if the organizational environment around them isn’t supportive. What helps me find my pace won’t scale unless workplaces, teams, and norms allow space for people to wrestle with what AI means for their work.
Rethinking the Environment: From Efficiency to Efficacy
If we want AI adoption to be thoughtful and effective—for both people and organizations—the environments we inhabit, not just the tools we use, must evolve. Speed-obsessed workflows and relentless pressure don’t just erode individual agency; they significantly increase the risk that AI initiatives stall or yield low-quality, derivative results. Intentional AI use requires time to experiment, reflect, and iterate.
Translating this realization into action means changing how we structure work. This involves adjusting success metrics away from sheer velocity and toward value. Instead of only measuring output quantity, organizations must also measure discretionary AI use, the quality of human-AI handoffs, and employee-reported ownership over the final product. Only by giving employees the bandwidth—and the cultural permission—to thoughtfully choose when not to use AI can we transform malaise into meaningful engagement across all roles, from content creation to supply chain analysis.
In thinking about this shift, I’m reminded of a seemingly unrelated study comparing the behavior of wild and captive orangutans. Researchers found that, in the wild, where predators and scarcity are constant threats, orangutans are cautious and neophobic—they avoid novelty and rarely experiment. But in zoo environments, where their needs are met and the risks are minimal, the same orangutans become highly exploratory and innovative. They tinker, invent, and learn from each other. The capacity for curiosity was always there; the environment determined whether it could surface.
I think we’re in a similar situation with AI. Adopting these tools solely for the sake of speed and efficiency can suppress curiosity and lead to burnout. But if organizations create conditions of safety and experimentation, something different becomes possible. People can find their own balance with AI—one that protects what makes us human while also yielding more meaningful and original output.
Maybe that’s what AI malaise is trying to tell us: it’s a signal that the way we’re working doesn’t quite fit the way we’re wired. If we pay attention to that discomfort, rather than ignoring or overriding it, we might begin to shape environments where we can discover new value with AI—where the tools empower instead of erode.