Agentic AI and Human Agency: A Cyberpsychologist’s Perspective
Agentic AI involves autonomous systems that can make decisions and take actions without human oversight.
I didn’t mean to write a whole essay for House of Beautiful Business (HoBB). All they’d asked me to do was offer an ‘expert’ view, as a cyberpsychologist, on how agentic AI will change organisations. My first thought was, ‘I doubt it.’ I doubted I had sufficient experience of AI from within business. I knew agentic AI meant autonomous systems that could make decisions and take actions without the involvement of humans, but that’s all I knew, and that didn’t seem like enough.
But then I began wondering what my mind meant by ‘sufficient’ or ‘enough,’ and I realised these adjectives were holding me back. So I allowed them to galvanise me instead. I re-read a conversation about agentic AI between House of Beautiful Business co-founder Tim Leberecht and the Head of AI Lab at Hotwire, Anol Bhattacharya. And I found that I definitely had a response, even if I felt uncertain, so I began to write.
As a cyberpsychologist and author, I’ve spent much of my career exploring technology’s influence on organisations, brands, and the world of work, appearing on media outlets like the BBC and CNN, and speaking on international stages about digital wellbeing, AI ethics, and the human experience of technological change. In my books Reset and All the Ghosts in the Machine, I examine how deeply digital life now frames human experience, from individual psychology to organisational culture.
The essay that eventually emerged was written from a generative place of philosophical doubt, rather than a fixed place of expert certainty. It’s an invitation to doubt, written to any leader who is standing at the precipice of agentic AI adoption and who feels they must jump hastily in the name of progress.
Below, I’ve summarised the key arguments and questions I have about agentic AI in organisations. Please, when you have time and inclination, read the full essay on House of Beautiful Business’ Substack, as well as the extended three-part conversation among me, Anol Bhattacharya, and Tim Leberecht.
Key Takeaways:
Every AI agent serves hidden agendas embedded by powerful humans. Those agendas are rarely questioned, often invisible, and probably misaligned with at least some of your organisation’s actual values.
AI’s ‘clean tech’ illusion hides massive environmental costs and invisible exploited human labour (e.g., data labellers, content moderators, and miners).
We’re co-opting ecological language (‘ecosystems of agents’) while ignoring our embeddedness in actual ecosystems under stress.
Google search interest in ‘efficiency,’ ‘productivity,’ and ‘optimisation’ has hit 20-year highs, showing how AI is driving ever more urgency and interest around these values.
‘Agency-regenerative AI’ could enhance human capacities for deliberation, creativity, and ecological connection rather than eroding them.
Organisations that slow down to question AI narratives may discover untapped competitive brand advantages.
A Note on ‘Thought Leadership’
Before diving into a summary of my HoBB essay, a word about expertise and thought leadership. When I was asked for my ‘expert’ view, I hesitated for another reason. That word implies deep content knowledge, a power-over position, and perhaps imperviousness to challenge.
I’ve been thinking about the phrase ‘thought leader,’ which has become so popular and which is often applied to me as a keynote speaker and writer. If ‘thought’ is read as a noun, it’s not so different to ‘expert,’ something guru-esque in the sense of ‘I’ll tell you what to think.’ I don’t feel comfortable with that.
But if I read ‘thought’ as a verb, as a process, and read ‘leader’ as an inclusive ‘let me take you with me,’ I like that much better.
So don’t take the following as a prescription. I’m taking you with me on a meandering, messy, human journey of thought. I was curious about a succession of things, a variety of angles, a cascade of questions. If this is thought leadership, it’s thought leadership as an invitation to think alongside me as I mind-walk.
The Six Moves: A Summary of My Thought Process on Agentic AI
1. Every Agent Has an Agenda
When we deploy agentic AI, whose interests are we actually serving? The Latin roots agere (to drive, to act) and agens (one who acts) remind us that AI agents act on behalf of someone. Those someones have values and motives not shared by everyone, yet AI adoption is framed as inevitable, incontestable. What human agendas are baked into AI agents? Whose interests get erased?
2. The Blind Men and the Elephant
Like meat-eaters avoiding the slaughterhouse, we bracket off AI’s material substrate: rare-earth mining, energy consumption, invisible exploited workers (data labellers, content moderators, lithium miners, ‘fauxtomation’ workers). Kate Crawford’s discussion of that invisible labour in her book, Atlas of AI, reveals the whole elephant. When we celebrate AI that ‘never needs a coffee break,’ we’re invisibilising human exploitation beneath the comfortable fallacy of clean tech.
3. Agentic Ecosystems versus Actual Ecosystems
The phrase ‘ecosystems of agents’ linguistically hijacks ecological language while ignoring the actual planetary ecosystems we’re embedded in. Kate Raworth’s book Doughnut Economics shows we need regenerative (not extractive) models that respect planetary boundaries. Current AI acceleration pushes us further outside safe operating limits. Every mention of ‘AI ecosystems’ should remind us we’re harming actual living ecosystems.
4. The Cult of Efficiency, Productivity, and Optimisation
I checked Google Trends for ‘efficiency,’ ‘productivity,’ and ‘optimisation.’ All three terms were simultaneously at 20-year search highs. This drumbeat makes resistance feel not just futile but stupid. AI is constantly promoted as the way to achieve these now-mandatory values. But decision-making driven by anxiety and FOBO (fear of becoming obsolete) isn’t meaningful human agency. The late Mark Fisher said it’s easier to imagine the end of the world than the end of capitalism. Organisations that dare to slow down, reflect, and question are choosing conscious agency over reactive adaptation, and I think that’s a relatively untapped brand opportunity.
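(For the technically curious: here’s a minimal sketch of how you might reproduce that Trends check programmatically. The pytrends library is my assumption for illustration; it’s an unofficial client, and I simply used the Google Trends website itself.)

```python
# A minimal sketch using the unofficial pytrends library (pip install pytrends);
# the tooling is an assumption for illustration, as I used trends.google.com directly.
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-GB')
terms = ['efficiency', 'productivity', 'optimisation']

# timeframe='all' requests the full history Google Trends offers (2004 onwards).
pytrends.build_payload(kw_list=terms, timeframe='all')
interest = pytrends.interest_over_time()

# Trends values are normalised to 0-100 per term, so recent values near 100
# mean search interest is at or near its peak for the period requested.
print(interest.tail(12))
```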
5. Heidegger’s Warning about ‘Enframing’
Heidegger described technology’s tendency to position everything, including us humans, as resources to be used, tapped, optimised, exploited. He called this tendency ‘enframing.’ When enframing becomes total and unquestioned, we lose the ability to see alternatives. The phrase ‘we are all agents’ and the languaging of inevitability signal we’re at a threshold where total surrender is dangerously close. But Heidegger also advised that if we keep questioning technology, we can better preserve other ways of being.
6. Foucault and Discursive Coercion
Discourses don’t just describe reality. They produce it. The ‘adapt or die’ rhetoric serves as discursive coercion, containing three themes: inevitability (AI is unstoppable), survival (adopt or be eliminated), and moral imperative (don’t let down customers/shareholders). We underestimate how much power we wield by questioning and changing discourse itself.
Agency-Regenerative AI: A Different Path
Human agency emerges from embodied experience, emotional complexity, moral imagination, genuine empathy, and embeddedness with living systems.
AI agency operates through algorithmic optimisation within predetermined parameters set by humans with often-unreflected agendas.
Agency-regenerative AI would be designed to:
Enhance capacity for slow, deliberative thinking
Encourage questioning rather than optimising
Prompt us to remember our connection to actual ecosystems
Encourage imagining alternatives
Nudge toward collaborative meaning-making
We can make judicious use of AI. That means proceeding in a way that consciously preserves rather than erodes human agency. We can interrogate unreflected-upon discourses. We can resist false urgency, questioning what’s driving it. We can demand transparency about AI vendors’ values.
Read the Full Essay
For more, read the full essay at House of Beautiful Business’ Substack: A Cyberpsychologist’s Perspective on Agentic AI—in Fact, a Rebuttal.
In the complete piece, you can dive deeper into the six moves I made, catching up with Heidegger, Foucault, Kate Crawford, David Smail, Mark Fisher, and Kate Raworth along the way. You can also see the Agency-Regenerative Checklist, which includes thought-provoking questions for organisational leaders on:
whose agenda you’re serving
whether you’re seeing the whole elephant, and
how you might innovate by slowing down.
----------
Looking for a keynote speaker who brings cyberpsychological knowledge and philosophical depth to AI conversations? Get in touch here.
Elaine speaks at conferences and corporate events in the UK, Europe, and beyond on AI ethics, digital wellbeing, and the psychology of technological change. Her talks challenge leaders to question inevitability narratives and explore what it means to preserve human agency, creativity, and values in an age of increasing automation.
Frequently Asked Questions About Agentic AI
What does ‘agentic AI’ mean, and why might it be problematic?
‘Agentic AI’ refers to AI systems that can act autonomously, meaning they can make decisions, delegate tasks, and operate independently of humans. But every AI agent embeds the values and agendas of its creators (often powerful tech companies), which are rarely questioned. The language makes technological ‘enframing’ feel like empowerment while eroding genuine human agency.
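To make ‘autonomous’ concrete, here’s a deliberately simplified, hypothetical sketch of an agent loop. Every name in it is illustrative rather than any real product’s API, and notice where the objective (the agenda) gets set: by the creator, before any user arrives.

```python
# A hypothetical, minimal agent loop for illustration only; no real API is implied.
from dataclasses import dataclass

@dataclass
class Agent:
    # The objective is fixed by the agent's creators, not by its users:
    # this is where the 'hidden agenda' discussed above lives.
    objective: str

    def decide(self, observation: str) -> str:
        # A real system would call a model here; we return a canned action.
        return f"act to maximise '{self.objective}' given '{observation}'"

    def run(self, observations: list[str]) -> None:
        # No human approves each step: decisions become actions directly.
        for obs in observations:
            print(self.decide(obs))

Agent(objective="engagement").run(["user opens app", "user hesitates"])
```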
What do you mean by ‘agency-regenerative AI’?
It’s a concept, a prompt to think: What if AI systems were designed to enhance rather than erode human capacities for slow deliberative thinking, questioning, creativity, ecological connection, and collaborative meaning-making? Instead of optimising humans as resources, can AI be used to preserve what makes human agency irreplaceable, including embodied experience, moral imagination, and genuine empathy? I don’t know the answer. Asking the question, however, is important.
What’s the reference to ‘the blind men and the elephant’?
Like the parable of the blind men touching different parts of an elephant, we only see AI’s user interface, not its material substrate. The ‘whole elephant’ includes rare-earth mining, massive energy consumption, environmental degradation, and countless invisible exploited workers. Kate Crawford’s Atlas of AI is a great book that reveals these hidden costs.
What’s wrong with using the word ‘ecosystems’ for AI?
Linguistic choices shape reality and perception. ‘Ecosystems of agents’ hijacks ecological language for profit-driven systems while we ignore our embeddedness in actual ecosystems that are under unprecedented stress. Kate Raworth’s book Doughnut Economics argues that unlimited growth pushes us outside sustainable planetary boundaries, and the unlimited growth of AI is no exception.
How can organisations resist the ‘inevitability narrative’?
Discourses produce reality. The ‘adapt or die’ narrative is discursive coercion, not objective truth. Organisations can: question whose agenda AI adoption serves; make space for slow deliberative thinking; resist false urgency; examine success metrics beyond EPO (efficiency/productivity/optimisation); and choose AI vendors based on values transparency. Slowing down to reflect could prove a competitive advantage.
What’s wrong with efficiency, productivity, and optimisation?
Sometimes, nothing. But unreflective conformity to these growth-imperative capitalist values (especially without considering environmental/social/human costs) represents reaction formation: embracing the ‘religion’ of progress to avoid ecological and existential guilt. Organisations that question these taken-for-granted values demonstrate courage and conscious agency, and that can benefit a brand.