Why forcing employees to use AI is producing the opposite of what companies want



Psychological safety is the condition underneath all the others, the ground on which everything else gets built [File]
| Photo Credit: REUTERS

The way offices work has always changed in waves, but the last three decades delivered something different — a pace of change that didn’t allow people to settle before the next disruption arrived. Paper files gave way to desktops. Desktops to laptops. Laptops to phones that did almost everything.

I remember being handed a BlackBerry and a laptop in 2007. Neither felt like a burden. They felt like an upgrade, and, if I’m honest, like a small marker of status. Not everyone had them. The colleague who could reply to an email from an airport lounge, who could approve a document without being in the building, had something others wanted. That mattered more than it perhaps should have, but it mattered. These devices didn’t arrive as mandates. Nobody was told to use them or face consequences.

That distinction — between something you’re given and something you’re ordered to adopt — matters now that a growing number of companies are issuing AI mandates at scale.

E-commerce platform Shopify told employees that opting out of AI was, in effect, opting out of a future at the company. Language learning platform Duolingo announced it would stop using contractors for any work AI could handle. Crypto exchange Coinbase was even blunter in pressing its engineers to adopt AI coding tools. The message from the corner office, across company after company, has been some version of: use this or lose your seat.

What happened next should not surprise anyone who has spent time thinking about how people actually behave at work. Reports suggest a growing share of employees are quietly doing the opposite — skipping training, feeding AI systems bad inputs to game the metrics, and reverting to manual processes. Among Gen Z workers, the generation most often assumed to be inherently comfortable with digital tools, that share is reportedly climbing. Something has gone wrong, and I certainly don’t see this as an issue with the technology.

To understand it, you have to revisit a concept that organisational psychologists call ‘psychological safety.’ Research shows that teams perform well not simply when they have the right tools or the right incentives, but when they feel genuinely safe — safe to take risks, to admit they don’t know something, to speak up when something seems wrong, without fearing they’ll be punished for any of it. Psychological safety is the condition underneath all the others, the ground on which everything else gets built.

A top-down AI mandate, issued under threat, is almost perfectly designed to undermine it. When a CEO frames AI publicly as a mechanism for doing more with less, and that same quarter sees layoffs, employees don’t experience the rollout as a productivity tool but as a surveillance device.

Employees won’t see usage dashboards, login counts and token consumption metrics as instruments of trust. With a growing number of workers worried that AI will replace their jobs, they will instinctively pull away from anything that feels like a threat to autonomy and job security.

The issue has never been the technology; it lies in how organisations approach change management. Meaningful adoption follows a different path: one that involves frontline workers in the design before anything gets built, that communicates honestly about what change means for jobs rather than burying the answer in corporate language, that invests in building genuine competence rather than demanding performance of it, and that treats scepticism as something worth understanding rather than something to be overridden.

The companies that see real results will be the ones that ask their workers first and then build around those answers. They will treat AI adoption and headcount reduction as separate problems rather than two faces of the same one. Their change management programmes will focus on building new skills, and the confidence to use them. That, in turn, will lead employees to approach AI tools as an enabler, not a threat.

The mandate-heavy approach reveals a category error at the leadership level — the mistake of treating a cultural and psychological challenge as though it were a process re-engineering problem. An organisation that measures transformation by how many tokens its employees consumed has clearly misplaced its priorities.

The AI reckoning that companies are navigating isn’t really about whether the tools work. It’s about whether the people leading organisations have learned anything at all about how humans change.


