
Twelve months after a major tech company rolled out its state-of-the-art AI coding assistant, the results were underwhelming. Only 41% of its 28,698 engineers had tried the tool.

Even more surprisingly, adoption rates were only 31% among women and 39% among engineers over 40, despite significant investment in training, support, and deployment, reports Harvard Business Review.

This wasn’t an isolated case. Across corporate America, a similar trend is emerging. According to Pew Research Center, two years after ChatGPT’s launch, only 16% of American workers report using AI at work—even though 91% have permission to. The usual explanations—technical barriers, lack of confidence, or unclear applications—don’t fully explain the hesitation.

HBR’s research suggests something deeper is at play: a “competence penalty”.

The Experiment: Same Code, Different Judgement

To understand the hesitation, HBR ran an experiment with 1,026 engineers. Each evaluated a piece of Python code supposedly written by another engineer—either with or without AI assistance. The code was identical in all cases. The only difference was whether the engineer had reportedly used AI.

The results were striking. Engineers perceived their peers as 9% less competent when they thought AI was involved, even though the code quality remained unchanged. This perception penalty fell hardest on female engineers, who faced a 13% drop in perceived competence, compared to 6% for men.

Even more concerning: male engineers who hadn’t used AI themselves were the most critical—especially toward female AI users, penalising them 26% more harshly than male peers using the same tools.

Rational Fear, Real Consequences

Follow-up surveys with 919 engineers confirmed that many actively avoided AI to protect their professional image. Those who feared the competence penalty most—especially women and older engineers—were the least likely to adopt AI, despite having the most to gain from productivity enhancements.

This is the hidden tax of AI adoption: it’s not just about training or access. It’s about social perception. In environments where competence is fragile or routinely questioned, the risk of being seen as “less capable” outweighs the benefits of faster or smarter work.

One company in the study estimated that low adoption of its AI coding tool resulted in a loss of up to 14% of annual profits, amounting to hundreds of millions in unrealised value.

Shadow AI and Workplace Inequity

When employees feel unsafe using official AI tools, they don’t necessarily stop using AI—they go underground. Shadow AI use (unofficial, unmonitored tools) increases compliance and security risks, while making it harder for organisations to track usage or measure success.

AI also risks widening existing inequalities. While AI is often pitched as a great equaliser, HBR’s findings show it can reinforce bias. In male-dominated environments, women who used AI were more likely to be perceived as incompetent rather than strategic.

This may stem from a phenomenon called social identity threat—when members of underrepresented groups use AI, it can unintentionally confirm existing stereotypes about their ability. In such settings, transparency around AI usage may backfire, creating professional risk rather than trust.

Breaking the Cycle

The company HBR studied had done everything by the book: invested in training, provided access, and promoted AI use. Still, it wasn’t enough. Solving the problem requires deeper cultural change. Here’s how:

1. Map the Penalty Zones

Look for places where vulnerable groups (such as women or older engineers) are underrepresented or outranked by senior non-adopters. These are the hotspots where competence penalties thrive. Analyse time-to-promotion, AI adoption, and disclosure policies by demographic group to understand how deep the issue runs.
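
As a rough illustration of what that analysis could look like, here is a minimal Python sketch (Python being the language used in HBR’s experiment). The dataset and column names (group, adopted_ai, months_to_promotion) are hypothetical stand-ins for real HR data, not anything drawn from the study.

```python
# Minimal sketch: surface potential "penalty zones" by comparing AI
# adoption and promotion speed across demographic groups.
# All data and column names here are illustrative assumptions.
import pandas as pd

hr = pd.DataFrame({
    "group": ["women", "women", "men", "men", "over_40", "over_40"],
    "adopted_ai": [True, False, True, True, False, False],
    "months_to_promotion": [30, 34, 24, 26, 36, 40],
})

# Per-group adoption rate and median time-to-promotion; large gaps
# between groups are the hotspots worth a closer qualitative look.
summary = hr.groupby("group").agg(
    adoption_rate=("adopted_ai", "mean"),
    median_months_to_promotion=("months_to_promotion", "median"),
)
print(summary)
```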

2. Empower Role Models

Non-adopters—especially men in senior positions—impose the harshest competence penalties. Countering that requires visible AI champions, especially among those most affected. When senior women openly use AI, it encourages junior women to follow suit.

Initiatives like “30 Days of GPT” or internal AI hackathons have proven effective. At Pinterest’s Makeathon, 96% of participants continued to use AI monthly, and 78% of engineers credited AI with saving time. These events make AI usage normal, social, and celebrated.

3. Redesign Performance Reviews

If AI-assisted work is flagged during performance reviews, it can trigger bias. Companies should remove visible indicators of AI use from evaluations, and instead focus on outcomes: accuracy, efficiency, delivery time.

Some firms, like Microsoft and Shopify, now reward AI usage. Microsoft encourages managers to evaluate AI use as part of an employee’s overall performance, while Shopify’s CEO Tobias Lütke has made AI fluency a formal performance metric.

The Way Forward

The biggest obstacle to AI adoption isn’t lack of access or skills—it’s fear of being judged.

Women and older professionals, who could benefit most from AI, are the ones least likely to use it because they can’t afford the reputational risk. Until organisations acknowledge and dismantle this penalty, they’ll continue to miss out—not just on productivity, but on people’s full potential.

The future of AI at work won’t be shaped by who has the best tech—it’ll be decided by who creates a culture where everyone feels safe to use it.