The Only Thing We Have To Fear...

An AI Implementation Story

It was 8:47 AM on a Tuesday when Susan, not her real name, clicked open her inbox, just like she had done thousands of times before. Steam rose from her coffee mug - the chipped one with 'World's Okay-est Employee' written on it - as she began her daily ritual of sorting through the digital chaos of company communications. What she didn't know was that seventeen floors up, in a meeting room with a view of the city, her manager was discussing her future with someone like me.

This story - Susan's story - is one that's playing out in offices everywhere, right now, as you read this. It's a story about fear, about good intentions gone awry, and about how trying to protect people can sometimes hurt them instead.

Three Months of Sideways Glances

Every morning, Susan's manager would walk past her desk, exchange pleasant good mornings, and pretend nothing was changing. Every afternoon, that same manager would sit in meetings with the automation team, trying to explain Susan's job without actually involving Susan. It was like watching someone try to write a cookbook by only looking at photos of finished meals.

The reality of Susan's work was far more complex than her manager realised. Every morning, she performed a kind of digital triage. Each email required nuanced decisions: Was this an urgent service request hiding behind casual language? A partnership inquiry that needed routing to business development? A customer complaint requiring special handling? Or perhaps an internal update that needed proper filing? She'd developed an almost supernatural ability to spot the subtle cues - the tone, the sender's history, those little tells that separated truly urgent matters from the merely urgent-sounding ones.

Imagine being Susan's manager for a moment. You've worked with her for six years. You've seen her kids grow up through the photos on her desk. You know she's paying for her mother's care home. You've watched her handle crises, solve problems, and keep the email chaos at bay. How do you tell someone like that their job might be automated?

The manager chose what felt like kindness - silence. Instead of involving Susan in documenting her own process, they tried to capture it themselves. Each day, they'd peer over Susan's shoulder, making notes about what they thought they were seeing, but never actually asking her about the hundreds of micro-decisions she made every hour.

When Protection Becomes the Problem

I remember the exact moment I realised what had gone wrong. We were three weeks into what should have been a straightforward automation project, staring at classification results that made no sense. The accuracy was stuck at 64%, and no amount of model tuning was fixing it. That's when it hit me: the training data we'd been given was fundamentally flawed.

You see, Susan's manager had tried to document how emails should be classified without actually understanding Susan's decision-making process. "Urgent" meant one thing to the manager and something entirely different to Susan. What the manager labelled as "general inquiry" might have been a critical service request in Susan's experienced eyes. We were essentially teaching our AI system using a translation guide written by someone who didn't speak the language.

Think about your own work for a moment. How many subtle judgments do you make each day that would be invisible to an observer? For Susan, it might be knowing that "Just following up..." from a certain client actually means "This is critically urgent but I'm being polite." Or understanding that a brief "OK" from the CEO needs immediate escalation while a similar "OK" from a colleague can wait. Or knowing that when Karen from Accounting writes "fine," things are absolutely not fine and someone better deal with it quickly. These nuances, these human algorithms, these years of accumulated understanding of office dynamics and personality quirks, were completely absent from our training data.
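
To see why those missing cues matter, here's a minimal sketch of how an email triage model learns - the examples, labels, and categories below are hypothetical, not the client's actual system. The pipeline faithfully learns whatever the labels tell it, so if the labels describe what a manager thinks the job is rather than what Susan actually decides, the model learns the wrong rules with complete confidence.

    # Minimal sketch of an email triage classifier (hypothetical data and labels,
    # not the client's real pipeline). The model is only as good as its labels:
    # retrain with labels from the person who truly does the triage and the same
    # code learns very different rules.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Just following up on the contract we discussed last week.",
        "Our portal is down and customers are affected, please advise.",
        "Quick question about invoice formatting, no rush.",
        "OK.",  # two letters from the CEO that need immediate escalation
    ]

    # Susan's labels encode years of context; the manager's notes had filed
    # the first of these away as a routine "general_inquiry".
    labels_from_susan = ["urgent", "urgent_service", "general_inquiry", "escalate"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(emails, labels_from_susan)

    print(model.predict(["Following up again on that contract..."]))

A handful of toy examples proves nothing statistically, of course; the point is structural - the labels are the knowledge, and they have to come from the person who holds it.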

The irony was painful: in trying to protect Susan from the automation process, management had given us a dataset that captured what they thought she did, not what she actually did. It was like trying to teach someone to cook using recipes written by people who'd only eaten the food but never stepped into the kitchen.

What Susan Actually Thought

Here's where the story takes an unexpected turn. When we finally had no choice but to approach Susan, spreadsheets of profit-sharing calculations in hand and anxiety in our hearts, something remarkable happened. Not only was she willing to help - she was enthusiastic about it.

"Oh, you mean like this?" she said, pulling up a personal document where she'd been keeping notes about her decision-making process for years. "I've always thought someone should organize this better."

But it was what Susan said next that really stuck with me. "You know," she said, looking at her notes, "I've been doing this job so long, I sometimes forget there might be something else out there. Something I haven't even imagined yet. Maybe it's time to find out what that could be."

That's the kind of trust in the future that we all need to cultivate, especially now. Not just in our jobs, but in life. As Susan put it, "You can spend all your energy protecting what you have, or you can use that same energy to discover what's next."

I Have Seen the Writing on the Wall

I personally learned this lesson the hard way in my copywriting career.

One fateful day, a client asked me to review a script, and as I read through it, I found myself thinking, "Wow, this is good. Better than what I usually write." Then came the kicker: "Oh, by the way, this was written by AI. We just wanted your opinion on it."

My stomach dropped. That's when I knew - if AI could write better than I could, my days as a copywriter were numbered.

It took a while for me to recover and understand what this crisis was showing me. But over time I reinvented myself as an AI implementation advisor, helping companies navigate this very transition. I went from being someone who should be afraid of AI to being the person other people were afraid of.

Finding the Way Forward

The question that kept tugging at me was: how do we turn this around? How does a company ensure employees actually want to help automate their own jobs? It seems counterintuitive - like asking someone to dig their own grave. But that's exactly the wrong metaphor.

Think about it from the company's perspective for a moment. You have employees like Susan who understand the work at a deep level - knowledge that's crucial for successful automation. But you're essentially asking them to make themselves obsolete. Unless... what if they weren't making themselves obsolete? What if they were investing in their own future?

This is where profit-sharing enters the picture. It's a deceptively simple idea: give employees a direct stake in the efficiency gains their knowledge helps create. When automation reduces costs or increases productivity, the people who helped make it happen get a share of those benefits. Suddenly, the incentives align. It's no longer about digging your own grave - it's about building your next career step.
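
To make the arithmetic tangible, here's a back-of-the-envelope sketch - every number in it is an assumption for illustration, not a figure from Susan's company.

    # Illustrative profit-sharing arithmetic (all numbers are assumptions).
    def annual_profit_share(automation_savings, pool_rate, employee_weight):
        """Employee's yearly bonus from a pool funded by automation savings.

        automation_savings: yearly cost reduction attributed to the project
        pool_rate:          fraction of those savings paid into the shared pool
        employee_weight:    this employee's slice of the pool, e.g. by contribution
        """
        return automation_savings * pool_rate * employee_weight

    # Say the email automation saves 200,000 a year, 10% of that funds the
    # pool, and the employee who provided the domain knowledge gets a quarter.
    print(annual_profit_share(200_000, 0.10, 0.25))  # 5000.0

The exact split matters less than the fact that the person sharing the knowledge can see, in plain numbers, how the upside flows back to them.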

The data backs this up powerfully:

  • Companies that implement profit-sharing see 12-18% higher productivity growth

  • For every 1% of profit share, resistance to automation drops by 0.8%

  • Team-based implementations see 14.2% productivity boosts over five years

But these aren't just numbers floating in a spreadsheet. Let me translate them into real life: When we went back to Susan with a concrete proposal - one that included both profit-sharing and a path to a new role overseeing automated communication systems - everything changed. She wouldn't just help us understand her current job; she'd help shape her future one.

The results were extraordinary. With Susan's insights driving the training data, accuracy jumped to 94%. The system started catching nuances we hadn't even known existed - those subtle patterns that only become visible when someone isn't afraid to make themselves "redundant." And Susan? She evolved from someone who sorted emails to someone who taught AI systems how to understand human communication, earning more and doing more interesting work in the process.

But here's something crucial we need to acknowledge: Susan's path isn't for everyone. And that's not just okay - it might be exactly as it should be. Not everyone wants to become an AI trainer or automation specialist. Not everyone's natural talents or interests align with that direction. And honestly? The world needs more than just people who work with machines.

Maybe in five years we'll have many more massage therapists around. And the world will be better for it.

Of course, profit-sharing isn't a magic wand, particularly in cases where automation might eliminate a role entirely. But it can provide a financial bridge while people explore and transition to new careers - whether that's moving up the automation chain like Susan, or moving into entirely different fields that leverage their human qualities in new ways. The key is involving employees early, understanding their capabilities and desires, and supporting them in finding their next step - whatever that step might be.

The Real Fear

You know what's scarier than telling employees about automation? Trying to implement it without them. Research shows that 68% of workers fear job loss from automation, but here's the twist: 83% of organisations experience "zombie automation" (persistently underperforming implementations). These automations are often built without proper insight into the work process - the kind of insight that only comes from the people currently doing the job.

The solution isn't avoiding these conversations. It's transforming them from threats into opportunities. Here's what that looks like in practice:

Transparent Profit-Sharing

  • 5-15% of net income goes to shared pools

  • Individual allocations reach 50-150% of salary

  • Success metrics are clear and achievable

Direct Involvement

  • Employees become process architects

  • Their insights drive implementation

  • Their experience shapes the future

Clear Pathways Forward

  • Minimum 50 training hours annually

  • New role development

  • Career transition support

The Unexpected Lesson

Susan's story has a happy ending, but not the one anyone expected. She didn't lose her job to automation - she evolved with it. Today, she leads a team that handles complex communication challenges that AI flags but can't solve. Her salary is higher, her work more interesting, and yes, she gets a share of the automation savings.

But the real lesson isn't about Susan, or automation, or even profit-sharing. It's about fear - not fear itself, but what happens when we let our assumptions about fear drive our decisions. When we try to protect people from changes they might actually want to be part of. When we let our own anxieties create the very problems we're trying to avoid.

The Path Forward

As Susan put it, "You can spend all your energy protecting what you have, or you can use that same energy to discover what's next." Maybe that's the real insight here. Not that change is coming - we all know that. But that willingness to let go of fear and trust in the future might be what determines whether we help shape that future or get shaped by it.

P.S. Susan still has that coffee mug. But now it sits next to a new one that reads "AI Whisperer." She bought it herself, with her first profit-sharing bonus.

Liked this? Send it to a colleague! Transform your relationship with AI at 10xbetter.ai