Scaling our Flaws to the Moon
The Risks of Amplifying Human Weakness in Artificial Intelligence
The Mind is a Dangerous Thing

Steve Jobs called computers 'bicycles for the mind'. If so, AI is our rocket ship – we're strapping ourselves to an explosion and hoping it flies straight.
Because AI doesn't just amplify our capabilities. It also scales our limitations to potentially catastrophic proportions. To understand why this matters, let me tell you about the day I learned just how dangerous our minds can be, even without technological amplification.
It's a small story, but a juicy one.
When 'Send to All' Becomes 'Send to Oh-No'
Years ago, I managed email marketing for a spiritual school. Our director – I guess you could call him a guru – was facing typical guru problems: accusations of sexual abuse and abuse of power.
He decided to record a video explaining himself. You know the type: part apology, part justification, all crisis management. My task? Send this sensitive explanation to everyone on our mailing list. What could possibly go wrong?
The technical challenges piled up. Video hosting issues. Timeline pressure. When I finally hit 'send', I made a catastrophic error: the video went not just to our list, but to partner organisations who knew nothing about the controversy.
Like your colleague who once 'replied all' to the entire company about the office fridge situation, but considerably worse. I was heavily scolded.
Looking back, this incident became a stark mirror reflecting my tendency towards tunnel vision under stress. When pressure mounts, my focus narrows dangerously. Since that day, I've become acutely aware of this pattern in myself.
But this doesn't just apply to me.
The Candle Problem: Why We Miss the Obvious
Let's talk about a famous psychology puzzle – Karl Duncker's candle problem – that explains why this happens. You're given a candle, a box of thumbtacks, and matches. Your task: attach the candle to a wall without wax dripping on the table below.
Most people try tacking the candle directly to the wall. (Spoiler: that ends badly.)
The solution? Empty the thumbtack box, tack it to the wall, and use it as a candle holder. Simple, once you see it. But under pressure, most of us don't.

Image: the solution to the candle problem – the emptied box of thumbtacks, tacked to the wall, becomes the candle holder.
When AI Gets Tunnel Vision
This brings us to modern AI.
Recently, while developing a course about large language models, I needed examples showing the difference between an AI drawing on its general training and an AI retrieving from specific uploaded documents (a process called Retrieval-Augmented Generation, or RAG).
I asked ChatGPT for suggestions. Its response seemed clever at first: A hospital could use retrieval-augmented generation to access specific protocols. Ask "How do you diagnose Cushing's disease?" and it would pull up the hospital's exact procedures. Ask "What is Cushing's disease?" and it would provide a general definition from its training.
Then it hit me: This suggestion was catastrophically dangerous. Imagine an AI fluently combining retrieved surgical protocols with general medical knowledge, introducing subtle errors into life-or-death procedures. One misplaced decimal point. One skipped step. One creative recombination of critical information.
The AI had displayed its own form of tunnel vision. In its rush to demonstrate technical capabilities, it had completely missed the human stakes. It was an adequate explanation of the principle, but a really bad example.
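To make the mechanism concrete, here's a minimal sketch of the routing pattern ChatGPT was describing: answer from retrieved documents when a relevant one exists, otherwise fall back on general training. Everything in it – the document store, the overlap threshold, the call_llm stub – is a hypothetical placeholder, not a real hospital system.

```python
import string

# A toy "document store"; a real RAG system would embed and index these.
# The protocol text is a placeholder, not real medical guidance.
PROTOCOLS = [
    "Hospital protocol: to diagnose Cushing's disease, follow steps one to "
    "three of the written endocrine workup.",
    "Hospital protocol: for suspected sepsis, follow the written escalation "
    "checklist.",
]

def tokens(text: str) -> set[str]:
    """Lowercase and strip punctuation. Real retrieval uses vector similarity."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query: str, min_overlap: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval with a crude relevance threshold."""
    q = tokens(query)
    return [doc for doc in PROTOCOLS if len(q & tokens(doc)) >= min_overlap]

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"[model answer to: {prompt[:70]}...]"

def answer(query: str) -> str:
    docs = retrieve(query)
    if docs:
        # Grounded mode: tell the model to answer ONLY from the retrieved
        # protocols. The danger described above is the model blending this
        # text with its general training instead.
        context = "\n".join(docs)
        return call_llm(
            f"Answer strictly from these documents:\n{context}\n\nQuestion: {query}"
        )
    # General mode: no relevant document found, fall back to training knowledge.
    return call_llm(f"Question: {query}")

# Overlaps on 'diagnose', 'Cushings', 'disease', 'hospital', 'protocol' -> grounded.
print(answer("How do we diagnose Cushing's disease under hospital protocol?"))
# Only 'Cushings' and 'disease' overlap, below the threshold -> general mode.
print(answer("What is Cushing's disease?"))
```

Notice where the fragility lives: the system, not a human, decides when the model is grounded in the protocol and when it improvises from its training. That blurry boundary is exactly where the subtle, dangerous errors described above would creep in.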
The Startling Truth
Our minds narrow under pressure. We miss obvious solutions. We make devastating mistakes. Now we're building AI systems that can scale these same limitations to unprecedented levels.
All this is reminiscent of Nick Bostrom's famous paperclip maximiser thought experiment. You create an AI with one simple goal – maximise paperclip production. Seems harmless enough. But this single-minded AI might decide that human bodies contain useful atoms for paperclips. Or that the entire planet could be converted into a more efficient paperclip factory. The ultimate tunnel vision: a universe of paperclips, and nothing else.
But tunnel vision is also remarkably useful. It's how humans get many things done. It's how AI achieves its breakthroughs.
The same focus that causes catastrophic email mistakes also drives innovation. The same AI 'blindness' that worries us about medical diagnoses might help solve our climate crisis.
The Paradox We Can't Escape
We're caught between two forces: the drive to focus narrowly, and the wisdom to think holistically – to see the bigger picture. The real challenge isn't choosing between them; it's holding both perspectives at once.
The same narrowing that caused my email disaster drives every innovation. Every breakthrough. Every solution to impossible problems.
Now we're building AI systems that scale these patterns to unprecedented levels.
The European Union's hope of controlling AI through regulation is a naive fiction, because you can't restrain a technology that the rest of the world won't.
Buckminster Fuller understood this decades ago: you never change things by fighting the existing reality. To change something, build a new model that makes the old one obsolete.
The Trump administration, as much as I dislike it, at least grasps this. They're trying to build the good thing rather than restrain the bad one.
The Need for a Miracle
My observation is that we've dropped the ball on climate change completely, and I think we are on a very destructive path.
Humanity is like a lifelong smoker in his final days, facing lung cancer.
Quitting smoking isn’t going to help at this point. We need a miracle. Or a collective revolt against human nature. (Which would be a miracle’s miracle, so to speak.)
But AI is here. And it's the closest thing we have to one.
It magnifies our brilliance and scales our blind spots. That same tunnel vision that sparks our greatest breakthroughs also threatens to produce our deepest failures.
We can't regulate our way out of human nature (looking at you, EU).
Nor can we stop the rocket mid-flight. The only choice is to build better engines – holistic AI systems designed not just to accelerate innovation but to temper the risks along the way.
The future won’t wait for perfect answers, and it certainly won’t pause for us to debate the risks.
We must embrace the paradox, striving to focus narrowly on solving the urgent problems while holding the broader view of the kind of world we want to create.
Because the rocket ship is flying.
Will it steer us to the stars or crash into the ground?
Subscribe at 10xbetter.ai for weekly insights on thriving in the AI era.