LLMs may not be able to reason but they're still coming for your job
While many people are happily arguing about whether AI can really think or be creative (or whether GenAI is just a fad), precious few are realising the magnitude of what's coming
Are we speeding towards a future where all of your colleagues look like those guys from Daft Punk?
In this newsletter:
LLMs may not be able to reason but they're still coming for your job
AI Signal of the Week: an LLM-controlled robot that can be built for under 250 bucks
Future Signal: Your AI Assistant Is About to Get More Empathetic
Quick AI Power Move: The Right Way to Train Midjourney
It was 11 PM on a Tuesday, and I was staring at my screen, trying to crack a story problem. The plot followed a group of friends who'd stumbled into trouble at an abandoned factory, but something was missing. The tension wasn't quite right. The narrative felt flat.
I was investigating different plot directions with the help of AI, when ChatGPT did something that stopped me in my tracks: it remembered a detail I'd nearly forgotten—that one character had Tourette syndrome—and suggested their involuntary cursing could alert the guards at exactly the wrong moment.
The suggestion sent shivers down my spine. It wasn't that I couldn't have come up with this solution myself. It was that I hadn't.
Here was perfect conflict, emerging naturally from established character traits, creating exactly the kind of tension the story needed.
"But it's just pattern matching," the critics would say. "It's not really understanding character development."
Let's talk about that word "really."
The Dangerous Game of "Just" and "Really"
"LLMs are just fancy statistical prediction machines"
"It's not really thinking."
"It's just matching training data."
Sound familiar? These arguments are everywhere right now, especially after Apple researchers claimed AI can't reason mathematically. What's fascinating is how quickly the press transformed this specific claim about mathematical reasoning into sweeping headlines about AI's inability to reason at all.
It's rather like saying someone isn't intelligent because they're bad at calculus.
But there's a deeper problem with these "just" and "really" arguments, and they're poisoning the AI debate.
They take something remarkable and redescribe it at a lower level to make it seem mundane.
It's like saying that Einstein wasn't a genius because his insights were "just neurons firing," or that falling in love is "just hormones." Both descriptions are true, but neither does justice to the importance and meaning we attach to these experiences as human beings.
Reducing X to Y like this is simply unhelpful reductionism, and unfortunately a fallacy that even seasoned AI researchers fall into when talking about their own work.
But it's an understandable reaction. The ego rebelling against the looming possibility of its own demise.
Many professionals are now experiencing what Kasparov experienced in 1997, when he lost to IBM's Deep Blue and suspected it of cheating.
Whistling Past the Graveyard
I've been having fascinating debates over LinkedIn lately. People keep telling me what AI "can't really" do. They list its limitations with an almost relieved tone. "It can't really understand context." "It can't really reason." "It can't really create original ideas."
Let me be direct: these people are whistling past the graveyard.
Here's what these critics are missing: AI doesn't need to become more magical to disrupt your career. The technologies that could make your current role obsolete already exist.
They don't need to be invented—they just need to be integrated better.
While people are debating whether AI can "really" think, existing systems are already transforming industries through simple integration and iteration.
And this obsession with what AI can "really" do — a position we could call "Essentialism" — misses something fundamental about how we actually interact with the world.
Beyond the Essentialist Trap
Indulge me and think about a table for a moment, and the different ways we can understand it, depending on who we are.
A quantum physicist sees subatomic particles and force fields. A carpenter sees joints and grain patterns. A designer sees form and function.
Which one is seeing what a table "really" is?
Why not all of them?
Each of them has an account of the table that is 'true' insofar as it helps them navigate their experience of the table. A quantum physicist's account of a table is no 'truer' to reality than a carpenter's.
This is exactly what's happening in the AI debate. People are hunting for some essential "true" understanding or "real" reasoning—and missing the profound capabilities emerging right in front of them.
Wittgenstein's Linguistic Turn
In his early work, the philosopher Wittgenstein spent years trying to capture reality in perfectly logical language structures.
Eventually, he realised that it could not be done. But more than that, he realised that language's real power lies in its fuzziness. Language is imperfect, yes, but that makes it perfectly capable of doing the job we need it to do.
Today's AI critics are making the same mistake Wittgenstein initially made. They're hunting for some platonic ideal of "real" understanding, while missing the practical magic happening in the real world.
The Integration Revolution
Here's what the critics are missing: Intelligence isn't about having one perfect reasoning system. It's about integration.
When Khan Academy's new LLM-based AI tutor needs to calculate something, it uses a calculator. Why? For the simple reason that LLMs were not designed to be good at calculating things.
I expect a lot of AI technologies to be like this.
There will (probably) be a language-based orchestration layer, with specialised tooling that gets deployed whenever precision matters.
LLMs will become tool users just like us.
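To make that concrete, here's a minimal Python sketch of the pattern (my own illustration, not Khan Academy's actual implementation): a stubbed orchestration layer decides whether to answer conversationally or hand precise arithmetic to a calculator tool.

```python
# A minimal sketch of "orchestration layer + specialised tools".
# The routing logic and calculator are illustrative stand-ins, not any
# vendor's real implementation; a production system would let the LLM
# itself decide (via tool/function calling) when to invoke the calculator.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Specialised tool: evaluate basic arithmetic exactly."""
    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

def orchestrate(user_message: str) -> str:
    """Language-layer stand-in: route precision work to a tool,
    everything else to the (stubbed) LLM."""
    if any(ch.isdigit() for ch in user_message) and any(op in user_message for op in "+-*/"):
        expr = "".join(ch for ch in user_message if ch in "0123456789.+-*/() ").strip()
        return f"calculator tool -> {calculator(expr)}"
    return "LLM answers conversationally (model call would go here)"

if __name__ == "__main__":
    print(orchestrate("What is 37.5 * 12?"))    # delegated to the calculator
    print(orchestrate("Why is the sky blue?"))  # handled by the language layer
```

The toy code isn't the point; the shape is: a fuzzy language layer on top, exact tools underneath, each doing what it's actually good at.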
And when these technologies are integrated well, AGI will be a fact, because that will be the moment AI is better than most humans at most things.
What a lot of AI critics are missing is that we don't need to invent anything fundamentally new to make this kind of AGI come into being.
And that is why, even though we may now be entering the Trough of Disillusionment on the Gartner Hype Cycle (as many people say), this tells us very little about the technology's eventual productive value.
The Future Isn't About "Real" Intelligence
The development of truly powerful AI won't come from achieving some platonic ideal of "real" understanding, or developing some form of supreme monolithic intelligence. It'll come from getting better at this integration dance.
Remember my story problem? The AI didn't need to "truly understand" human psychology to make that brilliant suggestion. It needed to integrate character information, narrative structure, and cause-and-effect relationships in a useful way.
That's not "just" pattern matching any more than your own creative insights are "just" neurons firing.
Or to (perversely) paraphrase philosopher Gilles Deleuze:
Intelligence is not like a tree growing from the ground up into the sky, but more like grass growing: from the middle on out.
Intelligence, in this view, is a horizontal, not a vertical notion.
Or to put it differently: AI already beats humans in a variety of specialised domains, but we are on the cusp of integrating these deep capabilities horizontally.
Or, in yet other words: AGI is already here, it’s just not evenly distributed.
Where This Leaves Us
The next time someone tells you AI can't "really" think or it's "just" doing X, ask them this: What does your own thinking look like when you strip away all the "justs"?
The future belongs not to those who get hung up on what AI can't "really" do, but to those who embrace its unique form of intelligence—including its differences from human cognition.
Because here's the thing: While the debates about "real" intelligence rage on LinkedIn and other platforms, existing AI systems are quietly transforming how we work. The question isn't whether AI can perfectly replicate human thought—it's whether you're prepared for the changes that are already here.
My story's plot twist wasn't just a happy accident. It was a glimpse of what's possible when we move beyond the "just" and "really" trap, and start exploring what AI can actually do.
And if you're still focused on what AI can't "really" do, you might be whistling past your own professional graveyard.
AI Signal of the Week: an LLM-controlled robot that can be built for under 250 bucks
Ever tried getting different workplace systems to talk to each other? That painful digital diplomacy where nothing quite connects? Well, our earlier thoughts about language models becoming universal translators just got real.
Berkeley and ETH Zurich researchers have demonstrated exactly what we've been discussing - ChatGPT acting as an 'orchestration layer' between human intent and physical action. Their robot arm understands natural conversation about spills, plans its cleaning approach, and executes the task seamlessly.
You really should watch the video.
Why It Matters
This shows language models genuinely bridging the gap between systems. Rather than learning specific commands for each tool, ChatGPT successfully translates between human language and robotic action. The accessibility of this setup - both in cost and complexity - suggests we're moving from theory to practical implementation of AI as an orchestration layer.
Your Takeaway
Look for processes in your workplace that need this kind of translation layer between human intent and system action. The technology is here, it's affordable, and it's waiting for the right applications. Where could a conversational AI bridge your system gaps?
Technical Note: The system uses LangChain framework to translate between GPT-4 and robot movements, trained on just 100 demonstrations. The entire setup is open-source and replicable.
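If you want a feel for what that translation layer does, here's a toy sketch in the same spirit (mine, not the researchers' code: the robot primitives and the keyword matching are invented for illustration, whereas the real system uses LangChain and GPT-4):

```python
# Toy illustration of "human intent -> plan of robot primitives".
# Primitive names (move_to, grasp, wipe, place) are made up for this sketch;
# the Berkeley / ETH Zurich demo uses LangChain + GPT-4, not keyword matching.
from dataclasses import dataclass

@dataclass
class Step:
    primitive: str  # e.g. "move_to", "grasp", "wipe"
    target: str     # object or location the primitive acts on

def plan_from_intent(utterance: str) -> list:
    """Stand-in for the LLM call that turns a sentence into an action plan."""
    if "spill" in utterance.lower():
        return [Step("move_to", "spill"),
                Step("grasp", "sponge"),
                Step("wipe", "spill"),
                Step("place", "sponge")]
    return []

def execute(plan: list) -> None:
    """Stand-in for the controller that runs each primitive on the arm."""
    for step in plan:
        print(f"robot: {step.primitive}({step.target})")

if __name__ == "__main__":
    execute(plan_from_intent("There's a coffee spill next to the mug, can you clean it up?"))
```

Swap the keyword check for a model call and the print statements for motor commands, and you have the rough shape of what the demo is doing.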
Future Signal: Your AI Assistant Is About to Get More Empathetic
Remember last week when we discussed the rising prominence of AI dictation? Well, there's more brewing in the voice AI space that deserves your attention.
Hume.ai (I am not affiliated) has launched what they're calling an "emotion-powered" AI system. It can detect 48 distinct emotional expressions in your voice—not just whether you're happy or sad, but the subtle nuances that make human communication rich and meaningful.
ChatGPT's voice mode does something equally fascinating: it handles group conversations by distinguishing between different speakers, learning your speaking patterns and preferences over time. Each interaction becomes more personalised and fluid.
Speaking to AI feels surprisingly natural. Over the past month, I've caught myself dictating more emails, crafting more reports through voice, and generally chatting with AI as I would with a colleague. The keyboard feels increasingly... optional.
Here's a dead simple way to start: Pick one writing task today. Instead of typing, hit the microphone button and just talk it through. You'll fumble. You'll feel awkward. Perfect. That's exactly how new superpowers feel at first.
Some tasks that work brilliantly with voice:
First drafts of emails
Quick meeting summaries
Initial project briefs
Brain-dumping ideas
Group brainstorming sessions (yes, in my experiments multiple people can chat with ChatGPT's Advanced Voice Mode, and it will recognise each speaker)
Suggestion: Start your next workday by dictating your to-do list. It's quick, it feels natural, and it's an excellent way to build the voice-AI habit.
Quick Context: While emotion detection in AI isn't new (sentiment analysis has been around for years), combining it with conversational AI and speaker recognition represents a significant step toward more intuitive human-AI collaboration.
Quick AI Power Move: The Right Way to Train Midjourney
Ranking images in Midjourney
When I mentioned last time that Midjourney can be trained on your taste, many readers were genuinely surprised. (And I was surprised they were surprised – I thought everyone knew!)
But here comes the interesting bit: while discussing this feature on Reddit, I learned I'd been using it wrong all along.
Here's the mistake I made (and you might be making too):
I was always just clicking whichever image looked better to me, even when I didn't like either of them. I still got better-than-default outputs, but I should have hit the "Skip" button more often and only clicked images I actually liked.
Here's how to train Midjourney properly:
Start with the prompt – Before clicking, read what the images were trying to achieve.
Judge against intention – Even if that landscape looks stunning, if you're rating anime-style images, pick the better anime one.
Use 'Skip' liberally – The golden tip. Not sure which better matches the prompt? Skip it. It's not being fussy; it's being precise.
The Bottom Line
Yes, this method takes longer than rapid-fire clicking on pretty pictures. But it's the difference between teaching Midjourney your actual preferences and giving it a confusing mess of mixed signals.
Think of it like training a highly capable but very literal colleague – be clear about what you're praising.
If this newsletter is helping you think more clearly about AI and your professional future, why not forward it to colleagues who might value the same clarity? The right insights at the right time can shift how we see our place in this changing landscape.
Forward this email to a colleague or subscribe at 10xbetter.ai