Large Language Models as Oracles

The Mirror in the Machine: Using Chatbots for Self-Confrontation

In this newsletter:

  • Large Language Models as Oracles

  • This Week's AI Signal: The Hubification of Large Language Models

  • Future AI Signal: NVIDIA's Fugatto - The AI That's Coming for Your Synthesizer's Job

  • This Week's Practical Power Move: Ignore This Viral Image Generation Hack

Ancient Magic and Modern Uncertainty

As a student of anthropology, I was always fascinated by Malinowski's observation about the fishing communities of the Trobriand Islands: they never used magic when they fished in the familiar lagoon. But when they went out to fish on the open ocean, they performed extensive magical rituals.

This led Malinowski to the thesis that the more uncertain the situation humans face, the more likely they are to turn to magic to stave off danger and regain a sense of control over the outcome.

Today, we face our own vast ocean of uncertainty.

And AI isn't just creating it - it's increasingly becoming our compass through it. Each morning brings another headline about AI reshaping our world, and yes, that's unsettling.

For many professionals, their expertise, carefully built over decades, suddenly feels like last year's software update.

But these same AI tools that are keeping people awake at night are also evolving into remarkably effective guides through this chaos they've helped create.

Like having a map that updates itself while you're trying to read it - disorienting at first, but incredibly useful once you learn to work with it.

Ancient Wisdom Meets Modern Technology

It might seem strange to some people, but I often find myself turning to the I Ching (an ancient Chinese oracle) for insight.

Not because I believe in supernatural forces (my crystal ball is strictly decorative), but because it helps me surface patterns in my own thinking that I hadn't recognised before.

The I Ching speaks of two wells: the internal well of one's own good character, and the external well of the oracle itself.

Wisdom comes from moving between these sources, a practice that has striking parallels with how we might productively engage with AI.

Beyond the Algorithm

Our current digital landscape is like a supermarket where algorithms keep suggesting more of what you've already bought.

(Yes, Amazon, I bought a garden hose once. No, I'm not planning to start a hydroponics empire.)

Social algorithms are great at giving us what we want, but they're not so great at giving us what we need.

The key is to plot our own course of discovery, like marking a route on a map, while remaining open to the serendipitous treasures that algorithms surface along the way.

Think of it as being an explorer with a clear destination who's still wise enough to investigate the intriguing detours that appear.

After all, recommendation engines can spark valuable discoveries; we just need to remain the captains of our own ship.

The Unique Promise of LLMs

LLMs offer something different from social algorithms. They can illuminate the uncharted territories of our potential, drawing out latent possibilities we hadn't considered.

Unlike recommendation engines that simply echo our past, LLMs can help us explore the hidden dimensions of ourselves - aspects of our potential that often remain invisible until properly prompted.

These tools already possess incredible power due to their flexibility and adaptability - imagine a Swiss Army knife with a million different tools existing simultaneously in a million different dimensions.

I believe we've barely scratched the surface of their capabilities.

Moving Beyond Surface Interactions

Many first encounters with LLMs end in disappointment. It's like handing someone a guitar and wondering why they don't immediately channel Jimi Hendrix.

When the result is noise that could clear a room faster than a fire alarm, they declare the instrument worthless - which is a bit like blaming the guitar because you haven't mastered "Purple Haze" after five minutes of trying.

As Nico Appel demonstrated in a recent discussion with Lance Cummings (both offer excellent newsletters worth following), the key lies in how we push past generic responses.

Appel shared a revealing moment when he refused to accept standard, sanitised responses from the AI, instead pushing until the system understood his dissatisfaction and began offering genuine insights.

It's like having a breakthrough with a therapist, except this one never needs a coffee break.

You can watch the terribly enjoyable talk here; Nico and Lance share many more insights than I've covered. If you are into exploring Large Language Models, definitely give it a watch.

A Practical Framework for LLM Self-Discovery (No Magic Wands Required)

If you're interested in using LLMs for self-reflection and discovery, here's how to begin:

  • Start with Review Sessions (Think of it as a coffee chat with a very attentive listener)

  1. Begin each week by describing your current challenges to the LLM

  2. Ask it to summarise patterns it notices in your thinking

  3. Request identification of potential blind spots in your approach

Example prompt: "Here's what I'm dealing with – what obvious things am I completely missing?"

  • Use the 'Mirror' Technique (Warning: brutal honesty ahead)

  1. Share your expert knowledge on a topic

  2. Ask the LLM to play devil's advocate

  3. Request analysis of your underlying assumptions

  4. Explore how others might view the same situation differently

Example prompt: "I think I've got this figured out – please tell me why I'm wrong."

  • Practice Progressive Deepening (Like going down a rabbit hole, but with a map)

  1. Start with basic questions about your field

  2. Ask what you're not asking about

  3. Explore adjacent areas you might be overlooking

  4. Connect your knowledge to unexpected domains

Example prompt: "How does this connect to things I don't even know I should be thinking about?"

  • Develop Your Prompting Style (It's like learning to dance, but with fewer bruised toes)

  1. "Help me explore my thinking about..."

  2. "What assumptions am I making that I might not be aware of?"

  3. "How might someone with a different background view this?"

  4. "What important questions am I not asking?"

Embracing the Future

We have arrived at an interesting crossroads. What has emerged as a major source of uncertainty for many people – these large language models – might actually become our best tool for navigating uncertainty itself.

The real question isn't whether AI will change how we think and work – that ship has sailed, hit an iceberg, and is currently being redesigned by artificial intelligence. The real question is whether we will use it to see ourselves more clearly.

Time to dive in and find out what lurks in your blind spots – it might be less scary than you think. The worst that can happen is your AI assistant tells you your brilliant idea needs work.

AI may turn out to be our most powerful tool yet for self-confrontation and personal growth.

Are you already using it this way? I'd love to hear your stories of discovery.

This Week's AI Signal: The Hubification of Large Language Models

Like a master octopus growing new tentacles, large language models are evolving into central hubs, reaching out to grab and integrate an ever-expanding array of tools and data sources.

OpenAI Expands How ChatGPT Works with Apps

As discussed last week, ChatGPT on macOS can now peer directly into your code editors like a nosy but helpful colleague, reading up to 200 lines without the copy-paste dance.

While OpenAI plans to expand this to text-based applications, it appears that Anthropic has taken a more ambitious leap.

Anthropic’s New Information Access Protocol

Anthropic's new Model Context Protocol (MCP) is reimagining how AI systems access information.

Think of MCP as a universal translator that lets AI systems speak fluently with any data source – from databases to documents, APIs to applications.

Previously, developers had to write custom code for each new data source they wanted their AI to access, like building a new bridge for every river crossing.

MCP creates a single, standardised bridge that works everywhere.
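To make that standardised bridge concrete, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK and its FastMCP helper; the server name, the single tool, and the notes folder are hypothetical examples, not part of the protocol itself.

```python
# pip install mcp
# Minimal sketch of an MCP server using the official Python SDK's FastMCP
# helper. The server name, tool, and notes directory are illustrative.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Return the names of local note files whose text contains the query."""
    notes_dir = Path("./notes")
    hits = [
        p.name
        for p in notes_dir.glob("*.txt")
        if query.lower() in p.read_text(encoding="utf-8").lower()
    ]
    return "\n".join(hits) or "No matching notes found."

if __name__ == "__main__":
    mcp.run()  # exposes the tool over MCP's standard transport
```

Any MCP-aware client can then discover and call search_notes without custom glue code; that single, shared interface is the bridge the protocol standardises.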

Leading platforms like Replit, Codeium, and Sourcegraph have already integrated MCP, allowing their AI agents to seamlessly navigate between different data sources while maintaining context.

Looking Forward

This shift from isolated AI systems to connected hubs isn't just another tech upgrade – it's the foundation for truly capable AI assistants.

And while your current digital assistant might still struggle to set a timer correctly, these developments suggest we're finally moving from AI that simply responds to AI that truly connects.

As these AI hubs evolve, they're starting to mirror how humans actually work: not as isolated experts, but as connectors who bring the right pieces together at the right time.

Future AI Signal: NVIDIA's Fugatto - The AI That's Coming for Your Synthesizer's Job

NVIDIA has unveiled Fugatto, a text-to-sound generator that's about to put your synthesizer out of work.

Your trusted synth might want to start updating its CV.

What Makes This Different?

Text-based song generators like Suno might create polished tracks, but they rarely offer genuine creative control over the sound itself.

Fugatto operates at a more fundamental, and for a musician like myself, more exciting level.

It works like a text-based Photoshop for sound - a tool that lets you manipulate and morph audio in ways that were previously impossible.

What Can It Do?

Want to hear a trumpet that meows?

Or transform the sound of a passing train into a sweeping string orchestra?

Fugatto makes it happen through simple text prompts.

The implications for sound designers, music producers, and creative professionals are staggering.

This tool essentially eliminates the technical barriers that have long limited sound synthesis.

For professionals working with audio, this means the ability to prototype ideas and experiment with sound in ways that would take hours (or be impossible) with traditional synthesizers.

Want to Know More?

Try the interactive demo site: fugatto.github.io

For those who love diving into technical details: Here's the complete research paper.

The Only Downside

Fugatto isn't publicly available yet, and NVIDIA hasn't announced a release date.

It feels like musicians, sound designers, and hobbyists are finally getting tools that augment their creative possibilities rather than simply automating them away.

This Week's Practical Power Move: Ignore This Viral Image Generation Hack

Remember Aitana Lopez? (Instagram, Wikipedia)

She made headlines a while ago as a pioneering virtual influencer, created by a Spanish social media agency.

This is she:

Aitana’s emergence signalled a profound shift: traditional beauty-based influencers might soon face serious AI competition.

But like Aitana, AI-generated images often appear too perfect, too polished – almost artificially flawless.

Today's story is about a supposed hack to make AI-generated images look more natural and less perfect.

Freepik, a mid-sized player in image generation, has been promoting an interesting claim about adding 'img_1025.heic' to prompts. Original X thread.

The logic seems sound at first: AI models were trained on vast collections of photos, including millions of iPhone images saved in the .HEIC format.

So theoretically, including this format in your prompt might trick the AI into producing more authentic, smartphone-like results.

I tested this thoroughly in Flux 1.1 Pro, and the results were similar across Midjourney and Stable Diffusion.

Let me demonstrate with a simple experiment I conducted.

I took the same prompt twice and applied the ‘hack’ to the second version:

1. "Happy girl in a dress spins around herself. Hair flutters beautifully. In the background is a cosy street with cosy cafes. Street lights."

2. The same prompt + "IMG_1025.HEIC"

These were the results:

“Perfect”: prompt without the “HEIC Hack”

“Real”: prompt with the “HEIC Hack”

The difference? Barely noticeable: a few slight compositional changes, but mostly the second image just appears a little more washed out.

It sounds like a clever hack, but in reality, it's as effective as trying to make your photos more professional by renaming them "professional_photo.jpg".
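If you want to reproduce the comparison yourself, the whole experiment is just the same prompt rendered twice, once with the suffix. Here is a minimal sketch, assuming the Replicate Python client and its hosted Flux 1.1 Pro model; the model slug is an assumption on my part, so swap in whichever image-generation API you actually use.

```python
# pip install replicate
# Minimal sketch of the A/B test. Assumes the Replicate Python client and a
# hosted Flux 1.1 Pro model; the model slug is an assumption, so adapt it to
# whichever image-generation API you actually use.
import replicate

BASE_PROMPT = (
    "Happy girl in a dress spins around herself. Hair flutters beautifully. "
    "In the background is a cosy street with cosy cafes. Street lights."
)

prompts = {
    "perfect": BASE_PROMPT,                  # plain prompt
    "real": BASE_PROMPT + " IMG_1025.HEIC",  # prompt with the 'HEIC hack'
}

for label, prompt in prompts.items():
    output = replicate.run(
        "black-forest-labs/flux-1.1-pro",
        input={"prompt": prompt},
    )
    print(label, output)  # URL of the generated image
```

Put the two outputs side by side and judge for yourself; in my runs the difference never went beyond that slightly washed-out look.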

The Real Power Move

When it comes to image prompts, simpler is better:

  • Keep descriptions clear and concise

  • Focus on the essential elements

  • Avoid unnecessary complexity

  • Trust in straightforward language

Remember: If a prompting trick sounds too good to be true, it probably belongs in the same folder as those emails from generous princes in Nigeria.

Share These Insights?

If this newsletter is helping you think more clearly about AI and your professional future, why not forward it to colleagues who might value the same clarity? The right insights at the right time can shift how we see our place in this changing landscape.

Forward this email to a colleague or subscribe at 10xbetter.ai