Written by

George Mitchell

Category

Opinion

Tags

VanticLab Staff

Blunt Lessons After Six Years in AI

Six years building AI products. Hundreds of founders convinced they’ve cracked it. Thousands of hours watching humans—myself included—pretend we understand what’s happening. Here’s what I’ve actually learned: nobody knows anything, but we’re all deeply committed to the bit.

“AI doesn’t automate work—it redistributes confusion. We’re not building intelligence; we’re debugging expectation.”

  1. Certainty Is Inversely Proportional to Shipping

    The people most confident about AI’s trajectory have usually built the least. Those who’ve shipped real systems talk in maybes; the loudest voices talk in absolutes.

  2. Every Prediction Ages Like Milk

    Read any AI forecast older than six months and try not to cringe. We all laugh at 2023’s predictions—while making the same ones about 2026. We’re astrologers staring at different stars, pretending they make sense this time.

  3. The Demo-to-Production Abyss

    A lab demo collapses the moment it meets real humans doing real work. Users are chaos agents—using your product in ways you never imagined, mostly to solve problems you didn’t know existed.

  4. “It Learned to Do X” ≠ “It Reliably Does X”

    The first happens in a lab. The second happens rarely, unpredictably, or never. Reliability is the white whale of AI—and everyone’s playing Ahab.

  5. Product Problems in AI Clothing

    Most AI issues aren’t technical; they’re design and communication failures. You don’t need a better model—you need better defaults and clearer limits. But “we set better expectations” doesn’t make TechCrunch headlines.

  6. Pessimists and Optimists Are Equally Useless

    The “just autocomplete” crowd and the “basically AGI” crowd are both wrong. The truth is boring: useful but limited. That’s why it doesn’t trend.

  7. The Pattern Always Changes

    Just when you’ve optimised for GPT-4, GPT-4.5 breaks everything. We’re building skyscrapers on quicksand and calling it strategy.

  8. Humans Are the Unstable Variable

    Put an AI in front of people and watch them attribute feelings—or blame it for mistakes they made themselves. We built tools. Users built mythology.

  9. Regulation Roulette

    Nobody knows what’s coming—not lawmakers, not lawyers, not labs. Yet we all write five-year plans like the rules aren’t about to shift entirely.

  10. “Emergence” Is Fancy for “We Don’t Know”

    Scale anything enough and strange things happen—good, bad, or just weird. We call it emergence because “unexplained chaos” sounds less fundable.

  11. The Seven-Hour Net Gain

    AI saves you ten hours, breaks three hours’ worth of stuff, and leaves you wondering if the seven were worth it. That’s the real user experience.

  12. Prompt Engineering Is Half Science, Half Séance

    You’re talking to a brilliant alien—slightly drunk, occasionally helpful. Some days it listens. Some days it ignores you. Nobody knows why. Try again tomorrow.

  13. Research Papers and Production Systems Exist in Parallel Universes

    What dazzles in an academic paper with cherry-picked data collapses the moment you give it to three users in rural Australia on patchy Wi-Fi.

  14. The Graphs Lie

    Pitch decks show clean exponential growth. Reality looks like a seismograph in an earthquake—chaos, dips, spikes, luck.

  15. Speedrunning Software History

    AI is replaying every software mistake—faster. Microservices yesterday, models today. Most people don’t need either, but only realise that after spending the money.

  16. The Bottleneck Is Always Human

    The model is fine. It’s the data, the UI, the onboarding, the training, the expectations, the trust. Pick one—it’s probably your actual problem.

  17. The Content Trap

    If you run a media channel and have ever declared that “models have changed the game,” you’ve signed up for a job that never ends. Every update, every API tweak, every hallucinated feature—you’re back at the mic, narrating progress at the speed of hype.

  18. Hype Has Seasons

    Today’s miracle becomes tomorrow’s baseline. Today’s flop becomes tomorrow’s infrastructure. We mistake noise for signal because we’re inside it.

  19. Managing Expectations Is the Job

    You’re not building AI—you’re managing belief: your own, your customers’, your investors’, and the model’s.

  20. The Only Real Lesson

    AI works better than skeptics think and worse than optimists promise. It changes everything—and almost nothing. Nobody knows what happens next. We’re just building, watching what breaks, and pretending it was intentional, which, to be fair, is how humans have always done it.


The closer you get to the code, the quieter the certainty becomes. Everyone’s an oracle until it’s time to deploy.

In the end, if we can accept that we know very little about a whole lot, it’s worth remembering that most people, at their core, mean well. Perceptions will keep fracturing, and not even the smartest language model on the planet can predict the fallout.

While the models—and the humans building them—will keep guessing what comes next, they’ll never feel the pulse beneath it. Meaning still belongs to us: fragile, conflicted, human. That’s where the real intelligence will live for now. In a world obsessed with prediction, understanding may soon be the rarest pattern of all.

