At the start of this year, Asim sat down on the sofa, opened YouTube, and watched a compelling video from what appeared to be a senior IMF official. The man was articulate, credentialed, and persuasive — the kind of content that makes you feel properly informed. Asim watched the whole thing and wanted more.
So he clicked through to the next video on the same channel. The same IMF official sat in the same room, wearing the same shirt, in the same pose. But he had a completely different voice and accent.
That moment of dissonance — the moment Asim called his wife over to confirm he wasn’t going mad — is where Season 2 begins. The whole video, it turned out, was AI-generated. The person didn’t exist in the way he appeared to. And the really unsettling part? If that second video had used the same AI voice as the first, Asim would never have known.
Welcome back to House of Life. We’re kicking off Season 2 exploring the nature of truth in a post-truth world.
This is not a new crisis
We tend to think of fake news and misinformation as a recent crisis. But deception is as old as human beings. What’s genuinely new — and genuinely alarming — is the collapse of what Asim calls the epistemic backstop.
For most of recorded history, if you wanted to establish that something happened, you needed testimony. And testimony, as Asim explains in this episode, was deeply shaped by class and social hierarchy. A general’s account outweighed a gardener’s, not because it was necessarily truer, but because of who was doing the talking. This was the world that the ancient Athenian concept of isegoria was designed to push back against — the idea that the argument itself should matter more than the status of the person making it.
Then came recordings. Audio tapes, photographs, CCTV footage, video. Suddenly there was an independent category of evidence that didn’t depend on who you were. A gardener’s home video could disprove a minister’s alibi. A voice recording could be played in court. The playing field tilted — and perhaps, as Asim suggests, it quietly helped chip away at class society in ways we’ve never fully credited.
That backstop is now gone. Completely and permanently.
“I don’t think people have fully grokked yet that it’s gone,” Asim says. “There’s no ability for us to trust anything.”
The Liar’s Dividend
The playing field is now being tilted again, and this surfaces a dynamic we rarely talk about: the liar’s dividend. The term describes the other side of the AI-generated content problem: not just that fake content can be created, but that genuine content can now be dismissed.
If a recording emerges of a powerful person saying or doing something damning, all they need to say is: it’s AI generated. The technology that creates false evidence also provides a built-in defence against real evidence. It poisons the well in both directions simultaneously.
This is why the collapse of trust in institutions and the rise of AI-generated content form such a dangerous combination: each feeds the other in a self-reinforcing loop.
So how do we become more aware of what is real and what is not?
Nothing is perfect
There’s a strange paradox at the heart of how AI content gets detected. The very thing that makes AI videos so compelling — the flawlessness of delivery, the unbroken eye contact with the camera, the 30-minute monologue without a single “um” — is also what gives them away.
Human beings are imperfect. We look away, we lose our train of thought, we stumble over our words. When Asim rewatched the fake IMF video with fresh eyes, he noticed the eyebrows moving almost algorithmically — precisely timed, a fraction too consistent. The eyes never drifted. Real people talking to cameras, as Tom and Asim both know from experience, just don’t behave like that.
Asim finds both irony and hope in this. “These AI-generated videos are just so perfect,” he observes, “because that’s the way we’ve been taught by the algorithm to be perfect.” The race for views, likes, and engagement trained human creators to cut pauses, tighten intros, and optimise every second. AI then learned from all of that optimised content — and produced something uncannily frictionless.
But perfection, it turns out, is a tell.
We want to believe
The episode takes a detour into AI-generated music that illuminates something profound about why authenticity matters beyond mere accuracy.
Tom describes hearing AI-generated songs played by a friend. They were genuinely beautiful, and he felt emotionally touched by them in the moment. But he hasn’t gone back to listen again. There’s something he calls an “icky feeling” that he can’t quite shake.
Asim’s framing is sharper: music with real lyrics and a real singer is testimony. You’re not just hearing sounds that are pleasant — you’re hearing someone’s experience of heartbreak, or joy, or grief. You’re entering into relationship with a person’s interior life. We want to believe the human story that we have told ourselves about what we are listening to. When you discover the song was artificially generated, that relationship turns out to have had no one on the other end of it. It feels empty.
This maps onto something important about the AI news video too. The reason Asim felt so cheated wasn’t just that he’d been misled — it’s that he thought he was connecting with a real expert’s hard-won knowledge. The illusion of testimony, it turns out, is not the same as testimony itself.
Note: Check out the note at the end of this post for an ironic twist on this part of the conversation.
Don’t mention the Moon!
In one of the episode’s more audacious tangents, Tom raises the question of the upcoming moon landing — and what it means to plan a return to the lunar surface at precisely the moment that photorealistic AI video generation has become viable.
He’s careful: he’s not saying it will be faked. But the question is genuine and unsettling. If a group of people had the technological capability to simulate a moon landing convincingly right now, those people would include the same governments, space agencies, and technologists currently involved in the actual mission. “How will we know?” Tom asks.
Asim’s response is to wonder whether pre-AI evidence might acquire a kind of sacred status — things we know were recorded before the inflection point might be trusted in ways that post-2024 footage simply cannot be. History could effectively split into two epistemic eras: before the models, and after.
So, how do you know this podcast is real?
The episode ends not with despair but with a kind of productive defiance. Both Tom and Asim arrive, from different directions, at the same place: imperfection as proof of humanity.
Asim describes his own evolving approach to social media. He uses AI constantly in his research, but when he writes and posts, he wants it to sound like him — the slightly odd sentence structure, the run-on clauses, the things that don’t quite land perfectly.
“What makes you you isn’t your perfection,” he says. “It’s your mistakes. And the unique ways that you make your mistakes.”
Tom draws a parallel with independent restaurants. A chain can simulate the aesthetic of an independent café — exposed brick, mismatched chairs, quirky signage — but there’s always something that feels off, because the real character of a place comes from an actual human being who actually lived there and made actual choices. You can’t manufacture that across 50 locations.
The same, they think, might be true online. AI will produce an ocean of frictionless, optimised content. And some people will be perfectly happy swimming in it. But increasingly, others will seek out the rough-edged, wandering, imperfect thing — not despite its flaws, but because of them.
If you’ve listened to House of Life before, and especially if you have watched the videos, you’ll know that it has been intentionally imperfect from the very beginning. That’s how you know this podcast is real.
Tell us what you think
As always, we’d love to hear your thoughts. Have you been fooled by AI-generated content? Do you find yourself seeking out imperfection as a signal of the real? And do you think the information matters more than the form it comes in — or are they inseparable?
Leave us a comment on Substack, and if you enjoyed the episode, we’d be very grateful if you liked, restacked and shared it with people you know.
Big thanks
- Tom and Asim
Oh, and a little twist…
Dancing off the discomfort of AI
Having talked in this episode about how AI-generated music can be genuinely good and yet somehow feel empty, Tom was unexpectedly inspired by a dream and wrote an album of his own, by hand, then used AI to produce the finished music so that he could listen to his own songs.
This turned out to be transformative: there was now a human soul behind the words, which were deeply meaningful (not to mention funny). The album was remarkably well received, with many people resonating with the words while also enjoying the quality of the music production.
In some ways this isn’t radically different from electronic music production, which Tom has actually been studying, but it is radically easier. It’s interesting, then, that while many deeply enjoyed the album, a segment of people are reluctant to listen to it once they know that AI was used in the production.
Tom follows the principle of being transparent and honest, but does social resistance create a perverse incentive to conceal the use of AI? In a world where AI is increasingly pervasive, it’s a question we surely need to think about.
You can read the background to Tom’s album, Stealth Defenders of the Status Quo on his other Substack, Humanitas et Machina.
It contains tracks such as Sand Ramp Pyramid, Meat Suit Immortality, the controversial Double Slitted Skirt, the Buddhist anthem Bins and Taxes, and Tom’s love song to humanity, Total Eclipse of the Truth.
Listen to it (and read the lyrics) on most major music apps, such as Spotify, Apple Music and YouTube Music. And here it is below.