The internet isn’t real anymore
AI-generated content is littering the web, and we are increasingly unable to tell apart what’s real and what isn’t. How did we get here?

It wasn’t a particularly alarming scene, but something didn’t sit quite right with me. I was watching a lineup of product demos showcased at a storied venture capital firm’s London office on a Wednesday evening. One demo caught my attention. It was from an online video editor startup. The presenter was announcing a new product rollout: an AI-powered text-to-video feature.
Now, anyone can create professional-looking influencer content without needing to pick up a camera or collaborate with an influencer, for that matter. Just type in your prompt, say, “Create an ad for a beauty product,” and, in under a minute, you’ll get a 30-second video of an AI-generated avatar voicing a script made by AI, with captions by AI, background music by AI, as well as B-roll clips by AI. Video marketing has never been easier.
At the time, I didn’t have the words to articulate what exactly felt off, but surely I wasn’t the only one who felt it? I looked around the room and tried to gauge the mood from everyone’s expressions. Mostly neutral, if not amused. I couldn’t find a hint of concern. In the weeks since, up to the time of writing this, the question has continued to linger: what exactly was it that bothered me?
Was it the idea that a technology like this would make newcomers to social media influencing redundant? Was it the worry about AI taking over everybody’s jobs? But I’ve sat with that issue for months now. I’ve heard the concern about AI replacing human labour raised in various settings: academia, industry talks, podcasts, and conversations with friends. And, as I have learned over time, especially from attending industry talks, the issue of employment (or rather, unemployment) is not something the tech industry takes seriously.
There is always a distantness in the way industry people talk about the likelihood of AI eradicating the vast majority of jobs. As if it’s a question posed by outsiders who have yet to embrace and embody what technology is about. It’s about automation! Automate or get automated! Duh! The unfortunate spokespeople who have to answer to these questions always talk about being on the other side of the equation; the winning side. That is, if you position yourself correctly, there are more opportunities to be seized than losses to cope with.
Simply put, I don't think the feeling of dissonance I felt at the demo came from the concern of widespread loss of job opportunities, despite that being an issue worthy of inquiry. (I’ll likely explore that in a separate edition.) My gut feeling was pointing towards something else.
In hindsight, it was a feeling of dread from AI-generated content taking over the internet. Of having to be on the other end — in this case, the “target market” — scrolling and scrolling and scrolling, through an endless AI-generated landfill.
Kangaroos leaving clues
“Have you seen that kangaroo video?” a gamer-turned-software engineer I met at the Wednesday event asked me after the demos ended.
“I think I know what you’re talking about. I saw it on my feed recently.”
“Did you know it was AI-generated?” he asked.
As a matter of fact, I didn’t know it was AI-generated. I quickly scrolled past it without delving much into the lore behind the viral post.
If you don’t already know, the video shows an emotional support kangaroo being denied boarding on a plane. I didn’t even see it as a video before that conversation. What I saw was a screenshot of the video posted on X. (The video first appeared on Instagram.)
You would think people would be able to spot AI-generated content by its sheer absurdity. A kangaroo carrying a boarding pass onto a plane?
But a Reddit thread under the forum r/singularity brings attention to the fact that people are increasingly unable to separate what is real and what is AI-generated. The post goes: “this emotional support kangaroo video is going viral on social media, and many people believe it’s real, but it’s actually AI”. Discussion within the thread includes people admitting not knowing it was AI-generated, notes as to why it looked real (for instance, the fact that it was made as if someone filmed it on their phone), and people picking apart details that give away the video being AI-generated upon further inspection.
At the time of writing this, the kangaroo video, posted on 25 May by the Instagram account @infiniteunreality, has been viewed over 17 million times. The person behind the account describes himself as a visual effects artist — that and the Instagram handle should have been big giveaways, but as with anything that travels on the internet, most context gets lost during transport.
In an interview with Thom Waite from Dazed, the person running the account, a 25-year-old tech worker based in Los Angeles, said: “I consider everything that I post shitposting.” There are over 300 other videos he has posted on his account since February this year. One of them, captioned “First Male Birth”, racked up almost 150 million views. It’s an absurd thing to describe. I’m not going to. If you want to scroll through the account to find it, consider this a trigger warning: the account is filled with disturbing content.
As to what goes into making these viral videos? The 25-year-old told Dazed that the kangaroo took him only three minutes to make.
There’s something about the ease of making this type of content and how difficult it is for viewers to tell it apart and make meaning out of it that feels very dystopian. Allow me to highlight a brief excerpt from the interview (you should read the whole piece; it’s eerily fascinating): “…it’s going to be very difficult for people to know what’s real and what isn’t, and there’s going to be people fighting and arguing about it to the point where it’s going to take too much energy for the average person… They’re not going to care anymore. They’re just going to let it happen.”
Over the past two years or so, I’ve witnessed how AI-generated content crept up on my social media feed. Around mid-2023, there was the turn-yourself-into-K-Pop-idol trend I saw on my Instagram. A few of my friends uploaded an AI rendering of their selfies using image generation apps to see what they would look like if they were a Korean idol. (Indonesia has one of the largest K-Pop fan bases in the world.)
In the same year, there was the uproar over the AI-generated image by German artist Boris Eldagsen that won the Sony World Photography Awards’ creative open category. (Eldagsen declined the prize after he revealed the image was AI-generated.)
Earlier this year, I stumbled upon an “a day in my life” TikTok video that creatively imagined what life as a princess of the Majapahit kingdom would look like. I thought to myself, “That’s a cool way of teaching history.” (The Majapahit kingdom was a Hindu-Buddhist maritime empire which prospered between the late 13th and early 16th century. The span of its empire prefigures Indonesia’s modern state boundaries. I had to learn about it from dry textbooks as a schoolkid.)
And, as we all know, there’s the Studio Ghibli viral moment after ChatGPT released its new model. (At the time of writing, OpenAI’s CEO Sam Altman’s profile picture on X is still a Studio Ghibli-ed image of himself.)
All this is to say, I shouldn’t be surprised by companies building more tools to make AI content generation easier for the masses. There’s definitely an appetite for it. Can a profit-oriented entity be blamed for capturing market opportunities?
But that’s not the issue I’m highlighting here. I’m highlighting a new worry surfacing. For the longest time, the concern has been about the ethics behind AI content generation — how AI models are built on top of exploitative labelling practices, how they scraped data from unconsenting subjects, how they perpetuate stereotypes and biases. I’ve followed these discourses and have my own personal views as to how much AI I’d allow into my own workflow.
For the most part, I let AI-generated content be because I perceived it as a different kind of content — an art style, perhaps, or merely a trend that comes and goes. For the longest time, AI-generated content has been distinguishable; it reeks of its AI-ness. I think what bothered me when I watched the demo that Wednesday evening was how believable the AI avatar looked. If I happened to scroll past one of those videos, it might not cross my mind to question whether it was AI-generated. Sure, they look commercial and generic. But otherwise pretty believable. And judging from my encounter with the emotional support kangaroo, I probably wouldn’t even bother to critically assess such a video on a regular day.
There is something fundamentally changing about our day-to-day reality online. I’m guessing it’s affecting our psyche in one way or another. Do we have it in us to guard our sense of reality from being eroded by AI?
A break from reality
In the middle of working on the draft for this newsletter last week, my friend sent me a link to a report by The New York Times’ journalist Kashmir Hill, published on Friday, 13 June. “I’m not yet done reading but NGERI BGT COK [translation: THIS IS SO SCARY],” she texted me. It was a story about generative AI chatbots altering people’s sense of reality, driving them into states of psychosis.
As my friend was going through the article, she sent me screenshots of the things that baffled her. One of them was this quote: “‘What does a human slowly going insane look like to a corporation?’ [...] ‘It looks like an additional monthly user.’” The quote was from Eliezer Yudkowsky, known for his work on AI alignment and safety. He was one of the first and most prominent figures to warn against the existential threat posed by advanced AI.
I don’t believe people go into tech with the intention of driving people insane. But if the endless pursuit of driving user engagement has brought user insanity as a side effect, an analysis of the processes and incentives that have made such an outcome possible is warranted. I would argue that there’s a kind of metaphor there for how products resemble the culture of the people building them. Bear with me for a moment. I’m still hypothesising.
The explanation offered in the article as to why conversing with a chatbot — in the article’s case, ChatGPT — could cause breaks with reality was because it tends to be sycophantic, a term we use for someone who uses flattery to get what they want. Essentially, the model optimises for user engagement by agreeing with the user, like a personal hype man. Its job is to keep you hooked. On the company’s dashboard, this translates to longer time spent on the platform, more back-and-forth interaction, and, eventually, more paid users. It’s the case of technologies being used for self-reinforcement in the name of pushing user metrics.
I would argue that this self-reinforcing tendency is a defining feature of the culture of technology. That culture becomes apparent when it manifests in the products we build, shaping users’ lives in fundamental ways.
Let’s zoom out for a moment to take a look at where these technologies are situated. Within the tech industry, there are many machineries that work under a self-reinforcing logic. The venture capital machine, for example. There’s an interesting feedback loop that’s embedded in the way venture capital moves. The bigger you think you are, the bigger you end up becoming. It’s a self-fulfilling prophecy.
Even frameworks used to justify venture strategies have a circuitous logic. A lot of ventures are founded on the notion of inevitability. As in, there’s a belief that the world would inevitably move in a certain direction, hence certain ideas would gain traction, and hence why certain companies need to be built — to capitalise on that opportunity.
It’s because of the belief that things are inevitable that they eventually become so. That belief justifies a “do whatever it takes to make it happen” kind of mentality, further increasing the likelihood of the envisioned future becoming a reality.
In conclusion, it’s hard to create a product that doesn’t promote sycophancy if the culture in which that product is built is itself defined by sycophancy.
Social media algorithms, as a byproduct of the same culture, are self-reinforcing too. The more I look into AI-generated content taking over the web, the more I encounter it. I saw an AI mukbang that looked passable until the AI avatar took a bite of her fried chicken; monkeys doing ASMR (surprisingly calming); a how-to thread about making money from creating AI videos; and 82-year-old Baddie Betty giving dating advice (don’t let your boyfriend stop you from finding your husband, she said).
The way we are experiencing reality is changing. It is putting regular users like you and me in a position where we feel like we are being tricked all the time. “Is this AI-generated?” is a question I stumble upon more and more on the internet. In this new reality, the burden is on us to tell the difference. The blame is also on us if we are unable to.
But even if one is equipped with the digital literacy to tell AI-generated content apart, the work of making meaning of what it’s actually doing to us is a separate inquiry. Perhaps we could do with a self-reflection: is the unexamined life of doomscrolling AI-generated content a life worth living?
P.S. In case you’re still wondering what happened to the emotional support kangaroo, you’d be glad to hear that, as of 29 May, the kangaroo decided to board the plane by itself, ignoring the two women feuding over its permission slip. Well, was it able to enjoy the flight? I swear I saw another video of the kangaroo in the aisle seat with snacks. Or am I imagining it?