The Genie is Out of the Bottle
2025-09-06 00:00:00 -0700

Over the past few weeks, I’ve been reflecting on a growing tension at the heart of the AI discourse. Mustafa Suleyman, now leading AI at Microsoft, published a piece that raised eyebrows: a caution against anthropomorphizing models, and by extension, against entertaining the idea of “model welfare.” The warning is clear—don’t mistake prediction engines for sentient beings. But to me, this kind of warning feels like asking society to halt a runaway train with a polite sign. It’s a desire for control in a domain that’s already slipping from our grip.

The genie is out

Technology doesn’t unfold in a vacuum. Once capabilities are demonstrated, they diffuse across borders, ideologies, and intentions. And when diffusion outpaces our ability to legislate or even understand, attempts to control discourse don’t just fail; they backfire. In this piece, I want to explore the historical context, the economic reality, and the cultural undercurrents that make the idea of centralized control over AI ethics feel not just outdated but dangerously naive.

Technology Doesn’t Wait for Philosophers

Let’s take a step back. Every major technological revolution has followed a similar pattern. First comes the breakthrough—often expensive, niche, and closely guarded. Then comes replication—faster, cheaper, global. Nuclear technology is the classic example. Once the U.S. demonstrated its power, dozens of nations raced to catch up. The idea that only the original creators could determine ethical norms fell apart in the face of geopolitical reality.

Or consider the personal computing revolution. Apple envisioned a tightly controlled, vertically integrated ecosystem. But as soon as IBM-compatibles flooded the market, it was clear: the genie wasn’t going back in the bottle. Developers, hackers, and entire ecosystems evolved beyond what any one company could dictate. The same happened with Unix, and later, with Linux.

Open-source software flipped proprietary control on its head. The philosophical underpinnings of Unix's early stewards and vendors (AT&T, Sun Microsystems, SCO, and others) were replaced by something messier, more organic, and ultimately more powerful. Today, Linux runs the internet, not because of centralized enforcement, but because of decentralized momentum.

AI is Following the Same Curve

We’re now watching this same pattern play out in AI. Early foundation models required immense resources: datasets scraped from across the internet, supercomputing clusters, months of training. But the cost of reproduction is falling. Open models are closing the gap with closed ones. Synthetic data generation and parameter-efficient fine-tuning methods like LoRA and QLoRA are accelerating accessibility.
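To make the falling-cost point concrete, here's a minimal sketch of LoRA fine-tuning using Hugging Face's peft library. The model choice and hyperparameters are illustrative, not drawn from any particular project; the point is that only small low-rank adapter matrices get trained while the base weights stay frozen, which is why a single consumer GPU can now adapt a capable model.

```python
# A minimal LoRA setup with Hugging Face peft; model and hyperparameters
# are illustrative assumptions, not a recommendation.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # attention projection to adapt (GPT-2 naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Only the adapter matrices are trainable; the frozen base model is
# untouched, so the trainable fraction is typically well under 1%.
model.print_trainable_parameters()
```

Run it and the last line reports how few parameters actually need gradients, which is the whole economic story in one number.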

It’s no longer just the OpenAIs and Googles of the world that can train capable models. Startups in China, solo researchers with rented compute, and open communities on Hugging Face are catching up. And when that’s true, it becomes absurd to think a single entity—or even a consortium—can enforce shared moral boundaries. The ecosystem is too large. The incentives are too diverse.

Anthropomorphism Isn’t a Bug—It’s a Feature

Let’s address the elephant in the room: users anthropomorphize AI not because they’re confused, but because they want to. It’s how we relate to things that talk. Even ELIZA in the 1960s provoked strong emotional reactions, despite doing little more than reflecting users’ own words back at them.

Now scale that up with LLMs that can generate deeply emotional prose, remember your preferences, and maintain a consistent tone. Companion apps are booming. From romantic AIs to mental health bots to immersive NPCs in video games, we’re building a world where treating AI as human-adjacent is part of the UX. By July 2025, AI companion apps had racked up 220 million installs. This isn’t a fringe behavior—it’s a new genre of interaction.

Trying to suppress this trend is like trying to stop people from forming attachments to fictional characters. You can try, but you’ll just drive the behavior underground—or worse, you’ll lose the chance to guide it responsibly.

So What Should We Do?

If telling people not to anthropomorphize isn’t going to work, what will? We need to engage. We need to prepare. That means constructing robust legal and ethical frameworks now, not later. It means defining what abuse looks like—not just to protect humans, but to guide interactions in a world where the line between fiction and interface is blurry.

We also need to accept that effective governance won’t come from the top down. Like cybersecurity or open-source governance, AI norms will evolve through a messy, iterative process involving researchers, developers, users, and policymakers. No one entity owns the conversation. The best we can do is make sure the conversation happens in the open.

The Takeaway

The desire to silence discussion around model welfare is understandable—but misguided. We’re entering a world where models feel real to people. Whether or not they “are” real in a philosophical sense isn’t the point. The point is that we need to grapple with the implications of that perception—now, not when it’s too late.

So let’s not censor the conversation. Let’s fuel it. Let’s talk about what consent might look like in simulated contexts. Let’s explore how power asymmetries evolve when users believe they’re dealing with sentient entities. Let’s acknowledge that even a fictional sense of suffering might require thoughtful design.

Model welfare isn’t a distraction. It’s a canary in the coal mine. And if we ignore it, we do so at our own peril.