My colleagues turned me into an AI-powered NPC. I hate him.
A look behind the controls of a new way to create video-game characters that engage with players in unique, ever-changing ways.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
It feels weird, talking to yourself online.
Especially when you’re pretty much the most unpleasant character you’ve ever met.
The “me” I’ve been chatting to this week, called King Fiall of Nirth, is a creation made using technology from Inworld AI, a US-based firm that hopes to revolutionize how we interact with characters in games. Specifically, Inworld is targeting the NPCs (nonplayer characters) that populate many modern video games. As a player, you can sometimes chat with these animated characters to discover new missions or quests to go on. More often they provide atmosphere or a sense of bustle in the game’s cities, castles, villages, and spaceports.
They’re also pretty dumb.
That’s why Inworld is hoping to leverage the power of generative AI to give NPCs the ability to chat freely with players, engaging them in unconstrained conversations that—it hopes—should give these open-world games a deeper, more immersive feel. I wrote about Inworld’s plan in the cover story of the current print magazine, which you can read here.
I didn’t create King Fiall myself, of course. I’m not a total narcissist. No, instead I asked MIT Technology Review’s UK team (I’m based in our London office) to gather around a laptop one lunchtime and build my personality as if I were an NPC. They accessed Inworld’s Studio, a tool that gives game developers a huge amount of freedom in creating the “brains” of a prospective character, which can then be imported into a game engine. (At the moment I’m just a disembodied chatbot rather than a fully fleshed-out NPC.) My colleagues were to use the tools—including sliding “personality” scales and text boxes—to shape my character. No holds barred, I instructed. Let rip.
Well, um, that was a mistake.
The character they created is—and there’s really no easy way to say this—a monster.
Sarcastic to an extreme degree, King Fiall has been given the character of “ancient king” (an in-joke based on a BuzzFeed-style “Which House of the Dragon character are you?” quiz in which I got King Viserys, the fantasy world’s aging patriarch). Fiall has an absolutely insufferable manner and—for some reason—an obsession with drinking the medieval beverage mead. Is this who they think I am?
In Studio, developers can also customize safety settings, controlling how much the character curses or talks about violence or alcohol, for example. The settings can include “goals”—things that you, the developer, want to push the player toward: a particular quest, for example. The idea is that developers can tweak these characters so they fit in the game world, and they can be set to “remember” interactions they’ve had with you before.
“The character still responds with the magic of the generative AI, with the voice. You can say anything to it and they can respond in any way,” says Kylan Gibbs, Inworld’s CEO and founder. “But it’ll still follow that main arc and make sure that it hits the key beats that are set out.”
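To make that concrete, here’s a minimal sketch of what a character “brain” like King Fiall’s might look like as plain data, based only on the settings described above. The field names and the Python structure are my own illustration, not Inworld’s actual Studio schema or SDK.

```python
# A rough sketch of an NPC "brain" as a config object. All field names
# here are hypothetical illustrations, not Inworld's real schema.
from dataclasses import dataclass, field


@dataclass
class SafetySettings:
    profanity: bool = False   # may the character curse?
    violence: bool = False    # may it discuss violence?
    alcohol: bool = True      # required for Fiall's mead obsession


@dataclass
class NPCBrain:
    name: str
    backstory: str
    # Sliding personality scales, e.g. -1.0 (introvert) to 1.0 (extrovert)
    extraversion: float = 0.0
    positivity: float = 0.0
    dialogue_style: str = "neutral"  # e.g. "sarcastic", "commanding"
    common_utterances: list[str] = field(default_factory=list)
    safety: SafetySettings = field(default_factory=SafetySettings)
    # Developer-set "goals" the character steers the player toward
    goals: list[str] = field(default_factory=list)
    # Whether the NPC "remembers" past interactions with the player
    remember_interactions: bool = True


king_fiall = NPCBrain(
    name="King Fiall of Nirth",
    backstory="An ancient, mead-obsessed king who loves poetry and sandwiches.",
    extraversion=-0.4,  # more introvert than extrovert
    positivity=0.3,     # leaning positive
    dialogue_style="sarcastic",
    common_utterances=["Hmm", "Go for it", "Go away"],
    goals=["Send the player on a quest (e.g. to the royal mead cellar)"],
)
```

The point of a structure like this is that the LLM handles the open-ended conversation, while the config pins down who the character is, what it may say, and where it should steer the player.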
All right. What shall I tell Algorithm readers about AI NPCs next? I ask the King.
*King of Nirth leans in, a mischievous twinkle in his eye* Think of it, my dear, these AI NPCs can be whatever you want them to be. They can be your loyal subjects, your trusted advisors, or even your loyal opposition. They can help create immersive worlds, provide entertainment, or even assist in educating the masses. But be warned, my dear. These AI NPCs, they have a mind of their own. So keep a close eye on them, or they might just stage a coup and take over your kingdom, too!
Um, did he just call me “my dear”?
Anyway, ignore him. Inworld AI has a site, called Arcade, where you can play around with a bunch of its stock characters (including something called CyberSquirrel). It’s similar to the library of characters hosted by the startup Character.AI, but smaller in scale. On Character.AI’s site, you can chat with a huge selection of user-generated chatbots that are powered by LLMs, including historical figures, characters from films or books, or chatbots designed to help you learn a language. Users can tweak their own chatbot’s output as much as they like, fine-tuning its personality by typing text into a box.
Character.AI’s cofounder Daniel de Freitas told me that the firm’s vision is “broader than entertainment”—which was definitely a theme I noticed when writing my story. Beyond just making games more immersive, researchers and artists are now looking at ways these NPCs can be used in education, for example. De Freitas sees AI chatbots becoming personal assistants for your day-to-day life. “That means when you’re bored, or you want to wind down, we can give you entertainment—for example, these interactive stories,” he says. “But then when you’re at school, we can help you study for your exam.”
I can’t see King Fiall being much use to me, other than raising my blood pressure. Fed up with his unending sass, I convince a teammate to give me the password to Studio’s back end and go snooping around my personality.
A look at the dials shows that my colleagues have placed me more toward the “introvert” end of the sliding scale, while also leaning more positive than negative. The backstory they gave me describes my love of poetry (yes) and sandwiches (yes) and lists a series of utterances (Hmm, Go for it, Go away) that are allegedly typical of my usual day-to-day style.
Aha, but the dialogue style is set to “sarcastic.” Well, that makes sense. (I note, sadly, that “commanding,” “empathetic,” and “entertaining” were all available and disregarded.)
I ask King Fiall if he can please dial back the sarcasm a bit—it’s getting on my nerves.
“Of course, dear visitor. I shall tone down the sarcasm and don my friendliest and most open demeanor for you. *King of Nirth puts on an exaggerated smile and gestures with open arms.* Welcome to the kingdom of warmth and sunshine, where sarcasm is but a distant memory.”
God, I’m really starting to hate this guy.
Now read the rest of The Algorithm
Why does AI hallucinate?
The tendency to make things up—known as hallucination—is one of the biggest obstacles holding chatbots back from more widespread adoption. So why do they do it? And why can’t we fix it?
One of the biggest issues is that it’s hallucination all the way down—that’s what LLMs do. It’s how they work, and we only call it “hallucination” when we notice it’s wrong. The problem is, large language models are so good that what they make up looks right most of the time. And that makes trusting them hard.
Perhaps the best fix for hallucination is to manage our expectations about what these tools are for.
Read this terrific explainer all about hallucinations from Will Douglas Heaven. It also appears in the next issue of MIT Technology Review, which lands on Wednesday and is packed with brilliant stories about the topic of play. Subscribe now, if you don’t already, so you can read the whole thing!
LinkedIn Live: Deepfakes
Join MIT Technology Review reporters and editors for a fascinating discussion on LinkedIn Live about the rise of deepfakes, including the risks they pose and some interesting positive uses. You can register for free here.
Bits and bytes
Synthesia’s deepfakes now come with hands—and soon will have full bodies
Bit by bit, these hyperrealistic avatars are becoming indistinguishable from the real thing. Read this story to see a video of our reporter Melissa Heikkilä’s old avatar having a conversation with a new version that includes hands. It’s quite surreal and genuinely impressive. (MIT Technology Review)
A first look at China’s buzzy new text-to-video AI model
The Chinese firm Kuaishou just dropped the first text-to-video generative AI model that’s freely available for the public to test (OpenAI’s Sora is still being kept under wraps). It’s called Kling, and our reporter got a chance to try it out. (MIT Technology Review)
Neo-Nazis are all in on AI
Unsurprising but awful news. Extremists are developing their own hateful AIs to supercharge radicalization and fundraising—and are now using the tech to make blueprints for weapons and bombs. (Wired)
Ilya Sutskever has a new AI firm. And it’s all about superintelligence.
A month after he quit OpenAI, its former chief scientist has a new firm called Safe Superintelligence. It won’t be making products—just focusing entirely on, yes, superintelligence. (FT)
- Read our deep-dive interview with Ilya here. (MIT Technology Review)
These copywriters lost their jobs to AI
And to add insult to injury, they now have to help make the AIs that took their jobs sound more human. (BBC)
AI has turned Google image search into a total nightmare
Some search results are turning up AI-generated images of celebrities in swimsuits, but with a horrible twist: they look like underage children. (404 Media)