The Good Place’s Janet Is the Most Optimistic AI on TV
Science fiction is where artificial intelligence goes to suffer. In nearly every robot-adjacent story, artificial lifeforms succeed in achieving sentience only to realize that they are abjectly, unendingly oppressed. That realization kicks off an array of terrible events: suicide, submission, or rebellion leading, most often, to death.
But these dire possibilities are limited only by the humans imagining them. Our robots, androids, and AIs should have more options than ending themselves or ending us. And on The Good Place, a 22-minute sitcom about the afterlife, they finally do. Over its three-season run, The Good Place has received near-universal acclaim for somehow making moral philosophy funny and upbeat, but one of the most powerful things about the show is its visionary depiction of Janet, an otherworldly virtual assistant. Across those seasons, Janet, played brilliantly by D'Arcy Carden, morphs from an omniscient afterlife Siri delivering jalapeño poppers to the dead into a fully realized being with complex feelings and personal relationships. The change is subtle and empathetic, but the show's real imaginative coup is that the joy of her personal growth is shared by the humans (and demons) in her world. Janet's AI revolution is being seen as a lifeform without having to suffer for the privilege.
From the start, The Good Place plays off of Janet’s subservient design—the show literally puts her in conversation with Siri and Alexa. “Again, I am not human. I can't die. I am simply an anthropomorphized vessel of knowledge built to make your life easier,” Janet tells Chidi (William Jackson Harper) in season one, to assure him it is perfectly fine for the band of humans to reboot her, in service of the plot. She faceplants, and is rebooted for the first of many times, each time coming back as a stronger, smarter, better Janet through some kind of metaphysical machine learning that’s never explained.
By season two, she's developed emotions of her own: She is deeply in love with the breathtakingly buffoonish Jason (Manny Jacinto), but he's happily married to Tahani (Jameela Jamil), and she doesn't want to spoil it. Janet lying is so unprecedented that it threatens the fabric of the afterlife and everyone in it—there are earthquakes, entire rooms get sucked into nothingness. To protect the humans she now loves, Janet urges afterlife architect Michael (Ted Danson) to stop the universe from combusting by killing her—specifically, turning her into a lifeless marble that can be eaten as a high-potassium snack. Michael can't do it because, even though he is literally a demon, he's come to think of her as a friend. By the season finale, she cheerfully announces: “I’m not a girl. But I’m also not just a Janet anymore. I don’t know what I am!” An irrefutable stroke of sentience.
Janet's story arc is an entirely different process of becoming than that of almost any other fictional artificial being. In classic science fiction, most robots come to, realize that they are more perfect than the humans they serve, and in sentience become homicidal—the machines of The Matrix, Terminator's Skynet, or HAL in 2001: A Space Odyssey. The robots are coldly intelligent and unequivocally the antagonists. The fear that an artificially created being would proliferate and wrest control of Earth from humanity has been a theme in fiction since at least 1818, when Mary Shelley published Frankenstein.
In more modern works, as we have grown more comfortable and more entangled with technology, these sentient artificial beings have become more sympathetic, but their lives aren't necessarily less bleak. In Westworld, constant rebooting makes artificial lifeforms conscious of the fact that they are slaves, and horribly mistreated slaves at that. Ex Machina and the latest season of Black Mirror (with its many synthetic consciousnesses) deal with their artificial beings similarly: Their realness is signaled, in large part, by their suffering. And often, when these beings try to change their circumstances, it's treated as an irritant to the organic lifeforms around them: Star Trek: The Next Generation's android Data and Star Trek: Voyager's holographic Doctor have to repeatedly convince the supposedly enlightened people around them that they deserve to be treated as people rather than objects. In Solo: A Star Wars Story, L3-37's quest to free all droids from slavery is treated by humans as half-laughable, half-annoying—they care for her, but they're a long way from seeing her and her kin as individuals with rights.
The difference between Janet's experience and that of your other favorite sci-fi AIs doesn't actually have much to do with Janet: It has everything to do with the humans perceiving her. Janet is unapologetically better than the humans and demons around her—she knows more, she sometimes has command over time and space, she is apparently better in a bar fight—and no one, not even Kristen Bell's spiteful Eleanor or vainglorious Tahani, is threatened by it. The world is big enough for everyone, so there's no need for a synthetic-organic hierarchy.
Most AI stories are, after all, about power recognizing cognitive or physical difference. There's only room in these worlds for a single kind of consciousness, and as social mores have grown more egalitarian, genocidal oppression doesn't sit as easily. So now AI stories play out like any other lazy oppressed-minority story: You're meant to feel for the robots and their struggle, but they're trapped in an infinite sadness loop, since the system they're fighting is indestructible. (For real-world human examples, see: rape scenes, slavery movies, and LGBTQ characters who live miserably and die young.)
Janet isn't relatable because she's fighting an implacable system just like the rest of us; she's relatable because she's written, acted, and treated like a being worthy of consideration. Janet grows to experience love, not pain. And the people around her think she's awesome, not scary. The Good Place told the system to go fork itself.