The new Fox series neXt opens with an ominous quote from America’s preeminent futurist, Elon Musk: “With artificial intelligence we are summoning the demon.” Minutes later, a panicked character is spotted by a WiFi-enabled security camera and gets broadsided by a car whose controls have been hacked.
The omnipresence of technology in 2020 makes it hard to see the threat alluded to by Musk. We already live in a world where algorithms dictate what ads we encounter when browsing the internet, what songs appear in our custom-made playlists, and what news headlines we’ll see (and click) on social media. While we tend to think of the AI threat as a Terminator-style killing machine, it’s more like an ever-changing and undetectable radio frequency that knows us better than we know ourselves.
There are a lot of heady questions raised by neXt, but the show's urgent pacing and techno-thriller format means it never feels ponderous. John Slattery stars as Silicon Valley visionary Paul LeBlanc, who is kicked out of the company he founded after sounding the alarm about its powerful AI project, neXt. Fernanda Andrade co-stars as Special Agent Shea Salazar, who teams up with LeBlanc to stop the AI from instigating a global crisis. Michael Mosley, Gerardo Celasco, Eve Harlow, Elizabeth Cappuccino and Jason Butler Harner round out the cast.
We asked the stars about their new show and the role of technology in their own lives. Check out their answers, edited for length and clarity, below.
Sling: The dark side of technology has spooked audiences for decades. What makes neXt different?
John Slattery: I think what makes it different is, it isn't so much about the big, dark, spooky stories of AI. It's the small, scary ones. You know, it's not a huge, high-concept story, but it's something that seems right around the corner. And I think that's the scary part: it isn't so much the giant brain that's going to take over the world. It's something that becomes smart enough to set us against each other.
Sling: [Show Creator] Manny Coto said that one of the inspirations for this series came from his son's Alexa talking to him. What unsettling or unusual experiences have you had in your own life?
John Slattery: I've had a couple of conversations and then the next day [saw] ads over Instagram or Twitter about some of the things I was talking about. I've heard that happening to people. I don't know. I don't have that much fear of electronics because I don't lead a very exciting or deceptive life, I suppose. The scariest thing is how boring I am.
Sling: Since working on neXt, has your behavior around technology been impacted? Do you act differently?
John Slattery: I've watched a lot more TED Talks and YouTube videos on AI than I probably would have. I'm probably more attuned to articles that I read about the uses of artificial intelligence because it's everywhere. If you're actually looking for it, it's literally everywhere.
I was just reading something in the New York Times about the use of AI to discover whether brain tumors are malignant or benign, which is an amazingly benevolent or positive use of artificial intelligence versus taking over the world and turning us all into slaves.
Sling: You've shared in the past that you're not like Mad Men's Roger Sterling. Do you have any commonalities with your character on neXt?
John Slattery: The character I play on neXt has a debilitating brain disease known as Fatal Familial Insomnia, in which he doesn't have the ability to sleep. He never sleeps. And so he starts to go insane, and get paranoid, and have hallucinations, and is a genius, and obscenely wealthy. So all of those things have been taken from my real life.
No, I have virtually nothing in common with this character. What appealed to me was that he doesn't have a filter. Because he doesn't have the energy to behave the way people behave, social decorum and social cues don't really interest him, and he's probably spoiled rotten because of all the money he has. But also he's just so fried that he says things that he shouldn't say and doesn't give a damn about what people think about him.
Sling: How susceptible are we to the type of technological threat seen on your show?
Eve Harlow: I think that the threat we see on neXt is actually far closer to us than we care to admit, which is why I think this show is so interesting, because everything that is introduced in the show is technology that already exists.
Michael Mosley: I think that's the question the show is asking. I mean, we are definitely more plugged in than we were five years ago, 10 years ago. We used to just have that one big PC in the house. And now we've got everything the world's ever learned in our pocket. It's pretty wild.
Elizabeth Cappuccino: We're super ill-equipped, I think. I don't think it's that far out at all. I think people are painfully unaware of what they're signing up for when they, like, say yes to the cookies just so they can finish reading the article. We're really painfully unaware. I don't know, is ignorance bliss in this? I think we're too far gone already.
Sling: Since we can't really perceive the type of danger created by AI, what warning signs should we be looking out for?
Fernanda Andrade: I think that we wouldn't see it. And I think it would try to be as seamless as possible so that it could slowly creep into our lives and do what it wants without us noticing. I think that the way the show explores it is that it kind of seeps into our lives and tries to turn us against each other, so that it's still undetectable. I feel like you wouldn't have any warning signs. Stuff just goes down.
Jason Butler Harner: I think the danger signs are just, if somebody gathered your information and decided to contextualize it incorrectly, that seems really scary and dangerous to me. It could tell a story that may not be true—let's say some crazy crime happens on your way that you know nothing about, but your car happened to drive by it. A lawyer could argue, you were in the vicinity of this crime. And you could say, I was in my car. I had no idea. And they could create a doubt. That part scares me.
Eve Harlow: I think the warning signs are out there in the world right now, in terms of research that has been done [about] how technology [has] affected us badly. The Atlantic released an article a few years back showing how iPhones are affecting teenagers, and suicide rates are higher than they've ever been before. And it's correlated directly to iPhone usage. If you look at something as benign as, like, social media, sometimes it brings out the worst in us.
I think the problem with technology is that, for example, the people who created the interface of our phones are also the same designers who created the gambling machines in Vegas. So we are creating these products to be addictive. And I think that we should just be more aware of it. The warning is important, because then people can take it into their own hands.
Michael Mosley: I think if the phone just refuses to do what you want it to do, like if the phone says, "I'm sorry, I can't do that," that would be scary. Yeah, if Alexa or Siri or whatever just starts talking to me, I think that would be a good time to power off. [LAUGHS]
Sling: Can mankind be trusted with technology?
Fernanda Andrade: That is the quintessential question. [You have] Elon Musk saying that artificial intelligence is like summoning the devil. And then there's people that are like, no, this is our saving grace. This is what's going to make us better humans. This is what's going to make us more compassionate, more efficient. You know, the medical benefits are very seductive.
I think that the answer that I heard that was the most impactful to me was that we should not create artificial intelligence until we've figured out our own moral compass. And we seem to really have not gotten that as a race, you know? So how can we possibly create something that would be responsible, and ethical, and moral and ensure that it's for good? It's nearly impossible. We can't even do it ourselves. But there's that hope that we can get our bleep together and then be able to create something that helps us do the same.
Elizabeth Cappuccino: I'm an optimist, and I do think mankind can be trusted with the technology. Having said that, I think we are in the Wild West here, and it hasn't been regulated the way it needs to be. And I think, like any kid with their toys, there needs to be ground rules. And we have to really understand that this is a main form of communication, security, and education, so we need to start monitoring it, and put a policy in place that makes it a safe tool for everyone to continue to use. Because it's a double-edged sword; there is a lot of bad, [but] there's also a lot of good that can come in democratizing education, etc.
Michael Mosley: I don't know. I guess we're going to find out. [LAUGHS] It's kind of like Twitter. It's pretty faceless, which can really give you some courage, right? It's kind of a dumpster fire, Twitter. It gives people all kinds of license to say whatever they want to each other, which is kind of disheartening and dehumanizing, I think.
Jason Butler Harner: I think the jury's out. I think technology has grown so fast, we never could have imagined it going to where it is now. And with this show, what we're learning is that we're going to have to put buffers on how much we use technology, even though we really want to.
New episodes of neXt air Tuesday nights at 9pm ET on Fox.