Tristan Harris and Liv Boeree talk Game Theory: The Driving Force Behind Humanity’s Biggest Issues

What do climate change, political polarization, nuclear proliferation and the race to AGI have in common? Tristan Harris, founder of the Center for Humane Technology, and Liv Boeree, poker champion and risk expert, explore the misaligned incentive structures that underlie most of the planet's biggest issues.

This talk was recorded at Summit At Sea in May 2023.

About the Presenter

Liv Boeree, Game Theorist & Filmmaker, Longview Philanthropy

Liv Boeree

One of the world's most successful poker players, turned researcher and educator on game theory and risk.

Tristan Harris, Co-Founder & Executive Director, Center for Humane Technology

Tristan Harris

The tech ethicist working to keep extractive technology from hijacking our brains, behaviors and beliefs.

Transcript

Welcome to the stage co-founder and executive director of the Center for Humane Technology, Tristan Harris, and game theorist and filmmaker, Liv Boeree.

[applause]

I think what we wanted to do is use this time together really interactively. We have a rough plan of things we want to talk about, but it's really this: how do we get better at pointing our attention not at bad guys, not at the bad CEOs, not at the bad corporations — and there are bad CEOs and bad corporations — but at bad games? Can we point our attention at a bad game and see if that is the deeper structure underneath why many problems are getting worse? Because that's been a big transformation for both of us. You can speak to it first — I'm happy to speak to my experience of how I started seeing that with social media.

Yeah, I think the first time I noticed these kinds of dynamics emerging in my own life was with poker. When you're playing a game of poker, the reason the game works is that everyone is in agreement about what the rules are — what is allowed to be done versus what isn't.

What I noticed with poker when I first got into it was that it was very much an art more than anything. No one understood the mechanics of the game precisely, and the best players in the world were very intuitive, hustler types who were playing on their street smarts. But through the march of technology, when we got online poker and the ability to save your hands in digital format after a playing session, it created the incentive for everyone to start building software — analysis tools to examine all of this information. And that created an arms race dynamic: if you didn't use the latest software to examine your hand histories or study the game-theoretic optimal solutions, you would get left behind by everyone who did.

At least in poker, this arms race dynamic created bad outcomes in some ways, because it took away a little bit of the magic of the game. It has become very scientific, and you don't stand a chance if you don't study with these analysis tools. But overall, the externalities of this competitive situation are fairly limited. In the other examples we're going to talk about, the externalities are strictly negative. And it's the same dynamic: a combination of short-term incentives acting on individuals that, in aggregate, are misaligned with the good of the whole.
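As a rough illustration of what the "game-theoretic optimal" study software actually computes, here is a minimal sketch under toy assumptions: instead of real poker, it uses matching pennies, a tiny zero-sum game, and fictitious play, a simple learning procedure in which each player repeatedly best-responds to the other's observed action frequencies. The game, payoffs, and iteration count are illustrative choices of this example, not anything described in the talk.

```python
# Row player's payoff matrix for matching pennies:
# row wins (+1) on a match, loses (-1) on a mismatch.
PAYOFF = [[1, -1],
          [-1, 1]]

def best_response(opp_counts, payoff_rows):
    """Pick the action with the highest expected payoff against the
    opponent's empirical action frequencies observed so far."""
    total = sum(opp_counts)
    expected = [
        sum(p * c for p, c in zip(row, opp_counts)) / total
        for row in payoff_rows
    ]
    return max(range(len(expected)), key=expected.__getitem__)

def fictitious_play(rounds=10000):
    # The column player sees the negated, transposed payoffs (zero-sum game).
    col_payoff = [[-x for x in row] for row in zip(*PAYOFF)]
    row_counts, col_counts = [1, 1], [1, 1]  # one "virtual" observation each
    for _ in range(rounds):
        r = best_response(col_counts, PAYOFF)
        c = best_response(row_counts, col_payoff)
        row_counts[r] += 1
        col_counts[c] += 1
    total = sum(row_counts)
    return [n / total for n in row_counts]  # empirical mixed strategy

# In matching pennies the equilibrium is to mix 50/50, so both empirical
# frequencies should hover near 0.5 after enough rounds.
freqs = fictitious_play()
```

Real solvers tackle vastly larger game trees with heavier machinery, but the principle is the same: grind the game down toward unexploitable frequencies, which is exactly why not studying with such tools leaves you behind.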

So what I hear you saying is: there's this game of poker, and people are developing strategies against each other. Suddenly a new group comes along — say, an AI-powered poker player — and they start to out-compete everybody else. The other players can't win by simply refusing to do the AI-powered thing; they have to also analyze the data, build a predictive model, and get better and better to match that power, whether they want to or not.

Whether they want to or not. And so it becomes this race to the bottom, a race to the edge, in which we end up playing a game that isn't even fun anymore — until you get to a new phase and you're playing a better game. I think ultimately what we want to do is get better, as societies, at noticing these bad games, so that we don't spend all of our energy railing against that one poker player, being upset at him, and instead realize that we need to change the game — so that either everybody gets those AI-powered poker capacities, or no one does, or something like that.

Another example of this I like is these beauty filters that are on Instagram and TikTok, everywhere. Has anyone here ever used them, tried them, felt like they noticed the picture and then were like, "I look really good in this, I'm gonna post it"?

The reason this is a similar dynamic — this Moloch dynamic; we haven't explained why it's called Moloch yet, and we should — is that the world of influencing is incredibly competitive these days. Social media influencer is actually the number one most aspired-to career among Gen Z, at least in the US and the West.

And what is the best way of getting likes and follows? Posting hot pictures. By and large, if the average brain sees a side-by-side, it tends to click on or notice the one with the beauty filter on. So there's this massive incentive pressure for individual social media influencers to use these things, even if they know they're being a little bit inauthentic to their followers, and even if they know the filter isn't good for how it makes them feel. When I've used these things, I'll take a picture that I love — "Damn, that's a great picture" — then apply the filter, compare it to the original, and I no longer like the original picture. It's horrible for your mental health if you keep doing this continuously. Not to mention, if you're on social media seeing everything and you can't tell what's real and what's not, but everyone seems really hot, it's going to make you feel bad about yourself.

But everyone is trapped in this situation where, well, I might as well use them because I know that everybody else is, and if I don't, I'm not going to be competitive against my peers. So yeah, another example of Moloch.

To deepen that example — since we do all this work on social media, we hear from so many kids. There's like a teen girl who will say, "I'm worried my boyfriend will break up with me if I stop using the beautification filter because he's hooked on me based on a false impression of what I look like." So you go from these people who are caught in a race to add beautification filters...

On the Social Dilemma side — how many people here have seen the Netflix documentary The Social Dilemma? Okay, awesome, a bunch of you.

So just to give a little backstory on that. In 2013, I was at Google, and I was starting to notice that tech was going in this worse direction — it seemed to be pushing society toward being more addicted, outraged, distracted, polarized, and validation-seeking. It's like, what is that? There's this weird invisible force pushing in that direction.

I remember just sitting there, and all my friends in college at Stanford were the co-founders of Instagram, and they built these things. We were all in this lab that we've talked about many times together — there was a class called the Persuasive Technology class with BJ Fogg, and we were all learning these techniques about how do you influence people's psychology. They were building that into the way Instagram worked.

So these apps — when you're talking about the influencers who have no choice but to add a beautification filter — well, the apps have to compete too. Mirror, mirror on the wall, which of these apps makes me look best of all? If Snapchat adds a beautification filter, TikTok has to follow. And Liv has made a great video you should all Google later, called the beautification wars, or "Moloch Beauty Wars." In your video, you talk about how TikTok was even found to invisibly beautify your photos, without asking people, by a few percent — 3%, something under 5% — because the point is that we prefer the app that makes us look the best.

So you're talking about it first on the user side, that users are caught in this arms race. Then there's the people making the things. Now, can Instagram or Snapchat choose to take away the beautification filters now that they're there? They're just going to lose to TikTok or the other one that keeps adding them.

So that's where this phrase came up — in the presentation I gave in 2013 at Google that went viral and caused this stir in the company — the phrase "the race to the bottom of the brain stem." It's a race for who can reverse-engineer more secret backdoors into influencing the human mind. If you invent another one, like pull-to-refresh, you're just going to out-compete the guy that doesn't do the pull-to-refresh slot machine. And if you innovate the way TikTok did — a full-screen video takeover where you swipe and it takes over the whole screen — you win. Instagram has continuous scrolling, where you can be halfway between one photo and the next, but TikTok is winning because they created this format that snaps each video to take over the entire screen.

So they're figuring out more and more of the edges and features of the geometry of what keeps us engaged. But in net, in aggregate, that produces this more addicted, distracted, polarized, narcissistic, misinformed society.

I think it's worth mentioning the reason: what rewards the companies directly — their goal — is to generate as much ad revenue as possible, and they get that by maximizing user minutes. In other words, engagement. But engagement is only loosely correlated with what is actually good for people. If someone in their highest, wisest self were to ask, "What do I want in life?" — yes, I want to be entertained and have a nice time. But engagement only partially delivers that, and in some cases it's completely decoupled from it, or even inverted. It's like the rat on heroin — technically the rat will keep pressing the heroin button because it feels nice, but we know, and probably the rat's higher mind, if it had one, would agree: "This is not what I want to be." But it can't stop, because the button is so effective.

These technologies — they're throwing more and more resources into winning this game, not because they want to turn everybody into zombies, but because the nature of the game demands it.

Let's also talk about some other examples. Another one is loneliness. We have a massive loneliness and mental health crisis, right? But if I don't maximize engagement — you sitting by yourself on a screen, doom-scrolling — I'm going to lose to the guy that will. So that's another tragedy of the commons where we get mass loneliness everywhere around the world because everyone is caught in this bad race.

Another one is ego-pumping. Literally TikTok and Instagram are in a competition for who can promise the most visibility. If you post the same video on Instagram as you do on TikTok, it's a race for who can promise more people will see it. The way TikTok has won that race is they designed it so you scroll really fast through a lot of stuff — the number of swipes per hour on TikTok is going to be a lot faster because they engineered it for that rapid swiping, which means the view count for every video is going to go up a lot faster than on Instagram, where people are scrolling more slowly through a long list of content. So the teenager who just posted their video is like, "I reached a million people today" — but really you reached like this little micro part of their brain for a microsecond. This number is ego inflation, and they're competing to boost that number.

Now, the reason I wanted to say this is because it says something about where you intervene in a system. Is the solution to this problem: let's go to Washington DC and regulate content moderation? Let's regulate — this content is good, this content is bad — and if we just got the bad content off of TikTok, Instagram, whatever, we'd somehow live in this utopia where social media is good for humanity? I want you to see that the reason that solution is not good is because it doesn't deal with the race dynamic.

I just wanted to say that because the point of this, I think, is to train our attention on how we solve problems more effectively in the world. Instead of getting upset at one company screwing up teens' mental health — like Snapchat is doing a bunch of negative stuff — we could put all of our energy on that, or we could realize that Snapchat is caught in a bad dynamic. So I think that's maybe a helpful place to then introduce Moloch, which is: how do we all get good at slaying Moloch?

So what is Moloch? Well, let me give a little bit of the history of where this term comes from. It's M-O-L-O-C-H, by the way. It's from an old Bible story. There was this war-obsessed cult in Canaanite times, apparently — hopefully the stories aren't true — who were so determined to win wars that they started idolizing this god-demon effigy that looked like a bull, called Moloch. And they would sacrifice their children to it — make the ultimate sacrifice, the thing they loved the most — in the belief that it would bestow upon them all the military power they could ever want to win their wars.

From there, in more modern terms, it has become sort of synonymous with this idea of unhealthy game theory, where the design of the game incentivizes people to sacrifice more and more important values — the things that they hold dear — in order to win some short-term prize. So when the social media companies are literally sacrificing our children for short-term gain, for more profit, for the quarterly numbers going up, it's quite literally that story coming true.

Or take an environmental, tragedy-of-the-commons-type situation. It's not that cattle farmers in Brazil necessarily want to destroy the Amazon rainforest. But they are trapped in their own game: this is the easiest way to feed their families and expand their business. They can turn that patch of woods into a pile of money and put their cows on it. And even if they don't want to, they feel like they might as well, because the farm down the road will just do it anyway.

So it's this idea of sacrificing other things in order to optimize for a narrow metric. That's kind of what Moloch does. If Moloch had a personality, it's like that brain worm that gets into people and makes them feel like they have no other option but to sacrifice the really important stuff in order to win.

Another example: there's a great book called Salt Sugar Fat — how many people here have heard of that book? Actually, a lot of people. It goes through the processed food industry, and here's another example. We have an obesity crisis, a diabetes crisis, a huge health crisis. We could say we need to educate people about what they eat, and that's obviously one thing we definitely need to do. But also: what are the driving forces? These companies are caught in a race to create addictive foods, just like social media companies are caught in a race to create addictive apps.

It turns out that just as pull-to-refresh, narcissism loops, the removal of stopping cues, and infinite scroll are the mechanisms that make social media addictive, in food it's just salt, sugar, fat — ever more clever combinations of salt, sugar, and fat. Those are the backdoors into the human gustatory system. And as the book shows, the companies are literally in an arms race to add more and more.

It's actually really interesting, because the book opens with this famed meeting — I think it was in 1999 — where the food scientists and health scientists from the major food companies, like Tyson Foods and General Mills, all got together at a hotel, I think in Michigan or Detroit. They actually acknowledged that there was this collective "climate change of bodies" happening because of their arms race. And they asked: can we coordinate? Can we actually agree to set limits on salt, sugar, and fat?

The opening chapter of the book goes into it — essentially the talks failed because the leading company that was most profiting from this said, "We're just giving people what they want." And the other companies were only going to agree to something if the leading big company were to change course.

I think it's a really good example of what you see across everything — deforestation, climate change, air pollution, obesity, social media polarization. A lot of people here probably work on protecting and strengthening democracy. How do you do that when it's an arms race to be the best division entrepreneur you can be? The outrage engagement economy rewards you the better you are at identifying a cultural fault line in society and creatively adding inflammation to that fault line — and you're in an arms race to be a better division entrepreneur than the other guy. So you can see how the apps competing for engagement then push the people who are now influencers — because it's also colonized the meaning of a future career — into that same race.

I think one thing we should double-click on as well is: is Moloch simply capitalism, or is it something a bit deeper than that?

My answer to that is that it's something much more fundamental than capitalism. Obviously capitalism, as a system that relies heavily on competition, is vulnerable to these arms-racy, Moloch-y dynamics where it's hard for people to coordinate. But many things have happened within capitalism that have drastically improved the world as well. Capitalism is a tool that can be used for good or for bad. That's why it's important to understand that Moloch is something deeper — forces of economics and game theory that can manifest anywhere, and if a game is not designed in a conscious, smart enough way, it can become misaligned with the good of the whole.

A good example is the first human tribe or civilization that adopts the plow. A tribe that adopts the plow is going to gain efficiencies and gain extra caloric surplus in a way that they're going to out-compete the other tribes. Once they have more surplus, they're going to have more people, a bigger population, more efficiency, and they get to be more powerful. When there's finite resources, they are the ones that win in that competition against the other tribes. And that's before capitalism — that's just power advantages.

But it goes everywhere. For example, for those who care about animal rights: which civilizations are going to out-compete the others? The ones that see animals as resources to extract the meat and food and transport from them, or the ones that see animals as sacred? You can have values and say animals are sacred, but your society will probably get out-competed by the society that actually sees animals as resources.

Daniel Schmachtenberger, who I'm speaking in his place today, would give the example: can you have animism and believe that animals are sacred? You could believe that, and then let's imagine the plow comes along, technology changes your society. Now you've got to beat an animal all day long. It's hard to yoke an ox, beat an animal, and believe that animals are sacred after that becomes the basis of how your society and civilization works. Then it becomes the new normal. People forget — they don't even question whether that's a good thing to do or not. And you're kind of there again with the same thing with social media. We now accept it as if it's just part of the fabric of society, but it's just become part of the game, the stack of power in the Moloch competition.

One thing I think we should dig into a bit as well is the role of AI in all of this. There's the classic AI arms race that is playing out — the race to artificial general intelligence, where arguably the companies are incentivized to go as fast as possible, even if they know the big picture is probably not good for humanity if everyone goes as fast as possible without paying attention to safety. At the same time, each assumes that if they don't do it, someone else will — and that they will be better stewards of this thing than the other company or the other country. So they might as well. There's this competitive pressure that makes safety go by the wayside.

But also the thing about AI is that it's an incredibly universal tool. Anything it can be used for, it will be used for, as long as there's the incentive pressure to do so. We're seeing more of these narrow AIs that can be used for all kinds of different industries, different games. Some of those games are good, which is great — that's why we might be curing cancer in the next 10 years, and all these other wonderful things on the horizon.

But some of these games are very bad. If an individual deforestation company — farmers that want to cut down the rainforest — now builds an AI that helps them figure out how to cut it down faster, that would be bad. And mining and extraction now use AI and satellite imagery, ground-penetrating radar with AI doing better signal detection to figure out how to do extraction better. So AI will supercharge all of these win-lose games. If I don't race to extract the copper from the mines in Chile and forget the externalities, I'm going to lose to the guy that will. But now AI is going to come in and supercharge every one of these win-lose games — optimizing supply chains, optimizing extraction, optimizing social media for addiction, optimizing a fake relationship on social media to be as engaging, flattering, addicting, and sugary as possible.

So we've got about 10 minutes left. I think we should talk about some examples where we have actually successfully gotten ourselves out of a Moloch situation — where we've redesigned a game to make it healthier, or we've all collectively figured out a way to just stop playing it entirely. You've got a really fun example from Colombia.

Yeah, there is this traffic intersection in Bogota, in a city that had some of the worst traffic in the world. And of course, in that game, if you don't keep racing to do the amoral, don't-look-past-your-hood, go-as-fast-as-you-can thing, you lose — so everyone just races for the micro short-term win for themselves. It was just the worst traffic intersection in the world.

The mayor of Bogota was sitting there figuring out: how am I going to solve this problem? How do we create coordination? Coordination is the answer to Moloch — how do we coordinate a collectively better game, a better outcome for all of us to play? And he found this really creative solution: he hired a bunch of mimes. Mimes — not talking, just making fun of the situation. So when someone would inch their car forward or cut a weird turn around someone, a mime would swoop in behind the car and make fun of them. And when someone was jaywalking, they would do something funny behind the jaywalker.

[laughter]

And so everyone went from this ruthless Moloch mindset of "I just need to get where I'm going" to — suddenly the experience shifted into a more expansive place of "we're laughing with each other about how we move through this space." Because there's this thing with traffic: as the traffic gets worse, people get more and more Moloch-like — "I'm just so damn tired of sitting in this traffic, so I'm going to keep doing the amoral thing." And he figured out how to get this traffic intersection to play a different game. I think that's a really beautiful example.

Yeah, it's almost like showing people a way to transcend through humor, which is a very good way — helping people not take a situation so seriously. In some cutthroat game, if you can just have that moment of taking a breath, stepping outside, looking at it, and going, "Huh, that's actually kind of ridiculous, what we're all doing" — that gives people a chance.

The other way you can obviously get people to wake up is through shaming. And I think there is sometimes value in naming and shaming when someone very clearly defects — are you all familiar with the term prisoner's dilemma? Yeah. So essentially, Moloch is a multi-way prisoner's dilemma. The classic instantiation is just two people: they'd be better off if they cooperated, but because of short-term individual incentives, from each individual's perspective it's better to defect. And Moloch is basically that, scaled over hundreds or thousands of people.
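The two-player structure Liv describes can be written down in a few lines. A minimal sketch, where the specific payoff numbers are illustrative assumptions of this example rather than anything from the talk:

```python
# payoff[(my_move, their_move)] -> my payoff
# "C" = cooperate, "D" = defect; the numbers are illustrative.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation: both do well
    ("C", "D"): 0,  # I cooperate, they defect: I get exploited
    ("D", "C"): 5,  # I defect on a cooperator: short-term win
    ("D", "D"): 1,  # mutual defection: everyone worse off
}

def my_payoff(me, them):
    return PAYOFF[(me, them)]

# Defecting is individually better no matter what the other player does...
assert my_payoff("D", "C") > my_payoff("C", "C")
assert my_payoff("D", "D") > my_payoff("C", "D")

# ...yet mutual cooperation beats mutual defection for both players.
assert my_payoff("C", "C") > my_payoff("D", "D")
```

The "multi-way" version keeps the same shape: each additional defector still gains individually while total welfare falls, which is why changing any one player's choice in isolation never fixes the game.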

I can give another example. For a long time in interpersonal relationships — how many people here know Marshall Rosenberg and nonviolent communication? A good chunk of you. So there was a time, before Marshall Rosenberg invented nonviolent communication, when people got into fights and just said the nasty thing that came to mind directly to the other person. And what does the other person do? They say, "That hurt," and then they say the nasty thing right back. That's a downward spiral, where the short-term win for me is a net loss for the relationship.

Then Marshall Rosenberg comes along and says, you might think that's the only game two people can play when they're in conflict — if you lived in a world or time before people had figured out that something else was possible, you might think that's the only option. But he says, actually, if I name the feeling that I'm having — the technique is you say, "I noticed that when you said that, I felt this." You're not saying, "You did this." You're saying, "I noticed that when you said that, I felt this." And then, what did you feel? And so you keep going back and forth.

It's a different protocol. He's putting mimes into an interpersonal conflict. And I think what we need to get really good at — what we're kind of exploring — is how we all become the Marshall Rosenbergs and the mayors of Bogota who invent ways to transcend the current game, so that we can all play a better one.

I've personally found it helpful. I used to be a pathologically competitive person, which is probably why poker appealed to me so much — I could sit there, get into someone's mind, and figure out how to beat them. It was a boys' game, and I could beat the boys at their game. But over time, as I matured, I can now look back at that old self and see that while I was very good at winning that direct thing, I was not a very happy person. I was often quite a jealous person. If someone I considered a peer or competitor won a tournament I felt I should have won, I would feel the emotions that overlap with Moloch: jealousy, scarcity, narrow-mindedness.

So I started thinking, okay, if Moloch is the god of unhealthy competition, lose-lose games, what's the inverse of that? And the best name I can come up with is a god called Win-Win — who loves a bit of competition, who is not like, "Oh, we must all just be Kumbaya, hold hands all the time." Yes, lots of coordination is probably the best way. But it also allows for spaces of competition in a conscious way. We can play a win-lose game as long as we're aware of the externalities, and that everyone's having a good time playing the game, and there's no one on the periphery getting hurt by our slinging of weapons or whatever.

So I just wanted to put that out there as a thing to think about. As you're trying to raise capital, as you're trying to win in your particular industry, have those moments of checking in: is this industry truly a win-win industry? It's okay if there are little pockets of zero-sumness, but overall, is the world better off for that game having been played in the first place?

I think the ideal form of transforming a game is actually figuring out an entirely new win-win game rather than putting a bottom on the race to the bottom. But there are different strategies.

One interesting example: the Sabbath. If Daniel were here in my place, he would offer one interesting interpretation of the Sabbath — why is there a Sabbath? Taking one day off completely from work, from technology. In fact, I think there are something like 39 categories of forbidden work on the Sabbath. Why is it such a serious deal? The answer is that otherwise the seventh day just becomes one more place to get a relative advantage over the other guy — versus everybody taking that one day to have Shabbat, to step back from that narrowness, that scarcity, into a more abundant place. It's an interesting interpretation of religion putting a binding on what would otherwise be a race.

One of the other inspiring examples, from my chief of staff, is actually the 1995 Protocol on Blinding Laser Weapons. This was a moment when people realized a future technology was going to become real — laser weapons where, if you point a high-energy beam at someone, it's enough to blind them. It's a really unique case where we all saw the technology coming, and before it was deployed — I believe it was the International Committee of the Red Cross that led the effort — the world got together and said, we want to ban that technology before it gets used.

I'm saying that because here we are with AI, and we're about to ship all these new god-like powers into the world. I'm giving a talk tomorrow on the AI Dilemma, and if you can come, I highly recommend it. We're really worried about all these god-like powers that are going to get out there.

One of the things about AI is it moves at a vertical curve because AI makes better AI. Intelligence makes better intelligence. They're literally using AI built on GPUs from Nvidia to design better chips and GPUs. So the thing is getting faster and faster recursively. And as we have something that's going to move that vertically and that fast, we need to get good at looking ahead at what we don't want to be the bad game that we're caught in.

Because we saw what happened with social media — we let it get fully entangled with our society, we let this game perversely screw up children, and then it created further games where, if an individual kid chooses not to use it, they just lose to the kids who still do. That's one of them, for the parents out there — that's why it's so hard. As a kid, if I choose not to use Snapchat, I'm just losing out on all the social gossip and the homework being passed around on Snapchat. I'm out of the game; I'm socially excluded.

So we were too late to the party on social media's Moloch dynamic. We're not too late to the party with AI. What we really need to do is become a Moloch-literate culture that sees this natively — that's how we talk, that's how we talk to each other — so we can slay these Moloch games before they get out of control.

Just another example on a really big scale — and I'm not saying we shouldn't try to ensure that Moloch isn't the force building AI, and instead have more of a win-win force building it. But here's a thing that gives me a lot of hope, because this is obviously a pretty dark topic: the future will ultimately be built by optimists. I think it's important to keep that healthy balance — be realistic about the nature of the problems we're facing and how very difficult they are, but also keep optimism alive.

I was recently just taking a deep dive into the history of nuclear war and proliferation, which is about as Moloch-y as it gets. But despite these incredible incentives to keep building more and more nuclear weapons and bolster your arsenals, and all these technological improvements which technically threaten the quasi-stable state of mutually assured destruction — even under all those situations, we managed to build treaties and third-party organizations like the IAEA to help these individual states, which are at loggerheads, coordinate for the greater good.

In some ways, we are okay. Things feel a bit sketchy right now, obviously, with the Russia-Ukraine war. But when I was born, back in 1984, there were 60,000 nuclear weapons on Earth. Sixty thousand. At the last count a few years ago, there were basically 12,000 — which is still a ton, an unacceptable amount. But that is a huge improvement, which from a Moloch perspective shouldn't really be able to happen, because these players don't trust each other. Something made them build those arsenals, but something also helped them roll them back. So it is possible to rectify even games that we are already entangled in.

We need a sort of double-pronged approach. But to Liv's point, we did it with nukes: we limited nuclear weapons to nine countries. And, I want to say, there were nuclear scientists who actually took their own lives soon after the birth of the atomic bomb, because they believed it was all over. There's a story — I think it was Feynman — riding in a taxi cab in New York City in the 1950s, looking at a bridge being built, and he's like, "What's the point? We built nukes, it's all over." Why build anything? Some of those scientists died because they saw the logic of Moloch.

So we're trying to do this weird thing here where we want to train ourselves to point our attention at this thing that often makes people feel helpless, because it's like, how are we ever going to change the game? But we've done it before. We also signed the Partial Nuclear Test Ban Treaty, and nations no longer do above-ground nuclear tests. And that comes from human-level trust.

One of the other famous stories — I know Zach Bell is here — we've heard from some people at Esalen. How many people here know Esalen in California? Of course, at Summit most people would know; most people in the world probably don't. Esalen used to hold these things called Track Two Dialogues, where KGB and CIA agents dropped acid in the hot tubs together, to create a different way of relating to the thing they were stewarding: the prospect of nuclear war.

[laughter]

How many people here know the film The Day After? Only a few. So this is actually really important. There was this film made in 1983 called The Day After, which was the largest made-for-TV film event in human history. A hundred million Americans tuned in on prime-time television to watch a made-for-TV movie about what would actually happen if there was a nuclear war. It follows some families in Kansas who live near the nuclear silos. They hear the alarms ring, they see their nukes launch from the silos, and they know there are 30 minutes until something happens.

At the time, there was this shared fate of nuclear annihilation, but humanity didn't really take it seriously; we put it at the back of our minds. The director said we were literally repressing it. What we needed to do, he said, was bring it from the back to the front, so we could actually stare face-to-face at what we were really dealing with.

A hundred million Americans tuned in, and apparently Reagan saw it. His biographer said he got depressed and cried watching the film. The film was also aired in the Soviet Union four years later, so when the Reykjavik summit between Reagan and Gorbachev happened, there was a shared image of "this is the catastrophe that no one wants." The guy who was in Reykjavik emailed the director and said, "Your film had a lot to do with this happening."

So that's a really inspiring story: if you can create a new shared reality about an omni lose-lose outcome that no one actually wants, you can change the course of how the game is going. We need that kind of thinking around all these games — whether you work on health, animal rights, education, social media, or strengthening democracy. What is the Moloch multi-polar trap that we're in, and how do we communicate a different game that we can all play?

I think we're near time. Yeah. Thank you. The only thing I'd like to add is my current favorite catchphrase: don't hate the players, change the game. So much of political polarization in particular is that we are being told to blame the enemy: "These are the guys causing climate change; these are the guys preventing us; these are the guys building some terrible new thing." All the while we do that, it gets harder and harder to coordinate, and we lose sight of the real enemy here, which is this Moloch creature.

That's why I think there's some real wisdom in that story from the Bible, from Canaanite times. It's the same thing: giving it a face and a personality makes it easier for people to point to it. If we're going to need a scapegoat, let it be the real one, because it's the real enemy.

Thank you so much. And tomorrow I'm also giving a talk on the AI Dilemma — if you want to go deep into how AI issues are playing out in this way, you should come. Thank you.

[applause]
