Big Technology Live: Alex Kantrowitz interviews Astro Teller, CEO of X, Alphabet's Moonshot Factory

Looking to cut through the hype surrounding AI and gain a nuanced understanding of how it can transform our world? On this edition of Big Technology Live, Alex Kantrowitz interviews Astro Teller, CEO of X, Alphabet's Moonshot Factory. Listen here as they talk about the transformative potential of AI in solving our generation's greatest challenges, from healthcare to climate change.

This talk was recorded at Summit At Sea in May 2023.

About the Presenter

Astro Teller, Captain of Moonshots, X

Oversees X, Alphabet’s moonshot factory for building magical, audaciously impactful ideas that, through science and technology, can be brought to reality.

Transcript

[applause]

Welcome to the stage, founder of Big Technology, Alex Kantrowitz.

Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation about the tech world and beyond. And we are here with Astro Teller, who's the Captain of Moonshots and the CEO of X at Alphabet.

Thanks for having me.

All right, now as you can hear we're doing this in front of a live audience here at Summit at Sea. And folks, I think you just clapped a little bit, but I really want you to be on the recording, so let's hear it. You gotta be loud. Doesn't it feel so good to be back in person again? For sure. Is this your first time doing something in front of a live audience? I've done it a few times but nothing as boisterous as this. This is amazing, so thanks everybody for coming.

[applause]

You know, Astro, as I was doing my research for our conversation, which is billed as maybe an optimistic look at AI coming from you, I was looking into your background and saw that your grandfather Edward Teller is the father of the thermonuclear bomb. Yes, it's going to be a slow conversation. I'm kidding.

So I guess a bit of a curveball at the beginning, but here we are. Is that weird? And also, what do you think about when you think about AI — what do you think about in terms of our ability to develop technology that could be quite destructive?

Let me give you two thoughts. I guess you can dig in further, but the first one is I have always been inspired by the idea that getting really bright people into an intense environment to work on something really hard that really mattered could have sort of profound positive impacts for the world. And I think the NASA Space Program, the Manhattan Project, Bletchley Park in England — there are lots of these examples. And that was one of the things that inspired me when I was young, and we'll probably get back to getting a group of people together to work on something hard in particularly hard ways.

But the other thing is, the nuclear bomb makes sort of a good headline — sort of the mushroom cloud, we all have emotions attached to that. But the emotions that we've attached to our fears, our frustrations, understandably about nuclear bombs, translated in the 60s and 70s into such a negative narrative about nuclear energy that we as a society completely missed the boat — pun intended. The disaster which is climate change right now would not be happening if we as a society had not let our fears about the first thing translate into an inability to use the upside of nuclear power to save us from what is now arguably the biggest problem in the world. So as we sort of move forward and talk about the technologies of the day, I would encourage us to think about that duality. Not that nuclear power has no downsides, but that if we weren't careful — and we weren't in that case — we missed a lot of the opportunity for the upsides.

Okay, it's an almost perfect precursor to a discussion about artificial intelligence, because AI can help our society in countless ways, and in fact it's already in place in certain ways helping us. But it also has this capacity for destruction. I mean, you have — I think it's an often-cited statistic — but one in ten AI researchers saying there's a chance that it could effectively turn civilization into paper clips. So my question to you is, when we create such powerful technology with the capacity for good but also the capacity for bad, what calculation do you think needs to go through our head before we decide to move forward?

Again, this is a very big conversation, so we may have to come at it from a couple different directions. But artificial intelligence, number one, is algebra on steroids, just so we're clear. It is a very big field of study, and when you get out a microscope and look down inside the computer, you cannot find the AI in there anywhere. It's just math. And sort of depending on how you measure it, humans have been working on artificial intelligence, by that name, for about 70 years, and it has been making progress that whole time.

So it's not like we were at zero and then there was a huge step function. The plane that flew you here flew itself almost the entire way using artificial intelligence, and we all rely on artificial intelligence every day whether we realize it or not. So this is not to say no to your question, but just maybe for us to set the table that this is not like the lights just got switched on where we're sitting there by the light switch wondering whether to turn them on. Do I think that things are picking up speed? Yes. But this is part of a much longer narrative, and I think that we need to be really thoughtful about how, as we develop any technology, can we get the most benefit out of that technology and at the same time, as wisely as possible, see potential downsides from that technology and then find ways to mitigate against them or corral them into places where they won't be a big downside for society.

So you're a professional inventor, effectively, who manages professional inventors. And I wonder, what is it about us, about humans, that we'll go forward and create things that we know have the potential for great destruction and kind of hope, kind of be optimistic that we're going to be using them for good?

So inventing sounds like a monolithic effort, but let me separate it a little bit into two different things. One of them is learning — the discovery of new knowledge. If humanity can't survive the discovery of new knowledge, I mean, I don't believe that. Maybe you do, but I believe in humanity. I think it could be bumpy at times, but I believe in humanity, and I believe we can survive discovering new knowledge. I don't want us to need to infantilize ourselves as a species by preventing new knowledge. That's the first half.

Now the second half of invention is what you do with the knowledge, how you instantiate the opportunity. A new thing, like let's say electricity — should we put electricity into the things around us? You can say sure, generically. But when you start to do it in specific cases, you can ask things like, who will benefit from this? Who will be harmed by this? You can say, almost for certain, with something like either electricity or artificial intelligence, we can't say for sure all of the ways in which this will play out. Great. So how can we sandbox this discovery, this invention process, so that as we try to instantiate it in products and services, we can put it out in the world before it's done — not to be irresponsible but to be responsible — and say to people, what do you think? How should we change this? What can we learn from this?

And if you'll allow the metaphor, you know, for a long, long time Waymo, the self-driving cars, which actually came from X — we had a person sitting in front, right by the steering wheel, with their hands really near the steering wheel, eight hours a day, just making sure that nothing bad happened. So the car was practicing driving itself, but there was a backup. I think there are lots of ways in which we can learn in the real world and do it responsibly by engaging the rest of the world in what's happening and getting that feedback, so that we can get these unforeseen consequences out into the light so that we can design around them.

So is there ever a moment where we say stop? Like, I think about the letter that Elon Musk and a bunch of others signed about, we need to stop researching AI — which seemed to be a bit of a pipe dream to me. But is there ever a point where we say, this type of stuff we shouldn't keep going with, or is it inevitable that we push ahead?

I can't speak for the whole world. I think the reality is that lots of people who signed that letter and lots of other people in the world are going to keep working on it no matter what. I think what's important — the only thing I can control is what we work on — and I want to work on what we're working on responsibly, so that we can get as much benefit to humanity as possible.

I think that the world is overrun with serious problems. I would put climate change at the top of that list, with nothing close to it. In second place I would put nuclear weapons, and I would put AI doing something particularly horrible for humanity a very, very distant third. Yes, I'm sure such a line exists. I'd argue that by the time we've gotten to that line, it will already be too late.

So this is actually me agreeing with you. I think way before that line, we should be saying, what are we doing? How are we doing it? And can we put intelligence into the things in the world around us in ways that benefit humanity? And how, as we're doing that, even if our vision is really well-honed to be in that positive for humanity — can we be on the lookout for downsides and get ahead of them? Because I don't want to ever get to that line. And I really think if we get to that line and then some half of humanity says, okay, we're out, the other half of humanity is just going to keep going. So I think we need to be worried and thoughtful and responsible about these things starting now, not starting when we get to that line.

So okay, we're going to go ahead and build AI. Do you think there is a similarly positive impact that artificial intelligence can have, the same way nuclear energy can have in terms of preventing climate change? Is there a way that AI can have that impact? And is it the AI that we've been developing all along — meaning the optimization technology, computer vision, stuff like that — or does this generative AI wave have a role to play in this world as well? And what at Alphabet X is now happening to tackle these issues?

So I mean, let's start with — you know, artificial intelligence is a really big basket of things. Many people may have heard a lot about large language models recently. That is a particular piece. So artificial intelligence has lots of baskets. Machine learning is one of those baskets. Deep neural networks are a subset of machine learning. Those will continue to have a lot of impact on the world for sure. But I'd rather focus — I think it's actually more productive to go one or two steps back up to machine learning in general and to say, how are we solving problems? We should be falling in love with the problems, not falling in love with the technology.

And so, for example, one of the projects that's near and dear to my heart that's at X — I think it's a really big issue in the world. The world's electric grid is by far the most complex machine humanity has ever built. And the people who run the grid all over the world, in their many separate pockets — obviously these are good people, they're trying hard, but it is very complicated. There are now all of these solar fields all over the world that want to jump onto the grid, all these batteries that want to jump onto the grid, all these electric vehicles that want to jump onto the grid, and the operators have no way of figuring out how to maintain their system.

Because here's the crazy bit: they don't know what their system is. If you go to a system operator and say, show me the digital map of where every inverter, every transformer, every wire is in your grid, they will just say, we don't have that. And that's why it takes them 10 years. When you wonder why there are 10-year waits in most states in the United States to get a solar field onto the grid, it's not because people don't want to put solar fields on the grid. They don't know what will happen to their machine if they plug that solar field into the grid.

So what if machine learning could help them to learn about their grid, virtualize their grid, and then answer in a minute instead of in a year or 10 years, what will happen if you plug this solar field into the grid? Think about the tsunami of renewables that are already waiting. They've literally already been built. They're just sitting there in the dirt — wind farms and solar panels — waiting because we don't have a virtualization of the grid. That is an example of what X is doing right now, using machine learning across the energy infrastructure to make the world better.
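As a rough illustration of what a virtualized, queryable grid model buys you, here is a minimal sketch in Python. This is not X's system; the feeder name, limits, and the crude screening rule are invented for illustration. The point is only the shape of the interaction: once the grid is represented digitally, a first-pass interconnection question becomes a function call that answers in milliseconds rather than a multi-year study.

```python
# Hypothetical sketch: once an operator has a digital model of a feeder,
# "what happens if we plug this solar field in?" can be answered with a quick
# screening query instead of a years-long study. All names, numbers, and the
# screening rule below are invented; real interconnection studies involve
# full power-flow simulation on the operator's actual system model.

from dataclasses import dataclass

@dataclass
class Feeder:
    name: str
    thermal_limit_mw: float   # most power the feeder's equipment can carry
    min_load_mw: float        # lowest demand, when solar export is largest
    existing_gen_mw: float    # generation already connected to the feeder

def screen_interconnection(feeder: Feeder, new_solar_mw: float) -> bool:
    """Crude first-pass screen: in the worst case (minimum load, all generation
    at full output) the surplus pushed back up the feeder must stay within its
    thermal limit."""
    worst_case_backfeed = feeder.existing_gen_mw + new_solar_mw - feeder.min_load_mw
    return worst_case_backfeed <= feeder.thermal_limit_mw

if __name__ == "__main__":
    feeder = Feeder("rural-feeder-12", thermal_limit_mw=10.0,
                    min_load_mw=2.0, existing_gen_mw=3.0)
    for size_mw in (4.0, 12.0):
        verdict = "passes" if screen_interconnection(feeder, size_mw) else "fails"
        print(f"{size_mw} MW of new solar on {feeder.name}: {verdict} the quick screen")
```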

And let's talk about the wave of generative AI, large language models. Where do you see the potential there?

So generative AI, as you've probably seen play out in the media recently, leans more into things like asking Bard to write you a poem, going to one of these image producers and saying, hey, make me a picture. That's real. That's going to continue to be a thing, but that's the tip of the iceberg.

So think about it this way. We're in the middle of a process of lifting up people, moving them away from the craftsman mechanical detail work of designing and making things, lifting them up into guiding computers who help them make things. So if you work at a car company and you have a car strut — you want it to be really strong when you pull it, but also when you smoosh it together it's got to be really strong, it has to be really strong in torsion as well, but you want it to be cheap to make and you want it to be as light as possible. So instead of designing what you think would be the best one, what if you went to a system that could try millions of different possible car struts, so many that it started to hill-climb in car strut design space? And you were watching it and telling it things like how much you value fast to make, cheap to make, low carbon footprint. And it comes out with a car strut at the end which is better than any car strut a human could have designed.
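To make the loop he is describing concrete, here is a minimal, hypothetical sketch: generate candidate designs, score each one against objectives the human has weighted, and keep whatever improves the score. The toy strut model, the weights, and the dimensions are all invented; a real inverse-design system would evaluate candidates with physics simulation rather than a made-up formula.

```python
# Toy hill-climb over a pretend "car strut" design space. The scoring
# function is a stand-in, not real engineering: it rewards hitting a
# strength target while keeping mass and cost low, with weights chosen
# by the human designer.

import random

def score(width_mm, thickness_mm, weights):
    """Lower is better. Crude stand-ins for strength, mass, and cost."""
    strength = width_mm * thickness_mm             # pretend stiffness proxy
    mass = width_mm * thickness_mm * 0.01          # pretend mass proxy
    cost = 5.0 + 0.02 * width_mm + 0.05 * thickness_mm
    strength_penalty = max(0.0, 400.0 - strength)  # must hit a strength target
    return (weights["strength"] * strength_penalty
            + weights["mass"] * mass
            + weights["cost"] * cost)

def hill_climb(weights, steps=100_000, seed=0):
    """Propose small random tweaks and keep any tweak that scores better."""
    rng = random.Random(seed)
    best = (rng.uniform(10, 100), rng.uniform(2, 20))   # (width, thickness)
    best_score = score(*best, weights)
    for _ in range(steps):
        candidate = (max(1.0, best[0] + rng.gauss(0, 2.0)),
                     max(0.5, best[1] + rng.gauss(0, 0.5)))
        s = score(*candidate, weights)
        if s < best_score:
            best, best_score = candidate, s
    return best, best_score

if __name__ == "__main__":
    # The human's job shifts to expressing what they value, not drawing the part.
    weights = {"strength": 10.0, "mass": 1.0, "cost": 0.5}
    (width, thickness), s = hill_climb(weights)
    print(f"best toy strut: width={width:.1f} mm, thickness={thickness:.1f} mm, score={s:.2f}")
```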

We're going to see this sort of thing play out in every discipline in the world over the next 30 or 40 years. X is really interested in some of these spaces where we can help the people of various industries to be inventing and designing much faster.

So you're working completely on moonshots. Are you worried maybe that you're going to have a little bit more competition?

Exactly the opposite. The world is not going to run out of problems, and the fundamental goal of X is to get a bunch of people together to work on those problems as efficiently as possible. The more people can work towards solving humanity's problems, the better off we'll be. So I hope it does democratize things. I'm watching it currently start to democratize things, and I'm super excited about that.

What are you doing with the aquaculture experts?

Oh, aquaculture. So let me take a step back. This is our project on ocean health. We call it Tidal. Humanity gets about two and a half, almost three trillion dollars a year from the oceans, and we are destroying the oceans, as many of you probably know, faster than we're destroying the land or the air. We need to get more value per year from the oceans, and we need to do it in a way that is not only not destroying the oceans, but we need to regenerate the oceans.

There is no possible way to do that unless we find a way to take automation to all the things that we currently do in the oceans. Aquaculture actually helps us not to overfish the oceans, and because the carbon footprint of a pound of fish is one-eighth the carbon footprint of a pound of beef, we are wildly better off as humanity moving to producing food through aquaculture.

But when you go to a huge pen — even our partner Mowi, which is a sustainable aquaculture farm in Norway and the largest salmon farmer in the world — the state of the art, when they wanted to find out how much their fish weighed, was to pull 20 salmon out of a pen with 250,000 salmon, put them on a scale, weigh them, and put them back. So what we're doing is enabling them, through computer vision and machine learning and automation and specialized sensors, to see the health of the fish and the weight of the fish. We're helping them to automate the feeding of the fish, which makes it much more sustainable, because the runoff from overfeeding on these fish farms is actually one of the big problems in aquaculture. So we're making the farmer better while we're making the world better, using machine learning.
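Purely as a sketch of why measuring every fish beats hauling out a 20-fish sample, here is a small hypothetical simulation. The length-to-weight relation, the noise levels, and the pen statistics are invented for illustration; X's actual system uses stereo cameras and learned models, not this formula.

```python
# Hypothetical comparison: hand-weighing a 20-fish sample versus estimating
# every fish's weight from a camera-measured length. All numbers are invented.

import random
import statistics

def weight_from_length(length_cm: float) -> float:
    """Toy length-weight relation of the standard fisheries form W = a * L^3;
    the coefficient is illustrative, not calibrated to real salmon."""
    grams = 0.012 * length_cm ** 3
    return grams / 1000.0   # kilograms

if __name__ == "__main__":
    rng = random.Random(0)
    # Simulate a pen of 250,000 salmon with varying sizes.
    lengths = [rng.gauss(60.0, 6.0) for _ in range(250_000)]
    true_weights = [weight_from_length(l) for l in lengths]

    # Old approach: haul out 20 fish and weigh them on a scale.
    sample = rng.sample(true_weights, 20)
    print(f"20-fish sample estimate : {statistics.mean(sample):.2f} kg average")

    # Vision-based approach: estimate every fish's weight from its measured
    # length, with some per-fish measurement noise.
    estimates = [weight_from_length(l * rng.gauss(1.0, 0.03)) for l in lengths]
    print(f"per-fish vision estimate: {statistics.mean(estimates):.2f} kg average")
    print(f"true pen average        : {statistics.mean(true_weights):.2f} kg")
```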

So is X there to effectively try to insulate Alphabet from the innovator's dilemma? Shouldn't X have been front and center building the first ChatGPT and not letting an OpenAI, for instance, run away with it?

Let me take a step back and remind you sort of how X functions. Our goal is to invent and launch moonshot technologies that, if we do it right, help tackle some of the biggest problems in the world and lay the foundations for enduring, sustainable businesses. One of the early things that we did was a thing which, when we graduated it back to Google, was called Google Brain. Google Brain is the origin of much of what you now think of as Bard.

About five years ago, we said we could feel it on the horizon: we were going to get to the place where the ability would be there for us to work in much tighter loops with software developers, like a partner to them, where we can complete code as they start to write it. And so that work happened for about five years at X. About seven or eight months ago, we moved it back to Google, and it was just announced recently as Codey.

So I want to ask a follow-up question. You mentioned that Brain started in X and then graduated to Google. Doesn't it become a little bit harder to say, I'm going to make something that's competitive with Google's bread and butter?

Scaling anything up to serve more than a billion people a day is super complex. We're talking about a hundred-plus languages. There is a lot — technically, legally, ethically — just to do this that X is totally not set up to do. For the same reasons that X may be a particularly good place to do rapid prototyping and learning of new things, we are not the right place to move something to being ready, in a really thoughtful, responsible way, to serve a billion people a day.

The goal of X is to create really good seed crystals. We don't want to do it all ourselves. We want to get the ball rolling. And so we try a hundred things, and 99 of them don't work out. And our job is to be wrong about those 99 as efficiently as possible, to move past the ones where we're wrong about them with evidence as quickly as we can, so that that one which we can double down on over time can sort of go on to have a really productive future.

So do you want Brain back?

No, no, no. Our job is to catch new waves for Alphabet, to catch new waves for the world. What excites us isn't sort of empire building. We want, in a really efficient way, to keep this balance of audacity and humility just right — audacious enough that we'll try almost anything, but then humble enough that we're honest, right after we start out on each of these investigations, that we're probably wrong and that we need to spend our money figuring out that we're wrong, rather than trying to prove that we're right.

So Astro, I'm actually curious, how do you decide what to fund inside of X?

I do not believe that anyone, certainly including myself, is any better than random at predicting the future. I know that that's not the cool thing to say in Silicon Valley, but I just don't believe that any of us are better than random at predicting the future. I think we can discover the future a good bit more efficiently than maybe other people discover it, but that's very different.

So the core kind of map to beginning a journey at X is we make these three circles. The first one is, there has to be a huge problem with the world that you want to solve. Number two, there has to be some radical proposed solution for how to fix that problem — some science-fiction-sounding product or service, however unlikely it is that we could make it, that if we made it would make that huge problem go away. And then there has to be some kind of core technology opportunity, some breakthrough technology that gives us the opportunity to start on that quest. And those three things together are a moonshot story hypothesis.

That doesn't mean you're right. In fact, you're almost certainly wrong. But if you can propose those three things, it has the form of a moonshot. And then the answer is: great, gorgeous moonshot story hypothesis. What is the smallest amount of money and shortest amount of time you think it'll take to kill your idea? Because your idea has a 99% chance of being wrong, and there's no way to avoid that, because if that wasn't true, it wouldn't be radical. We're only interested in the over-the-horizon stuff. And so as soon as we sign up for that, we are explicitly signing up for being wrong most of the time.

More than half of what we're doing right now is in the climate change space — not because I mandated that, but because that's what people were excited about, and those are legitimately some of the biggest problems the world is facing right now.

Is there anything from your process that the people here could put into place inside their companies that might help them achieve those 10x moments?

I would actually argue what I just described is the most efficient you can be in trying to find something new. One of the things that we say at X is, if you're starting out on a journey — I know it's no fun to kill your ideas — but let's say for argument's sake that the idea you have isn't going to make it. Would you like to find out now for one dollar, or find out three years from now for like 10 million dollars? And everyone of course says, well, I guess I'd like to find out now for a dollar. Great. Welcome to X.

How are we going to do this? Are we going to be intellectually honest or intellectually dishonest about our discovery process? What I just described is not rocket science. It's not like X invented this. It's just so hard to do. All of human nature drives us in the opposite direction from what I'm describing. So what X spends all its time doing is trying to create the infrastructure, the social norms, reward systems, so that people actually do these things.

Let's check in on Waymo. I think I'm seeing more progress in self-driving cars now than I have in maybe the past year.

For sure, it turned out to be harder than we expected. One of the things that caught Waymo by surprise is that there are a lot of things that human drivers do that, if we want to follow the laws, we can't do. So there are a lot of edge cases to make sure that this is super safe. There are three cities in the United States where you can get a ride from Waymo with nobody in the front seat — Los Angeles, Phoenix, and San Francisco. And I don't know exactly when, but I pretty much guarantee there will be more cities over time.

Another project that I want to talk to you about is Project Loon — these balloons that are supposed to beam internet down to everyone no matter where you are.

Loon was a goal: could we make a worldwide stratospheric layer of balloons, like cell towers but floating at 65,000 feet, that were talking to each other in an ad hoc mesh network and beaming LTE or 5G to the ground, so that people in rural communities around the world would have a good internet connection? It took a long time. We built it. It was working. We were beaming LTE and 5G to hundreds of thousands of people in multiple countries. We couldn't figure out how to get the business to close with the owners of the spectrum, so we turned it off. It made us very sad.

Starlink is a cool company, but they're solving a different problem. There's a fixed amount of bandwidth you can land in any one region when you beam something down from a satellite. It was crushing for all of X to kill Loon.

And we have this saying at X: we're very focused on moonshot compost. Whenever we stop something, the people, the code, the patents, the know-how — we try to keep them all at X. They recirculate, working on new projects. One of the technologies that allowed these balloons at 65,000 feet to communicate with each other at very high bandwidth was lasers. And so when Loon ended, someone said, well, how come we couldn't put those lasers on the ground? You know, like on a pole.

And that sounded almost embarrassingly too simple. But fast forward five years: we have these things, they're about this big, smaller than a traffic light. Each one shoots a laser up to 20 kilometers. It's eye-safe, it's unregulated, it's near-visible light. And it's received by another box that's two feet tall. You strap them on two poles. If you plug the internet — like a fiber optic cable — into one, you have 20 gigabits per second up to 20 kilometers away, for less than one one-thousandth the cost of trenching fiber. And we're rolling these out right now, mainly in Africa and India. And that project is now moving more data to real customers per day than Loon moved in its entire history. So go moonshot compost. That project is called Taara.

[applause]

What about some medical or physical issues? Is that ever something that you'd want to take on, or societal issues? One person said community. Is that something that X would ever take on?

There's nothing we wouldn't take on. As long as we could be proud of the output and there's a technology solution to the problem, we would be excited to work on it. We have had various explorations in the social justice space, because the temptation is to think that something like belonging or systemic bias in society is just not amenable to technology helping. And maybe that's true, but that's not written in stone somewhere. Like, shame on us for not at least trying.

Education — I will consider it a failure if X doesn't eventually have a great moonshot in the education space. We've tried like 30 things. It's so painful.

And then someone yelled out carbon sequestration. Yes, we have done a bunch of interesting work in carbon sequestration, in green hydrogen. We're doing explorations in lots of other parts of the space. We're excited about the possibility for making much lower carbon footprint cement. These tend to be large machines, chemistry problems, where inverse design can be really helpful.

We had this project — it was one of my favorites. We had a device that was taking seawater and producing methanol you could burn in a gas tank. There are four billion internal combustion engines in the world. This project was called Foghorn. That felt like a real save-the-world moment. It was actually working, and we could not convince ourselves we were going to get it cheaper than about $15 a gallon of gas equivalent. And that's just not going to save the world. So we turned it off.

Let me ask you one last question. Why do we continue to try to invent and build?

I'll give you two answers. The first answer is, humans are fundamentally explorers. I think we all have some pioneer spirit inside of us, wanting to learn, wanting to grow, wanting to find new things and do new things. It's a very fundamental part of who we all are.

The other thing is, there is enough pain and complexity in being alive, enough reasons to think short-term, that people are going to by and large do what is in their somewhat narrower self-interests. So if we want to save the world, if we want to make the world a radically better place, we have to find ways for doing the right thing to be cheaper than doing the wrong thing — especially when it comes to climate change. And the only way that we're going to get to the place where doing the right thing is cheaper than doing the wrong thing, when we can dig the problem out of the ground and burn it, is going to be radical innovation. So that's why I believe we're working on it, and I think that's why the whole world is working hard on it right now.

Astro, thanks so much for coming. My pleasure. Thank you for having me.

[applause]
