Hi Robin! So, I wanted to ask you about a puzzle. Something that bothers me,
bothers a lot of philosophers—maybe will bother you less than us—but I’m going
to frame it the way that the philosopher Samuel Scheffler did in his book
Death and the Afterlife, in terms of an infertility scenario. And if you’ve
seen the movie Children of Men…
…or if you’ve read the book—okay, that’s basically the scenario in that movie,
which is that, like, at some point, you know, there are just not going to be
any more people. We learn that there are not going to be any more people. No
one’s going to become pregnant again. And, you know, I think if we learned
that we were in that scenario—if we learned that, like, whoever the children
were that were most recently born would be the final children—
We would be— I, imaginatively projecting myself into that scenario, become
filled with panic and dread. It seems horrifying to me.
And more horrifying than, say, like, imagining my own death. Even imagining
the death of everyone I know. Right? I mean, in the long term, right? And, in
fact, like, what the conceit of the movie—and Scheffler sort of goes along
with this—is that, if we knew we were the last generation, then that would sap
our energy for most of the things that we do. So we just no longer feel…
Lie around on the couch, staring at the wall—what’s the point?
Yeah, and not just that, but also, what’s the point in behaving cooperatively
to others? So you get wars, you get, just, people treating each other
horribly. Right? So what’s the point of ethics even, right? That’s the conceit
of the movie, whether you agree with it or not, right? So the idea would be,
like, some have— Scheffler’s thought is, like, “A lot in our lives now is
riding on the thought that we’re not the last generation.” Right? And were we
to believe that we were the last generation, our lives would collapse, and all
the things that we take to be meaningful, or at any rate, most of them, would
just no longer strike us as meaningful.
Okay. So this is sort of his
thesis. And, essentially, his point is just: Future generations really matter
to us. But for me, there’s something very disturbing about this, because it
suggests that humanity is a pyramid scheme. Because there would— We know for
sure that we’re not going to last forever, right? And however long we last, it
won’t be forever, because forever is a really long time. And so there’ll be
some last generation, right? And their lives will have no meaning. Right? And
then the generation before them, right, if they— Like, if we knew that the
next one was the last one, then what we’re doing is promoting the lives of
people whose lives have no meaning, right? And it trickles—run it backwards.
And so, do our lives actually have any meaning? And is it legitimate for us to
depend in this way, on future generations?
Well, certainly, a lot of pieces of fiction like this make some presumption
about, sort of, the linchpin of meaning for us. And then they present a
scenario where that linchpin is gone, and everything falls apart. And a lot of
disaster movies of various sorts are like that. And they kind of lie in the
sense that, you know, in real disasters, people are much more cooperative and
helpful and “continue on”-seeming, you know, than disaster movies seem to
present. So, I would bet in the actual Children of Men scenario, we wouldn’t get
all these wars and terrible things. It would be sad, and I think people then
would be sadder. I don’t think they would be so sad that they would just lie
around on the couch and do nothing, or go out and slash each other’s throats
just for the hell of it. I just don’t believe that.
But nevertheless, I
do think there is a point to valuing future generations. And I do— I
appreciate your reaction that it sounds horrifying, because I’m often
horrified by people who have the opposite reaction. People who say, “Well,
there’s no point in humanity lasting much longer,” because, you know, global
warming, or inequality, or something like that. That just seems like a crazy
overreaction to decide to give up on the future because of that.
In game theory, you may know, there’s a standard analysis of repeated games. And
one— The standard analysis says that if you have a finite number of
repetitions of the game, and we all know what the last game is, then on the
last game, the prediction is we don’t cooperate because there’s no threat of
the future. And then the game before that we don’t cooperate because we don’t
expect to cooperate in the last game, and so on.
So this is the standard iterated Prisoner’s Dilemma problem: That there would
never be any cooperation, because there’s never a threat in the future,
because you know exactly when the last period’s going to happen. If we just
change the game to give a random last period, where in the period before the
last period you don’t know that it’s the last period, well, everything
changes. Now, as long as, you know, the chance per period is low, now people
do cooperate, and they do each expect to have a future, and then one of them
is surprised to be wrong. So, that seems to me a reasonable attitude we can
have with this. To say that, yes, there will eventually be one that’s the
last, but we— it won’t know that it’s that. Or— And the ones before it won’t
know either, the five ones before, or something.
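The backward-induction unraveling and the random-horizon fix described here can be sketched in a few lines of code (a toy model: the payoff numbers and the grim-trigger strategy are illustrative assumptions, not anything specified in the conversation):

```python
# Iterated Prisoner's Dilemma: known last round vs. random last round.
# Illustrative payoffs: T = temptation, R = mutual cooperation,
# P = mutual defection, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def backward_induction(n_rounds):
    """With a commonly known last round, unravel from the end:
    defection is dominant in round n, hence in round n-1, and so on
    all the way back to round 1."""
    actions = []
    for _ in range(n_rounds, 0, -1):
        # No future left to threaten with, so defect; the same logic
        # then applies one round earlier.
        actions.append("defect")
    return actions

def cooperation_sustainable(delta):
    """Random horizon: after each round, play continues with probability
    delta. Under grim trigger, cooperating forever is worth R/(1-delta);
    a one-shot defection yields T now plus P forever after. Cooperation
    is an equilibrium iff R/(1-delta) >= T + delta*P/(1-delta),
    i.e. delta >= (T - R) / (T - P)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

print(backward_induction(3))         # ['defect', 'defect', 'defect']
print((T - R) / (T - P))             # 0.5: threshold continuation probability
print(cooperation_sustainable(0.9))  # True: likely future, so cooperate
print(cooperation_sustainable(0.2))  # False: ending too likely, so defect
```

The threshold (T − R)/(T − P) is the standard grim-trigger condition: cooperation survives exactly when the chance of another round is high enough that the future is worth protecting.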
And so then they don’t
have this problem of having no expectation of the future. But you could just
say, you know, Why put so much meaning about future generations? Why should
that matter so much? And, in some sense, that’s just part of being part of
communities, right? So you might say, Why would someone lost on a desert
island sort of feel despondent? They’ll never get to meet anybody else, right?
Because, you know, they can think thoughts, and they can draw some art, they
can make a meal… You know, why be so despondent that you’ll never meet another
human in your life? Well, because we’re very social creatures, right? We care
about being around each other. And so you could think of that similarly across
generations. We would be much lonelier as a generation to think that we
weren’t going to be part of this continuity of future generations. And it’s
important to us to be part of this longer history.
So, just with respect to the people behaving well in disasters, I mean, it
might be they behave well in some disasters, but not other disasters, right?
So, like, if you think about friends, close friends, you put them in some
situations, like where they’re threatened from the outside—they might band
together and, like, become even closer friends. You put them in other
situations, and they might be at each other’s throats.
The kind of disaster we’re talking about isn’t one where anyone’s going to
immediately starve, or die of some disease, or whatever. It’s just a piece of
knowledge. Right? And that knowledge is, there are no more future generations.
And I guess my thought is that I feel inclined to make some inferences from
the kind of panic that that fills me with, to like: What would it do? What
would it be like if a whole society of people were filled with that kind of
panic? And I think the answer is not that we would get a bunch of
prosocial responses out of people. It wouldn’t be like the disaster of a giant
wave, or a giant army, or whatever. Like, I’m inclined to think that, like,
James—P.D. James—is right that it would be one of those kinds of disasters that
makes people less social rather than more.
I’m not sure that that much is at stake in our predictions about how
people react to this disaster. I mean, the interesting question here is, how
much do we value the future generations? And how much should we value future
generations? And, in some sense, what form should that value take? So I have a
survey question I did a long— many years ago, that I thought was especially
interesting. I’m not even sure if I did it as a survey, but I did it as a blog
post, I think independently.
So the idea is, imagine, you know, a
civilization that is, you know, a billion people, times ten generations. And
now, imagine another civilization that’s a million people, like, times 10,000
generations. So you could— Two different civilizations, one is spread across
space, and the other is spread across time. And the question is, which
civilization do you respect the most? Which world would you rather live in?
And it seemed clear that people really preferred the long-lived civilization. That
was a much more noble, impressive, grand scenario, the million people living
across 10,000 generations. Which is interesting, because of course, in some
sense, it’s spreading— preventing contact, right? When there’s more people,
there’s more of them you could go meet! There’s more people you could go
find. And so, by spreading them across time, you’re preventing
many of them from ever being able to find or meet each other. So you might
think, you know, that’s bad! I mean, it might be like spreading this billion
people across a long line. They can only travel a small distance on the line,
right? They couldn’t meet as many. But still, people— It shows that people
really, sort of, put this value on this across-time connection.
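As an aside, the two hypothetical civilizations are deliberately equal in total lives, and the “preventing contact” point can be made concrete by counting pairs of contemporaries (a toy calculation using the population figures from the conversation; treating only same-generation people as able to meet is a simplifying assumption):

```python
from math import comb

# Two hypothetical civilizations with the same total number of lives:
wide = (1_000_000_000, 10)    # a billion people, ten generations
long_ = (1_000_000, 10_000)   # a million people, 10,000 generations

def total_lives(pop, gens):
    return pop * gens

def contemporary_pairs(pop, gens):
    """Pairs of people who could in principle meet, assuming (a toy
    simplification) that only people in the same generation overlap."""
    return comb(pop, 2) * gens

print(total_lives(*wide) == total_lives(*long_))  # True: 10 billion lives each
ratio = contemporary_pairs(*wide) / contemporary_pairs(*long_)
print(round(ratio))  # 1000: the wide world allows ~1000x more potential meetings
```

So on this crude measure the spread-across-space world really does allow vastly more meeting, which is what makes the survey preference for the long-lived civilization interesting.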
Right. So let me, like, give you my worrisome hypothesis about why people have
that preference. And it’s also why I don’t feel very reassured by the thought
that the random occurrence of the final extinction event, or whatever, really
solves this problem. So, I think that people are imagining themselves into
these worlds, right? And when you have the world that lasts a long time, they
imagine themselves into it. They’re like, “I’m very far from the end,” right?
Whereas the other one, they could be close to the end, right? And now, why
would it matter to you so much to be close to the end, as opposed to far from
the end, like, where you’re located?
And I think, well, maybe here’s the
thing: Humanity really is a pyramid scheme, in that we’re all in some sense
predicating the value of our own lives, and what we’re doing, on something
that actually can’t underwrite that value. So we’re sort of writing these
empty checks. But if we’re far enough away from that event, we can deceive
ourselves about that, and not make it apparent that that’s what we’re doing.
And that’s why they prefer the longer civilization.
So let’s go with the other spatial analogy here. So, you know, a world of a
billion people: In that world, we often find people saying, “My life wouldn’t
be meaningful unless I could help everybody else.” So it’s a similar sort of
pyramid scheme, right? You say, if you just selfishly live your life, that’s
not a meaningful life. You need to live your life helping others. And of
course, if they have to live their life helping others, then we’ve got a
similar pyramid-scheme logic: Who’s the ultimate person who’s helped by all of
us helping each other?
And, you know, you might say, “Well, if we each
put 90% weight on helping each other and 10% weight on ourselves, then at
least, you know, maybe it sort of all adds up!” We each put a lot of effort
into helping lots of other people, but— And they put a lot into helping us,
then maybe we all end up with a lot of nice feelings, and experiences, and
support. But, you know, you wouldn’t need that 10%, right, if the only meaning
from our life was to help others, and we all only help each other, then we
might, like— There might be nothing at the end of that, right? So isn’t that…
Very similar. I think altruism is a pyramid scheme. And it’s just a big
mistake to think that the altruistic life is, like, a good or even coherent
life. It’s an interesting fact that if you look at, like, a philosopher like
Aristotle: early on in his Ethics, he considers a variety of lives. And
he’s like, is this life a good life? Is this— He’s like, which is the best
life? You know, so he considers a life devoted to bodily pleasure, he
considers a life devoted to honor, he considers a life devoted to making
money. The life devoted to virtue.
He dismisses all of those as not being
the best life. He doesn’t even consider the life devoted to helping others. It
doesn’t even show up for him. And I think the reason is, like, it’s obviously
a pyramid scheme, right? That is, it’s obvious that, in some sense, the
meaning of your life, right, would then be— in a sense, you’ve shifted the
bump in the rug onto other people, and then if they’re also altruists…
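The 90%/10% picture from a moment ago can be made concrete with a tiny fixed-point calculation (a toy model introduced purely for illustration, not anything in the conversation: everyone is identical, putting weight w on others’ well-being and 1 − w on their own base good):

```python
def settled_utility(base, w, iters=1000):
    """Iterate u <- (1 - w) * base + w * u: each person's well-being is
    a (1 - w) share of their own base good plus a w share of everyone
    else's well-being (all agents identical, so 'others' equals u)."""
    u = 0.0
    for _ in range(iters):
        u = (1 - w) * base + w * u
    return u

print(settled_utility(10.0, 0.9))  # ~10.0: the 10% self-weight anchors real value
print(settled_utility(10.0, 1.0))  # 0.0: pure altruism never picks up any value
```

With any self-weight at all, the value eventually comes home in the limit; at w = 1 the recursion has nothing at the bottom, which is the pyramid-scheme worry in miniature.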
And I think maybe the most basic, like, I don’t know, premise, or
dogma of, like, ancient ethics, which is sometimes called “eudaimonism,”
right—but it’s shared by, you know, Plato and Aristotle—is that, like, the
good of your life, whatever it is, is something that has to come home to you.
Like, it can’t be located in another person’s— Your happiness can’t be located
in another person’s life.
So that’s pretty individualistic, we might say, and that word should call to mind
the fact that there are other people who— other cultures who see themselves as
less individualistic and criticize us for being too individualistic. And so,
let’s imagine someone who joins or makes a family. And they say to themselves,
Family is the most important thing to me. And it’s an example of a larger
common phrase, to be something— Part of something larger than yourself. And
people say they get meaning and satisfaction out of being part of something
larger than themselves.
And an example of that is family. Probably the
most ancient, most, you know, common example. So if you devote yourself to
your family, you are part of the family. But the family isn’t you—family is
this larger unit, and you can feel good helping them. So if you take the
family on a nice vacation, and we all get along together, you got along
together too. Take the family out, they have nice food, you’ve got nice food too. You,
you know, had good experiences, you told jokes, the idea being we’re all
getting along, that feels good to you and felt good to them. But it’s framed
as, you know, building and helping the family.
So Aristotle’s ethics is actually much less individualistic than our
contemporary ethics. He says man is a political animal, and he thinks that
happiness is only conceivable in the context of a community. So, when I said,
like, that Aristotle would not even consider the altruistic life, I meant
specifically a kind of altruism where you’re benefiting people at a distance
from you. They don’t— It doesn’t need to be a physical distance.
The point is where their happiness is really distinct from your happiness. And
that’s different from a case where, in effect, you’re embedded in a community
such that your pursuit of happiness is coordinated with theirs. So that
there’s a shared pursuit, right?
But then you’re not benefiting them sort of independently of benefiting
yourself. You’re also benefiting yourself, right?
But so, the, you know— You only get the pyramid scheme going if the benefit to
others is in some sense independent of the benefit to yourself. Which, like,
plausibly, in the case of, say, giving to charity, or whatever, even if
there’s some small benefit you get, the idea is supposed to be that the larger
benefit goes to the person receiving the charity, or the person that… So the
point is, like, with respect now to the— Taking that now to the diachronic
version, it’s like, if the throw weight, right, of the benefiting is in the
future, you know, that’s one picture. Versus if, in some sense, you have a
shared pursuit of the good with these future people, it’s more like your
family; that would be a different case.
Right. So if you’re imagining the good appearing at small spatial-time scales,
and somebody somewhere gets the benefit, that’s different than imagining the
benefit is to the unit. That is, you know, you— I can be proud of being part
of humanity, and trying to advance the progress of humanity, and to be part of
this great story of humanity. In which case, I can want there to be future
generations, and that to be part of the sort of reason why I exist. And what
I’m doing is to help not just me and you, but help them and connect us to them.
It’s very different, though, to think this is awesome, there should be more of
it. And to think this only has any value insofar as something comes later.
But that’s just too extreme.
Well, the way I was setting up the infertility scenario, which is the— And
maybe it is a caricature, but, like, let’s take on the caricature case, like—
I say, it fills me with existential dread and panic, such that I could see
someone being faced with that thinking, “None of what I am doing has any meaning.”
But that’s an exaggeration. But let’s take the at-one-time thing, right?
Imagine, you know, you’re on a Mars colony, and you see the asteroid hit
Earth, and now you realize 99% of humanity is gone, and it’s only you on Mars
who are left. You could be sad. That would be completely reasonable to be sad.
That was a huge loss, you’re still continuing, but 99% of humanity was lost.
And similarly, if you thought we’re going to have a long future, and suddenly
it’s not there, then 99% of humanity is lost—and it’s completely reasonable
to be sad about that.
I guess, like, the picture that you get in these novels is not that people are
sad. Like you might think if you’re sad, you actually want to cherish the,
like, the little bit of time we have, or whatever.
The last time we have. Absolutely.
Right? Versus, like— You know, you might think of it as, like, two very
different pictures of parenting. Here’s another way, right? So, like, I sort
of grew up, you know, in a kind of generation of people—my grandparents were
Holocaust survivors, where there was really a mentality of, like, living for
your children. And like, you know, every decision— Like, my parents came to
this country, they framed that decision as for me and my sister.
Laying on the guilt!
I don’t think it was intended to make me feel guilty.
No, but it succeeds at that purpose.
I think that they would have felt guilty to frame it any other way. And there
have been times when they have been shocked that I’ve made decisions, where I
reasonably could expect that decision to be worse, overall, for my kids. And
I’m like, “But my view is like, well, it’s good enough for them, and it’s a
lot better for me!” Right?
But that mode of thinking was, like, not
permissible to, like, my parents and their parents. Like, every decision you
made had to be good for your kids because in some sense your life was for the
sake of their life, right? And my thought is, like, well, it’s not a good way
to live. Because like, do I— You know, if I live for the sake of my kids, do I
want them to sacrifice their lives for the sake of their kids? And, like, does
the benefit— Does anyone anywhere ever get the benefit, is the
question—right?—of a pyramid scheme.
Right. But again, I’m countering with this sort of
becoming-part-of-something-larger-than-yourself, and imagining yourself as part of that. So
there’s an objective question of what would happen in Children of Men, and I
guess I disagree with this person. I think it’s more like, 99% of humanity
died, you’re sad, but perhaps you’re more committed to, you know, continuing
with the 1% that remains. And yes, I would more predict that if it was the last
generation, people would, like, take their lives a little more seriously. And
they wouldn’t just go out and slash each other’s throats. They would be
especially, like, high-minded about— Like, this is important, we get this last…
So this is a really interesting case where you have kind of, like, the naive
and noble view of humanity, and I have the cynical view. But let’s just
suppose for a minute, right, suppose that there were a way to acquire data
about this, right? Not an easy thing to do. But suppose there were a way. And
suppose we acquired a bunch of data. And suppose it turned out that I was
right, and James and Scheffler were right, and in fact this is how humanity would behave.
Then, would you— What else would you adjust in your view here? Like, would you
then become disturbed by the pyramid scheme?
I might just say that they had a threshold of meaning. Like, they wanted to
build this grand thing where humanity stretched across many generations. And
there was some threshold of that being enough, and it isn’t enough yet. And
now, it’s never going to be enough. And so the whole thing is lost.
I see. Okay, so your thought would be like, but you know— But given that the
threshold was at least achievable, it wasn’t a kind of incoherent project,
which is the thing I’m worried about with the pyramid scheme.
So there’s a related set of issues here that sort of press at me more in the
communities that I’ve been in, which is not just, “Do we value the future, and
how much?” Or “How much do we sacrifice for them?” But like, “How is our value
for them contingent on them sharing values with us?” The key question is,
like, say you run a shop, it’s a grocery, and you wanted your children to run
the grocery after you. They decide they don’t want to run your grocery. They
want to go off and do software or something, right?
Some people in history have been tempted to disown them, and say, you know,
“This is the family. This grocery shop’s been in the family for ten
generations. And that’s who we are. We tried to teach you as best we could
that groceries are our thing. And you defied us and rejected our values.” And,
okay, maybe you think that’s a little trivial. But you might grant that at
some point, there’s some difference, some way in which they could go so far
away from your values, that you might no longer want to embrace them. I mean,
you know, I don’t want to lay out the specific scenario, but you could
probably start to imagine just how far away it would be. What would it take
for you to disown your children? Just what sort of heinous crimes would they
have to commit? And embrace them—not just, you know, do it temporarily and
mistakenly, and then repent—but just embrace them and support them, and have
them all the more, right?
Can I interrupt you with an anecdote?
When my son, my oldest son, was five, I was talking to him. I explained to him
one of my views, which is that being— That acting is not a worthwhile way to
spend your life, because you’re just pretending like you’re someone else. And
it’s, like, maybe the one job that I have trouble respecting. And he said to
me, “So if I became an actor, you wouldn’t love me anymore?” [Laughs] Or you’d
be disappointed in me, or something, I can’t remember. Anyway, maybe acting…
So you might think that, like, you know, what if they became Nazis, or
something like that. Or racists, or something like that, and then you might
less embrace them. You might even reject them, just, to some degree. But I
actually, you know, have a book called The Age of Em: Work, Love and Life when
Robots Rule the Earth, and a larger sort of set of discussions about the
future. And my middle prediction for the future is that our descendants get
very different from us. And that happens relatively fast, and they can get
really, really different.
And so this becomes a live question. So, my
book— Many people look at that book, and they say, Those creatures and their
lives as I depict them, that’s terrible. And they’ve already gone past the
line. I do not embrace those as my descendants. I do not want that to be the,
you know— If that’s what’s going to happen, I’d rather just everything end.
And so, people are drawing these different lines. So it’s a very
live, I think, real question: How far can they go? So you might think about it
at levels of abstraction: You might think, “Well, as long as they like music,
I don’t— They don’t have to like my kind of music.” Or, “As long as they get
together and have meals, they don’t have to like my kind of food,” or
something, right? And you might say, “Well, I just want the overall
structure—where they, you know, they have a family, and they have a career,
and they get together, and they listen to music, and they hear stories. And if
the details of those are different, that’s okay, as long as it’s this larger structure.”
But what if the larger structure is different? You know, how
far can they go? And so, I mean, the key thing that really drives this is
realizing how different we are from our ancestors, which I think you are more
aware of than most, being a scholar who studies very old people from a long
time ago, right? We are really different from our ancestors. And if they saw
us, and our attitudes toward religion, and our ancestors, and community, and
war— You know, they would be horrified and disgusted by many of our changes.
And we have to expect that that will happen for our descendants too.
You mean that our descendants will look back at us and say they would have
been horrified with us?
…And we would look at them and be horrified, by some of their changes.
So, but is the conclusion you draw from that, It’s quite reasonable for people
to be horrified because the ancients would also have been horrified by us.
Well, it forces you to ask, Were the ancients right to be horrified by us? So
if we imagine a conversation between us and our ancients—ancient ancestors—I
imagine them going first—you know, they hear about us—and then they go, “Eww!”
And then we go, “Wait!” And we try to persuade them that we’re not so bad, and
show them all the great things we are. I think, you know— and then they may or
may not be convinced, but that’s somewhat of an open question.
Well, why wouldn’t you think that they would have engaged in this very same
ratiocination, and said, “Hmm, people even way more ancient than us…”
Right? And thus…
They just may not have done that, right? Of course. But they could.
Right. They could have, right? So, I mean, like, one thing you might think is,
like, “Look, this is the tragic fate of humanity: that most of us, most of
the time, will be put off by these large differences.” It’s not clear it
matters so much. Like, because…
It has to do with, like, how invested are you in the future of humanity? I
have an affiliation with the Future of Humanity Institute at Oxford. And so,
that phrase, you know, evokes that, you know, we at that institute do many
things, thinking about what could get in the way of the future of humanity,
and then how we might do things to prevent that. And then we often find many
people don’t seem to care very much.
And so, we’re trying to get people invested in thinking about the future of
humanity. And the question is, Then how much do you value it? And it’s a key
question about allowing change. So one of the approaches to solving this
problem of the future becoming different is to stop allowing change. And many
people I know in this community are actually quite serious about that. They
call it value drift. And they think it’s a horrible evil that must be stopped.
So even if we have accepted value drift across time in the past,
they do not want to accept it. Now for the future, they want to find a way to
lock in values to make sure they do not drift. Here’s an interesting thought
experiment—I mean, certainly as a philosopher, you’re probably familiar with
this sort of thing, because I got this— A philosophy teacher told me that they
talk about the Star Trek transporter to their students. And so, you know, that
you get into the transporter and it reads where your atoms are, and then it
sends that information down to the planet, and then it makes a new arrangement
of atoms in the same form, and it throws away the old atoms, and now you
appear on the planet. And there you are.
And so the question is, in the
philosophy class: You know, are these two creatures the same? And there’s two
ways to ask the same question that get very different answers, which is
relevant here. If you say, “You’re about to get into the transporter. Will the
thing that comes out, will that be you?” It’s about 50/50. People aren’t so sure
that the thing that comes out will be them. But if you say, “You just stepped
out of the transporter. Was the thing that went in you?” It’s 100% yes.
So when you look back in time, you’re much more willing to embrace the path that
led to you as being essentially you, than when you look forward in time, and
see where you might go, and to embrace that.
Makes a lot of sense to me. Because the backward-looking perspective has a lot
more knowledge. But I guess it’s— I don’t get why the moral of this story
isn’t just, “This doesn’t matter very much,” in the sense that it’s just not
clear to me— Like, the ancients also wanted to stop allowing change, and they
wanted to lock in values, and everyone at any time has— Every era has
thought, like, “Ah, the young generation, they’re screwing everything up.”
And like parents do that with their kids. And it’s not clear
to me that those moves… Like, it seems to me that the change that future
people are capable of bringing about is not dependent on our present
acceptance of it, and may even be— They even may make bigger changes based on
our rejection. Which is, like, how children can rebel more against their parents.
Well, if you thought it was simply impossible to constrain future values, then
you just shrug and say there’s nothing that can be done. But in fact when
people think about the future and technology, they imagine a pretty wide space
of possible technologies, and they start to imagine that it would be possible.
And it’s not entirely crazy. And so, I mean, what might be crazy is just how far
you try to go in that direction. But, for example, you know, we may have
machine-based minds as our descendants. And they could be direct brain
emulations like I describe in my book, or they could be minds that we have
constructed more directly. And these minds can have a range of similarities to
our minds. They could be rather different or pretty similar.
And it’s also possible, with machine minds, to put more controls in to limit how
they can change. So our human minds just have their plasticity and their
nature, which is pretty opaque to us, and so we can’t do much, right? You
can’t just go into your kid’s mind and make sure he likes your kind of music.
But that might be possible for future machine minds. Which then raises the
issue, should you?
Yeah, so I think that there’s a question, like, How much open-ended scope for
new forms of greatness and creativity is there that would be, like, the
trade-off for value drift, right? So like the ancients couldn’t foresee that,
like, you know, if we give up on some of their attachments to a certain kind
of family structure, to a certain kind of size of community, right? Their
communities had to be small. To wars as being a really important, just, part
of culture—like everyone, every man has to have fought in a war… That sort of thing.
If we give those things up, which were deeply valuable to them, we
get really big rewards in exchange. They couldn’t have foreseen the kinds of
rewards, and they couldn’t have foreseen the, like, the sort of mental agility
and creativity that would reap the rewards.
And so I think, I guess— I
think the question of how sort of, like, accepting we should be of these
changes would be a function of how much potential there is there, right? And
it seems to me that if that potential is big enough, it’s going to elude our
predictive abilities precisely in the ways that prevent us from controlling
it. That’s exactly what has happened historically, right?
But we don’t know if they would have approved of us. We know that from our
point of view we have realized big gains. We’re not so sure that they would
think of these as gains.
Absolutely. And I don’t want to presuppose that they would have. My point is,
just, the kinds of moves that we’ve made were not foreseen.
And so we should predict that the kinds of moves our descendants make—even if
our descendants, our AI descendants, or emulations or whatever, they’re going
to operate in ways where we— There’s just, like, a bunch of possibilities we
haven’t considered, right? And so, like, if I imagine— Suppose the ancients,
you know, suppose Aristotle tried to make a description of, you know, what
would 2020 look like, right? And I just think like even though he was the most
capacious mind I’ve ever, like, encountered, I just don’t think it would have
been a good prediction.
So this intersects with people’s concerns about capitalism in an interesting
way. And with competition. So, a lot of people in our world, and for a while,
have seen humans as substantially different from other animals. And that
humans put a lot of time and energy into things that aren’t, you know,
competitive. That don’t give us direct competitive advantages, like art and
music and dance and fiction and romance, and the wide range of things, right?
And so if you see humans as somehow having beaten the odds, and found a way to
spend a bunch of time on all these useless, you know, unproductive activities
from the point of view of competition, then you can be afraid that if
competition continues, or even gets stronger, then it will take them away.
That is, you’d say, “Well sure, you know, our descendants need to mate, but they
don’t need to fall in love.” Sure, they need to talk, but they don’t need
beautiful language. Or they need to give instructions, they don’t need to tell
wonderful stories. And for many people, it’s somewhat of an axiom that a lot
of the things that we value in our world come at the cost of competitiveness.
And so a continued or increased competitive world will take away these things,
sort of, just as a matter of, you know, biological-evolution logic.
Then they say, “Well, that— We predict, therefore, that our descendants will
just be these workaholic machines, drones, who, like, again, don’t have music
and stories and love and, you know, friendship, etc. Because these are all…
Yeah. I mean what’s interesting here is, in a way, your thought that we should
be accepting of this—the workaholic-machine-descendant future is sort of in
tension with your wanting to resist worries about the pyramid scheme. Because
you might think, like, the way that we resist the pyramid scheme is that we
invest in the present, right? And we have some present way of experiencing the
value of our lives. And it’s just a given, like, it’s stories, and love… And,
like, that’s how we’re doing it, right? And we could detach from that a bit,
right? And then we could correspondingly get a greater attachment to the
future, right? And to however humanity is going to value then, right?
We can, maybe, have some element of choice in terms of— It’s like a parent,
right? Choosing how much to invest in the joy of their own life versus their
child, right? And so like in effect, like, the more that we sort of try to
like get into the mindset of saying these workaholic, love— Unromantic,
story-less, you know, human—er, AI, or whatever—the descendants, okay? These
descendants, their lives are valuable—the more we are sort of stepping into
worries about the pyramid scheme.
Well, if that were the only, you know, axis of choice, that seems relevant. But
I mean, people are imagining other options. For example, they might imagine,
Let’s have a world government, or we make sure that we prevent competition.
Or strong regulations to prevent competition.
Well, but you don’t like that.
I’m less enamored of that, yes.
Right, my point was just these two ideas of yours are in tension with one
another: The we-don’t-have-to-worry-about-the-pyramid-scheme, because the
people would just be sad now, if they— but they would still appreciate their
lives, on the one hand; and the pro-change…
So, you know, one set of responses is to say that the things that humans have
that they value are competitive. That is, they arose in the context of a
competitive evolutionary process.
Those are not the same claim, though, right? That they are competitive, and
that they arose in the context of the competitive…
They’re related claims, though. First of all, you know, the “arose” claim is
on stronger ground. Easier to justify. Right? That all these human features that we
treasure did arise through competition.
Mm-hm. A lot of bad stuff might also come through competition.
Of course, right? So, war, for example, arose through competition as well,
right? Or torture, and, you know, all sorts of mean things. But also the
things we value most, they also arose through competition. So that doesn’t
mean, you know, they all have to go away. But now if the competitive process
changes, then some of those won’t be exactly optimal for the new competitive
worlds. But there might be other, similar, related things that become optimal.
And that’s the hard thing: Like, stories may change, like,
the kind of stories we told 100,000 years ago may not be the kind of stories
we tell in the future. But does that mean they’ll never tell any stories, if
stories have some overall functional use? So, one set of answers is just to
try to show people how functional many of the things humans have are. And that
was part of my story in Age of Em: to say, Well, you know… And many of
them remain functional.
And it’s also true that just many of them are
deeply embedded in our nature. So the more deeply embedded these things are in
our nature, the harder it will be for competition to take them out, unless,
sort of, competition replaces us wholesale. And so that becomes related
to the issue of How much like us will descendants be?
So if we imagine
the, you know, the choice between something like a brain emulation, which
arose very similar to a human and so can’t change that much from the
human, at least over short timescales, and a completely newly
designed mind, you might think the newly designed mind could then make bigger
changes with respect to things that are deeply embedded in our
nature, because they need not be deeply embedded in theirs.
Let me ask you a slight tangent question (but it’s related). Suppose— So you
also feel very, very sad at the thought of the infertility scenario, right? I
don’t know, maybe you don’t feel the full force of like the panic and
existential dread, but you feel you have a response to it, like a visceral…
It’s like seeing the Earth being destroyed while you’re in the tiny colony on…
Right. I mean, so I think it’s not like that. It’s way worse than that,
because the thought is, like—
For sure it’s even worse in that book, right?
It’s the nothingness. It’s the prospect of the nothingness. So what if like
you knew that yes, we would all be destroyed, but an alien civilization is
going to come and— Now, first I might say inhabit Earth. I’m not sure it
matters whether they inhabit Earth. They inhabit some places.
Yeah. Some places, right. Absolutely.
And, like, how reassuring is that to you?
It is actually reassuring to me. So, I mean, we may have time to talk about
this at some other point, but I’ve done this recent work on what I call grabby
aliens: Trying to predict where in the universe there are aliens. And I
predict they, in fact, are out there, and we’ll meet them roughly in a billion
years. And they’re sort of, you know, one-every-million-galaxies sort of
frequency. And, you know, that’s very infrequent in terms of where they
started. But they spread out, and within a billion years or so they fill up
the whole universe with stuff of them.
So that future is reassuring to me
in the sense that if we die, there are creatures out there who find their
lives valuable who would go on. We have— The whole future of life in the
universe doesn’t rest on us, which is, you know, the Children of Men scenario,
you could sort of think of it that way. Like the universe is entirely empty
except for us. We are the only creatures— Sure, humans are the only, sort of,
advanced creatures on Earth who have some sort of culture and, you know…
And Children of Men says, “And that’s gone,” and then the
entire universe is empty after that. And that’s a really sad vision, right?
And so, in a way, like, for you… Yeah, there almost is no— Like, given the,
like— I would say you probably think there’s a high probability that there’s
at least one advanced civilization.
Right? And so, you know, even if humanity goes out, then there’s just this…
5000 years ago, if we lived on an island, and a volcano erupts on the island,
we can see the lava going down the hill, and all of us who live on the
island are going to die. If I remember that people from another island once
visited, I will go, “Well, at least somebody will go on.” Thankfully, it isn’t
everything that will die. It’ll just be my culture and my family and my
friends, which is terribly sad, but not so sad as to think everything dies.
And, do you have any allegiance to humanity over the aliens? Like, suppose
there was some kind of trade-off situation where if we…
I do, but I’m not sure how strong it is. And that’s, like, a really deep,
interesting question. So— And this is related to your stuff about altruism. So
I know a lot of people in this “effective altruism” movement, and they are
really tied emotionally to this concept of altruism. And their concept of
altruism is a pretty broad, unspecific target of altruism, like what you were
thinking. And, you know, something I said in a talk at an event once was
basically that, Look, so far in history, the main way anybody has ever
influenced the future is by having descendants. Overwhelmingly, the most
influence on the future has gone through that channel. And that means you
should consider if you want to have an influence on the future, thinking about
using that channel.
And the influence of having descendants is tied to
the idea of, like, having an allegiance or an affiliation, right? So even
think about nations in the world today, right? You might say, I want the world
to do better. And then you could, like, be supporting the United Nations, or
various multinational organizations. Or you could say, Well, I’m going to
affiliate myself with my country. And my country, like, has a military
institution, and I’ll help them, or it has a research arm, and I’m going to,
like, make an alliance with other people in my country where we’re going to
help our country help the future. And you might think, Well that’s not as
altruistic. Right? You’re not trying to help everybody, you’re trying to help…
But I say— But evolution, cultural and genetic, is this
process by which things help themselves. And you know, that’s the main way all
influence has happened. And I worry that if you create these communities,
organizations, that create this habit of just trying to help everybody, those
things don’t survive evolutionary pressures, cultural or genetic. They would
go away, right? That is, the habit of just helping the world indiscriminately
might just not have heritage, might not have descendants in a way that helping
your country, your community, even your ethnicity, your family, your…
Yeah, so as you were talking I was sort of thinking— I sort of had this flash
of like, How would Aristotle see this idea of helping people who were not in
your family? Right? And just kind of, sort of devoting your life and even
sacrificing yourself and the goods of your life to helping them? I think he
would call that slavery, because I think he— That’s what he thought a slave
was: somebody the goal of whose life is the happiness of another
person. And I think what he would have said is, like, having slaves that are
sort of the slaves of everyone is not a very effective way to have slavery.
That is, a slave has to belong, like, to a particular community and to a
particular person so that—
Will they take care of them, instruct them? You know, develop them?
Who can specifically, who can give them instruction as to how to help them,
right? And so what you might think is, like, the, you know, the effective
altruists would like to be slaves of everybody. But that’s not a kind of
coherent beneficence project. Because it’s actually hard to know what is good
for someone else.
There are several things wrong with it. But of course, you know, that’s one of
them. But it doesn’t have to be deadly. It could just be a limitation, a
cost, as opposed to something that ends the project. Of course, you know, I
would say that another problem of that community is just how do they
coordinate to know if they’re actually being effective altruists? Because they
usually sort of just trust, sort of, meeting each other and feeling like they
think the other person has the right motives or something, which may not be—
Right, and a slave has much more intimate knowledge of whether they’re
benefitting or not.
Like they might be beaten, if they don’t do certain things.
That’s right. But the thing— I mean, I tend to sort of come back to sort of
long-term processes. And I do tend to think natural selection, or selection,
will just be a continuing force for a long time. And the main alternative is
governance. And so I actually think one of the main choices that we will have,
and the future will have, is the choice between allowing competition and
replacing it with governance. And both of them have downsides and risks. I
mean, obviously, competition has this risk that the things we value will be
competed away. So I even have, you know, a colleague, Nick Bostrom, who has an
essay about imagining that consciousness would be evolved away, right? We’re
not sure where it comes from or why it’s there. So competition might decide
that it could do without it, right? And then we just have all of these, you
know, they say, “Disneyland without the children.”
Because there’d be nobody there. Right? And so that’s in a sense a risk of
competition. There’s also, like, I have this paper called “Burning the Cosmic
Commons,” like, imagining all the waste that would be produced by an
uncoordinated race to go out and colonize things. Because nobody owns stuff,
and you just want to go out and grab it first, then a lot of stuff would be
burned up. So there are substantial costs from not coordinating. But there are
also substantial costs of governance.
So you know, in our world so
far—where governance is local—if people do governance badly in one
place, then they have a competitive disadvantage at the larger scale. And so
competition disciplines governance. But if you’re trying to solve, you know,
the basic problem of competition with governance, then you have to create a
governance that isn’t disciplined by competition, in which case, what
disciplines it? Like if we have a world government and it’s powerful, and it
can preserve itself against, you know, civil war and rebellion and even, you
know, people’s bad-mouthing it through censorship? Well, what ensures that
that evolves well, or doesn’t degrade badly?
Well then that would be a larger-scale competition! Right? But of course,
again, that would just bother people all the more, right? They don’t want
competition between the aliens because then that produces competition, which
has all these costs, right?
But it— I mean it doesn’t maybe necessarily matter so much what they want,
right? As like…
Like but it would suggest, for instance, the more we take the alien
possibility seriously, the more we might want to move towards a world…
Well it depends on how badly we think we will manage it. Right? So I mean,
again, the governance has the two sides of the risks, right? If we have big,
say, problems like global warming, and we don’t have world governance to solve
them, then we end up realizing those problems, and that’s expensive, right?
Or, say, war: We don’t have a world government to stop war, then we keep
having wars. Which is expensive and damaging, right?
On the other hand,
if we do choose a world government, and then it entrenches itself, and then it
becomes this big bloated parasite that, say, limits free expression, limits
innovation, limits growth—then, like, it could prevent the growth and
innovation that would have allowed us to meet aliens on their terms.
And so it’s a really big choice we’re making, and you know in some sense the
recommendation would be, drift a little in that direction but go slowly, and
check and test. Don’t just jump all the way in into a world government and
hope for the best.
I mean, it’s not— It doesn’t seem to me that we can easily implement a world
government in any case. So there’s the— The jump doesn’t seem possible.
I’d say we already have, halfway. So a lot of governance in many communities
isn’t formal, it’s through an elite community which shares an elite opinion,
and then sort of disciplines each other in the sense of deciding who’s elite.
And so all communities in history have had that, even before we had any formal
law or governance. And at the moment the various elite communities in the
world are not competing with each other so much as having merged into a single
world elite community, which then is not being disciplined by competition.
You know, I was struck, in the pandemic—at the very beginning of the
pandemic—by how various health officials, like, had their recommendations of
what to do, and then elites around the world talked about what they thought
was going on and what should be done, and they came to an opinion, and they
declared it and
then everybody fell in line. All the experts just said, “Yes, sir!” And
basically, the entire world pretty much followed the same pandemic strategy,
and things that I was interested in pursuing, like variolation or challenge
trials, were simply not allowed anywhere.
And this is also true of a
number of other areas of regulation, say telecommunications regulation,
nuclear power regulation, a lot of different areas. Basically there are these
elite communities in the world; they talk to each other, and then sort of
decide together what they think the right kind of regulation is. And then they
all do it the same way. And there really isn’t much deviation anywhere.
So in that sense, we actually do have a world government, and we’re already
sort of halfway there. The problem is, if this elite community, like, makes the
wrong choice, there isn’t another elite community out there to compete with
them. If you, say, as a potential elite, do something different from what everybody
else said about the pandemic, then you will just be tossed out of the elites.
And you won’t have a basis to do other things, you will just— And everybody
knows that so they know they need to go along with what the elites decide.
So can I tell you what confuses me about this whole situation? It’s that it
seems to me to be the product of competition. That is, so competition goes
along with— People who like competition tend to like, like, a lot of, like,
global-level exchange and open borders. And so then what happens is, you get
like everyone trying to go to the same top schools, etc. Right? We’re all
competing in the same place.
And there’s this weird way in which competition actually leads to coordination
of a certain kind, and of a certain maybe noxious kind.
And so, like, your thought here that we have like these two options, namely
competition and governance, where competition is like pro-innovation, and
governance is pro-coordination, doesn’t seem right to me.
I mean a certain kind of competition, like competition between societies.
Right, so when we’re all in the same society, and we share some concept of
status, then each of us through competition is forced to, sort of, pursue that
kind of status.
Yeah, but how do we get the competition between societies? So is it that we
close borders, we lock the internet…
Well in the past, it was just the long travel distances and language barriers…
Right, but now we need heavy government regulation to shield us from those
other people, right?
So obviously many, like, space-colonization fans have said, “Ah, we just need
people to head off to another star” or something, and then we will return to
this world of separation, and that in some sense would be true, but we’re a
long way off from being able to implement that. And so yes, this is a—yeah,
so— It’s sort of a common concept of, sort of, cultural evolution, or social
evolution, that some kinds of evolution happen between separated groups. That
is you know, so— Say, norms.
Social norms are a powerful influence on
social behavior. And how would we know that we have good norms? And so the
historical story was to say, well, there were all these different societies
that just randomly had different norms. And then some of them, the norms
promoted their, you know, innovation, survival, winning in war, etc. And so
the norms we inherited until recently were the product of that sometimes
fierce cultural evolution.
But it was a, you know— But these were
different societies, relatively isolated, and each of which had the different
norms. But now that we’re all together in one society, all sharing one set of
norms, the norm evolution process is no longer going to be disciplined by
that. And that’s an issue now. And I’m not offering a solution to that. I’m
just pointing it out. So— you say— I say, Well, government’s a problem.
You say, Yeah, but we’re a long way from that. And I say, No, we actually are
halfway there. In the sense that, you know, our norm-based, you know,
elite-status-based coordination has actually been integrated at a world scale.
Right. But what I was saying is, you said there were two distinct paths, and I
see them as one path. That is, competition leads to coordination and
entrenchment of the status quo. That is the result of competition, not…
Of course. Absolutely.
And so we don’t— It’s not like we have the world-government-coordination path
versus the competition path. There aren’t two paths, is my point.
We can choose how much to allow world government to go forward. So it’s a— I
mean, if we ask just what are our levers of choice here?
You know, if somebody proposes to give the United Nations stronger powers, or
to create, you know, the larger units—we have a choice about whether to
embrace those or resist those. So there is some degree of choice there. And we
also have choices in each society to what degree to, like, try to become
integrated with the world society, or to be somewhat different. Because those
are the choices we have. And then, they have a limited degree of influence
over the outcomes.
But you might think, for instance, if we don’t pull the levers for the world
government there’s still—we still might be pulling the secret levers for the
elite coordination that you’re describing, right?
And so it may be whether or not we pull that lever doesn’t matter so much,
because we are being driven by these forces of…
I mean it seems to me the really— The important question is how could one
construct competition that doesn’t tend to lead to this kind of noxious…
And in fact, you know, that’s the question of better forms of government. So
one of the things we should do, as we drift toward stronger world
governance, is to consider more strongly different alternative forms of
governance, and which of them might be better by these criteria.
As you may know, I have a proposal for a form of government, and maybe we’ll
talk about that in a future episode. But exactly, that would be a criterion to
bring to bear in evaluating proposed forms of government, is to say How well
can they deal with this problem? That is, how well can they create an internal
competition? I mean for example democracy: The claim is, you know, instead of
military competition we have democratic competition. Instead of going to war
and fighting each other, we fight over who gets more votes, right? And the
claim is that that’s a substitute and we have less damaging fights. Right?
So could we make— But that same sort of democratic competition could be more
susceptible to this cultural elite, you know, merging, in which all of the
political parties that have any chance of winning all have to pick from
the same elite people who pick the same elite policies, right? And so the
question is, could we— Are there alternatives that would more resist elite
consensus in terms of the policies chosen, via competitive democratic…
Right, so it seems like the question isn’t, “How much should we go for world
government versus how much should we go for competition?” The question is,
given that we are going for world government in one way or another—that is,
informally, and you know, if not formally—what kind of world government should
we aim for? Right?
There are choices on a lot of different margins. But even the world government
question is open. It’s still an open question at the moment whether sort of the
Chinese elites and leadership and their culture merges with the European and
American elites and culture, or whether it diverges, at least for a while, and
creates, you know, an actual, more distinct alternative. And so many people in
the U.S., for example, who have disliked U.S. elite positions on, say, genetic
engineering—or our nuclear regulations—have looked to China and
said, Well, those in China seem to be willing to try that stuff. Maybe
competition with China will allow more variation than has been true so far.
And it’s still too early to tell, basically.
Interestingly though, like, if one really wanted, in some sense, there to be
this true cultural evolution and cultural competition, one wouldn’t want to be
too welcoming or friendly to the Chinese—like, one would want to be more
inclined to reject, right?
Because in so—
But it’s similar to the counter— You know, the paradox of war, if you like,
right? On the face of it, in immediate terms, war looks terrible, right? There
are very few things in our world that look more terrible than war to us.
Although the ancients, like, had a different view. But we have to admit that
it seems to be one of the main engines of cultural evolution over the last—
One of the main ways in which societies who did things
better won out over other ones was through war. And so, you know, in losing
war, we’re losing that engine of cultural innovation. And it’s a paradox—we’re
torn, right? Similarly for cultural integration at the world scale, right? The
more that we have these separate cultures that are suspicious of each other
and fight each other, there’s going to be direct cost for that, right?
So might it be the case that, like, there are a lot of substantial benefits,
where if we look in advance it’s like the transporter thing— If we look in
advance the benefits are just not going to be apparent to us, like with war or—
But in fact at the end, maybe they are beneficial, you know, in ways
they’d eventually recognize. But that the project of trying to get people to
see those things in advance would be pointless, like, because you’re asking
them to understand something that’s…
Right, but you just want to show enough historical examples to say, “But
they’re probably there.” And so in my mind the most dramatic example is the
transition from foraging to farming. So humans were foragers for hundreds of
thousands of years, if not millions of years. And then roughly 10,000 years
ago there was this transition from foraging to farming, and the transition
probably spread over 50,000 years in a lot of different ways. But it was a
very wrenching, dramatic transition. And by most accounts, at an individual
level, life is just worse after the transition.
So humans have these very
strong egalitarian values and values for leisure, and so foragers were, you
know, they lived in small groups, they had a lot of free time, they didn’t own
much property, they were relatively promiscuous—didn’t even have like, you
know, ownership of mates. And they, sort of, enforced egalitarian
rules in the sense of making sure nobody took charge, and there wasn’t an
elite, and everybody got together and talked about all the key decisions.
You know, in many ways that fits our ideals of what human life would be like.
And then because we had so much cultural plasticity, we became farmers. And
farmers had war, and trade, and slavery, and marriage, and class inequality,
and less—worse nutrition because they had a more limited range of things they
ate. And they didn’t get to travel as much, they had to work more hours a day,
and had more disease from density. And just, so, by all the usual measures—
Like, this transition from foraging to farming was, like, worse! Everybody’s
worse off, right?
And so if you could have imagined that transition but
not been able to see farther into the future, you might have said, Let’s not let
humans become farmers. Let’s stay foragers. And that could have made complete
sense from the point of view of just looking at those two comparisons. And of
course it would have taken a lot of foresight to be able to see what farming
life would be like, but we’ve actually seen this happen in times where like—
Even like, you know, when European colonists ran around the world, often, you
know, there was a conflict between the European lives and the local lives.
Interestingly, we often saw Europeans go native and move over to the local
cultures, and we didn’t see the reverse nearly as much, right? Not
so many natives, you know, decided to join the European culture as went the
other direction. So you know, in some very local sense, people didn’t make
those choices. And many cultures have looked at— say, many foraging cultures
looked at farming cultures and said, No, we won’t become those. But, you know,
larger evolutionary pressures won out.
So let me ask you a final question. So is your strategy something like this?
You know, you ask the person, Is that going to be you when you step into the
transporter? And they’re like, “No.” But then you’re like, you have them watch
a lot of people go through, and you have them watch and see every time the
person goes through, they say “No,” but then afterwards, they say, “Yes!”
And your hope is that you get enough of these cases, and then the person is
gonna say, Yeah, that’s gonna be me. That’s your strategy?
Yeah! Well, it’s one of the valuable applications of history. So you know that
there’s this story we hear that, you know, people who don’t see the past have
to repeat it or something.
But I mean, is that actually your prediction? Like, do you think that the…
Well, most of my prediction is just— But there’s a whole bunch of other things
in play here, right? So I mean a lot depends on, like, just how much more
different you expect future descendants to be than past people have been. So I
know a lot of people who say they’re completely willing to embrace all the
range of human culture we’ve ever seen in history, but they say, Well, machine
minds, those are different. You know, those will be different in so much
larger ways that I don’t want that to happen. I just want to stay within the
range of human minds we’ve ever seen. And I don’t want to go to this larger
space of machine minds. And that’s where they draw the line. So you know, and
so for them, they’re doing that in the knowledge of all this human history.
They’re saying, Sure, look at all these things that happen for humans, but
they’ve all been humans, and we like them all.
It’s kind of like, Well yeah, I saw all the other people go to the
transporter, but is it gonna be me? I just— it was just them. Right?
Yeah, right, but it could be like, now you’re going to a metal planet or
something. All the people who went through the transporters, they went to a
garden, and now you’re going somewhere different, and going, Um, this one’s…
Okay, we should stop there.