Fertility
Robin:
Hello, Agnes.
Agnes:
Hi, Robin.
Robin:
I suggested we talk about fertility because that's what I've been thinking
about a lot the last week or so. You, probably not so much, but we'll find out
what we can find to talk about. Okay. You have studied the ancient world a lot
and the idea is that we might be facing a big change in civilization in a
century or so that could last for several centuries that would be like the
ancient world in some ways and not in others. But the observation is that
fertility has been falling quite consistently over time, seems to correlate
with increasing wealth. And the usual projections are that most of the world
is already at below replacement fertility, and the parts that are above are
declining rapidly. And so relatively soon, world
population will peak and then start declining. And it's quite plausible, at
least in my recent analysis, that when population starts declining soon
afterwards, the economy will start declining, the world economy, and that soon
after the world economy starts declining, innovation will come to a halt. And
this decline could take a long time. So for example, at the rich nation's
current fertility levels of say 1.4 children per woman in her lifetime,
population would fall by a factor of two every two generations. At that rate,
it would take about 1600 years for humanity to go extinct. Now, we do believe
that eventually something else will happen. But there could be several
centuries in
between, before something happens to revive fertility. And during that period,
innovation would basically come to a halt. And the world would start to lose a
lot of scale economies, a lot of ways in which we benefit from having a larger
world, specialization, etc. And the sort of scenarios that seem plausible as
ways to revive fertility are all somewhat disturbing and somewhat difficult.
So there's some surprise coming, some way in which this would end. So as a
futurist, this is very striking to me. That is, I've usually been counting on
continued progress and growth as my baseline for all sorts of projections,
including, say, my book, Age of M. and lots of other analysis. And it has been
very consistent growth for a long time, at least at the world level. And so
the idea that we'd have several centuries of decline is jarring for me. And it
means I have to give up on some hopes, at least for that period. And I have to
worry about what can last through that period. What sorts of things that I
like now that are nice would survive that period of decline to then come back
again. So this is where I am about this. And I gotta say, it's kind of
emotional for me. I'm not very emotional usually in our discussions, but the
future has been important to me. And so this bright future has been an
important emotional anchor for me. I look forward, I look up, and here I see a
decline, a long, painful decline before perhaps another revival. Anyway, so
that's my context here. I'm happy to take this in whatever direction occurs
to you.
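Robin's decline arithmetic above can be sanity-checked in a few lines. The sketch below assumes a replacement fertility of about 2.1 children per woman, 30-year generations, and a current population of roughly 8 billion; none of those three figures appear in the conversation itself, so the outputs are illustrative rather than exact.

```python
import math

# Assumed inputs (illustrative; only the 1.4 figure is from the conversation).
fertility = 1.4            # children per woman in rich nations
replacement = 2.1          # children per woman needed to hold population steady
years_per_generation = 30  # rough length of a human generation
population = 8e9           # rough current world population

# Each generation is fertility/replacement times the size of the last,
# so two generations shrink the population by ratio squared.
ratio = fertility / replacement
print(round(ratio**2, 2))  # 0.44 -- two generations roughly halve the population

# Generations until the population falls below one person.
generations = math.log(population) / math.log(1 / ratio)
print(round(generations * years_per_generation))  # 1687 -- close to the 1600 cited
```

With slightly different assumptions (say, a replacement rate of 2.0, or longer generations) the horizon shifts by a few centuries, which is why "about 1600 years" is best read as an order of magnitude.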
Agnes:
I had a couple of questions. So the first one is, when you say this hasn't
happened for a long time, the idea of worldwide decline for many centuries, as
far as we know, it hasn't ever happened, right?
Robin:
Well, I mean, if you think of, say, the fall of the Roman Empire, or even the
fall of the Han Chinese Empire, there have been periods when locally empires
fell.
Agnes:
Locally, yes. Right, but the idea of a global thing.
Robin:
Now, there was
somewhat of a correlation between Rome and China in terms of their fall, and
those were the two major empires in the entire world at the time. And if we go
back farther, there are probably bigger falls. So apparently, like, you know,
before the rise of Greece, there was famously a dark age there, after a
previous civilization had collapsed, with the Sea Peoples showing up in the
Mediterranean or something. So farther back in history, when the world was
smaller, we have seen empires falling in ways that, at least as far as
anybody locally could tell, represented a world in decline.
But in terms of the actual world declining, we haven't really seen much of
that. This would be an entire world decline.
Agnes:
Right. So that's maybe already one reason to think it won't happen. It's just
never... Well, yes. Okay.
Robin:
That's a reason. I will grant you that.
Agnes:
So here's another question that I have. You say all the ideas about how this
might get turned around in a couple of centuries from now are sort of
disturbing or difficult. But you have also said to me on other occasions that
if we were to get the future into view, it would disturb us. Just generally.
Robin:
Yes, I have. Yes, indeed.
Agnes:
So is this just our becoming able to glimpse that? Or is this somehow
noteworthy as an especially disturbing future?
Robin:
Well, it's a place where you can get a closer handle on it. That is, it's one
thing to say in the abstract, we've got to expect the future to be strange and
disturbing. And it's another to point to specifics, and then we get a closer
handle on the specific strangeness. So, for example, in my book, Age of Em, if
you accept that the book would happen, then you get a lot of details about the
kinds of strangeness that show up, but you're not so sure the book's gonna
happen. And here we've got a thing that all, pretty much all of the different
versions of it give you more concrete, specific strangeness and, you know,
worry. And so that gives you a closer handle, right? You can more directly see
it.
Agnes:
You think this, you know, worldwide economic decline for multiple centuries is
more likely than the scenario in the Age of Em?
Robin:
Yes, in the sense that this seems kind of inevitable. That is, this isn't like
a thing that might happen. Like this is pretty sure to happen unless something
else weird happens first.
Agnes:
Well, that's kind of true of anything, but it's like if something doesn't
happen to prevent, you know, whatever.
Robin:
Right. But basically the usual caveats do apply.
Agnes:
So
Robin:
Okay, but it's still different imagining, like we could say there could be a
world war sometime, right? Another huge world war. And that would certainly be
disastrous and cause things, but we don't see any particular process that's
pushing us toward a world war at any particular date in the future or in any
particular place. We just have this abstract idea. There might be another
world war. This is a process we see happening now. It has been happening for
centuries. We can track that forward and see where it's likely to go. In that
sense, it's a much more predictable outcome.
Agnes:
Right. Though, like, I mean, you know, Malthus said, look, for centuries,
we've seen this process of, you know, population outstripping the food
supply, and we can posit a mechanism, because the population grows at a
higher rate. But what tends to happen is, in fact, that things tend to
change.
Right. And there are intervening factors. So someone could sort of agree with
you that if things were to continue for hundreds of years along the lines in
which they have been going, that this would happen, but not feel very
confident that they will continue.
Robin:
Nevertheless, it still seems like the sort of thing a futurist should be
paying attention to. I mean, if you think about it in your personal life, you
know, you should project your mortgage payments and your children and whether
your job's ending and those things into the future, even though, you know,
various things will happen to disrupt them. But in general, you should do your
best job to plan with the things you can expect, even knowing that things will
go otherwise.
Agnes:
Mm-hmm. So you think this is important for planning. That is, we should assume
this will happen. And now, what kinds of plans should we be making on the
assumption that population will decline?
Robin:
Well, we can enumerate, as I did in a recent blog post, a bunch of specific
ways this could eventually end, and start to think about which ones we'd
prefer and what might be required to push things in those directions. Some of
them, for example, would require technological innovation. And then if you
realize that once population starts to decline, that will mostly be the end
of technological innovation, you realize you'll need to do that beforehand.
In order to empower those solutions, you'll need to develop those
technologies before the decline. It'll be somewhat too late after.
Agnes:
So one thing that seems odd to me about taking action with reference to the
revival is that it seems like any energy you were going to put into the
revival, you should instead put into the reversal of the trend.
Robin:
But the revival is the reversal of the trend. That's the question. The whole
point is there's this trend that's going to go downward of fertility. The
revival is the rise of fertility. It may take a while, but the question is how
could we revive fertility? If you could do it now, that would be an early,
better solution. So the question of how can we stop or reduce fertility
decline is pretty close to the question of how can we revive fertility?
Agnes:
Okay. So, I mean, I guess I think the question, how will it happen once the
population has gotten to a certain point, seems slightly different than the
question, what should we do now to prevent that from happening?
Robin:
But, uh, so, you can think of it this way: maybe there's rot under your
kitchen floor or something. You can think about how you might deal with the
rot now, if you were able to get some money and, you know, consensus in your
family to deal with it. Or you could think about how you will deal with it in
a year or two, if you can't deal with it now and it slowly gets worse. In
some sense those are part of the same thought process. It's the same problem,
and you could, hopefully, luckily, deal with it sooner, better, but it'll
fester and get worse with time if you don't, and you'll still have to deal
with it. So you still want to enumerate what are the different ways we could
deal with it and which ways we could do first.
Agnes:
Right. I think that the thing, so, like, this gets to me not understanding
what futurism is. It seems like tying yourself in some weird kind of knot: on
the one hand, you're sort of asking what's going to happen, predicting this
thing that's going to happen, and you kind of want to tell us that it is
going to happen. But then the thinking that we're supposed to do in light of
that is to try to prevent it from happening, which is to say that,
conditional on all our activities, maybe it won't happen.
Robin:
Sure. So in your personal life, I think there is a place for guessing what's
likely to happen if you do nothing about a problem, then thinking about what
you could do instead of nothing, then choosing something other than nothing
to do, and then making the problem go away, such that your initial forecast,
conditional on doing nothing, doesn't come true. But with respect to the world
and futurism, it's much more plausible to say, well, you and I are just a tiny
fraction of the world, so we're unlikely to have that much influence on the
world. So most likely, we will move it a little. So it makes more sense to
just predict what's likely to happen if we do nothing, or you and I do nothing
at least, and try to guess that, and then ask which directions we should push
a bit.
Agnes:
You mean, which direction should we push to no effect, as we predict?
Robin:
Not to no effect, but to a small effect.
Agnes:
OK, OK.
Robin:
We'll slightly increase the chances that people go in some directions versus
others. That's the most we could plausibly believe about ourselves.
Agnes:
But our prediction took that into account. So our prediction is, with our
efforts, failure is the result, right?
Robin:
Again, you know, you just want to lay out, usually lay out a scenario under a
set of minimal adjustment assumptions, and then start to consider larger
adjustment assumptions. But for many kinds of forecasting, even analysis,
often you have a reference sort of thing, and then you say what to do
different. So for example, if you take your car in to be repaired, and they
say, well, it'll need an adjustment here soon, and this other thing you might
want to deal with eventually, then usually the reference is, well, what
happens if I do nothing to the car? And then you consider, well, should I do
this thing in the next two months, compared to what if I wait six months or a
year? So here, I think the natural reference is to think of what happens if
this fertility fall continues as it seems to be. And then with respect to that
reference, consider other ways that people might respond. So that people may
well see this trend and see that they want to do something and then respond.
Agnes:
So we can make two predictions. Right, maybe that's what would be helpful to
me. So, like, the earlier prediction of, this is probably what's going to
happen: I guess I wasn't sure whether that was or was not taking into
consideration all the efforts that would be taken by people, partly in
response to what you do, and in response to other people noticing the
problem, to reverse it. So, given all of those efforts, do you still think
that's what's going to happen?
Robin:
Well, unfortunately, our world just doesn't have a good track record of
foreseeing problems in advance and doing much about them in advance. We do
mostly wait till the last minute to respond to things. That's been human
history over the last few centuries, at least.
Agnes:
So on the other hand, human history over the last few centuries is also one
where there has been consistent growth and never a long period of worldwide
decline. So, going on that...
Robin:
Right. But many problems have grown and festered until they got bad, even
though people foresaw them early and recommended doing things about them. And
we just didn't. I mean, even as far as, like, a lot of, say, buildings and
roads and
didn't. I mean, even so far as like a lot of say buildings and roads and
things like that, in government, there's usually a much bigger budget for
making new things than there is for maintaining old things. And many people
look at old things slowly decaying and say, hey, you know, you should save,
save future money by paying now to repair them. And usually we don't do that.
And that's a consistent feature of government in our world. So people tend to
design, say, new government roads and buildings expecting insufficient
maintenance later. Because the way our government system works is there's a
lot more money for doing new things. You can get your name associated with it.
You're the exciting person making a new thing. And there's not much to gain
politically by being the person who pushes for more budget for maintaining old
things.
Agnes:
So OK, let me just flag so that we don't spend all of our time on this topic,
that for me, the peculiar combination of the practical and the theoretical
type of reasoning that a futurist does is just very puzzling. That is,
sometimes you seem to be, from a purely theoretical mode, predicting what's
going to happen. And sometimes you seem to me to be giving a kind of sermon,
like: let's get worked up about this, and I feel sad, and we need to stop it
because it's really bad. Which is more of a practical reasoning, like, what
can we do? And I find it very unnatural to somehow combine those two things
into one form of thought. I'm just always, like, flipping back and forth
between them, and I think that's just the thing I don't understand about
futurism.
Robin:
I just don't have this category in my mind. Practical versus theoretical
reason, that's not as natural a category for me.
Agnes:
Yeah, that might be one of our most fundamental disagreements. But instead of
trying to resolve it right now, let's just talk about, what I'm sure you want
to talk about: what are these various 16 ways that this reversal could
happen?
Robin:
Actually, I only want to talk about those if you and I can have an
interesting, insightful conversation about them. I don't necessarily need to
go into those details. So we should be searching here for what things we can
both have thoughtful commentary on. That's why I'm inviting you to ask which
things in this space seem promising to you as something we could discuss, but
you know I have many things I could just lecture about, but I don't want to do
that if we can be thoughtful.
Agnes:
This is another example of the divide between the practical and the
theoretical, which is like, is it that we're supposed to make a prediction as
to which of these topics are going to be productive or not?
Robin:
Yes. Yes. I mean, in some sense, yes. I don't know. You have to guess. OK.
Well, I outlined these 16 scenarios, and then I, over the last day, ending
this morning, did a set of surveys asking people to rate these things in terms
of how likely they are, how desirable they are, and how much of a good story
they'd make. And that allows me to start with the scenarios that people seem
to think are most likely. The top item, at a score of 100 (because I set the
top of each survey to 100 in the way I do the fitting), is called "insular
subculture"; that's the most likely, and I even have a blog post on that from
a few years back. That scenario suggests that some subculture arises,
analogous to, say, Orthodox Jews or Mormons or the Amish, which internally
promotes fertility and has high internal rewards for fertility, and is also
highly insular, in the sense that its members resist outside culture, don't
want to leave for outside cultures, and, you know, stay inside. That could
take several centuries, in the sense that we have small things somewhat like
that now, but it would take a long time for them to grow. And it seems like
the existing examples I just listed are not, at the moment, actually
sufficiently insular: too large a percentage of their members leave those
cultures. So what we need is the high fertility times a high percentage of
people staying for them to actually grow. And that's an example of one of the
many dramatic scenarios that would end this. It's negative in many ways, in
the sense that most likely such an insular culture would maintain its
insularity in part by disrespecting the larger culture, telling people inside
that they need to stay away from bad aspects of the larger culture that they
don't like. And then, in the end, when it finally grows and succeeds, it
rejects and throws away things in our culture that we treasure. That's part
of the, you know, disturbing, negative part of that scenario: they may throw
away a lot of technology, or throw away a lot of our, you know, social
science or philosophy even. Who knows what they'll throw away, but that will
have to be part of their rationale for why their members should stay insular,
stay inside their culture, and resist outside influences: a story that there
are these polluting things out there, a story justified in concrete terms by
rejecting particular things. So that's an example of a disturbing scenario.
Agnes:
And is the prediction that those things would never be recovered?
Robin:
Well, I mean, there's a sense in which whenever we throw away things, we
usually don't recover much of it. That is, you know, we, for example, I
visited universities with very pretty brick, you know, not brick, stone
buildings, and they say, nobody can make these anymore. The stonemasons who
used to make many, you know, Gothic-style university buildings, those people
don't exist. It's been lost. And maybe they could reconstruct an ability to do
something like it, but we lose many things like that, actually, when we
replace them. So most likely stuff would be lost forever, but maybe something
a lot like it would eventually be reconstructed.
Agnes:
Right, like presumably most of scientific knowledge would eventually be
reconstructed.
Robin:
The most abstract, robust knowledge, yes.
Agnes:
Right. And so the thought is that that scenario is bad because they might
forget Socrates.
Robin:
They could. If they once decided Socrates is part of the evil, bad culture
that they are rejecting, then they could throw away Socrates and the respect
for him, and for a long time not have people who respect and discuss
Socrates. To give you a personal example, that's a plausible thing that could
happen.
Agnes:
Right. I mean, I see, um, like, Socratic inquiry as being akin to, but even
much more so than, natural science, in that it would just be discovered
again. So they may not have the name of Socrates on it, but sure. Um, but so
the main bad thing would just be that there would be some things that would
just be lost, and then there would be this period of time that would be like
a dark age.
Robin:
darker than we might've hoped, at least.
Agnes:
From our point of view. They would not view it as the Dark Ages. Okay. So, I
mean, I guess I find it, you know, I find your thought about the fact that
these futures are going to be very different and very unappealing to us, no
matter what form they take, to be somewhat stymieing, in terms of: you give me
any individual one. I mean, you tell me about the Age of Em, and I'm like, ooh,
I don't like it. I know you don't mind that world, but most of us think that's
terrible. You give me the insular subculture. Am I more horrified by that
than the Age of Em? No. Maybe the same. And so I'm not sure; it looks like no
matter
what I try to do with reference to the future, there's gonna be some weird,
terrible world that shows up. And I'm not sure that I have reasons for
favoring one weird, terrible world over another. And the illusion of those
reasons might just be that there are some that I don't get very clearly into
focus.
Robin:
Imagine that you need to trek across some terrain in the rain, full of muddy
puddles. And you've got to expect that by the time you get across the
terrain, you're going to be pretty muddy. But still, for each step you take,
if you see a dry spot or a big muddy puddle, you'll step on the dry spot,
right? You will
try to minimize how much mud you accumulate through this trek through the
muddy land, even though you expect to fail a lot, to get a lot of mud. So that
could just be a metaphor for, even if you expect to have to compromise a lot,
that whatever you end up with won't be that great relative to your ideal,
still you have a sequence of steps and at each step you will try to get the
best version.
Agnes:
Right, but when we say the best version, am I thinking the version that
preserves the most of the life that I have now and the world that I have now?
Is that even the right goal with respect to the future? Like if you think
about with respect to your children, right? Are your children going to become
different from you to the point of being on, you know, not necessarily
unrecognizable, but like as adults, like have their own goals and their own
ideas. And would it make sense to try to make them preserve as much about you
as possible? I think no.
Robin:
So I think one of the great advantages of concrete cases is it lets you set
aside a lot of that uncertainty. But it's worth pausing and noticing this
fact. So if you just sit down and you say, you know, you're going to face a
lot of decisions in your life, why don't we just figure out right now,
everything you want and all your priorities, and then we'll be set for making
all your life choices. And that mostly doesn't work very well because there's
this really vast space of possible choices you might face in the future and a vast
space of possible different values you can have. And people really do just get
stuck trying to do that. So it actually works a lot better often if you just
pick a particular choice. And then you start to think about, well, what are
your values with respect to this particular choice? Because now it's much
easier. You don't have to think of all possible values for all possible
choices. You just have to ask, well, what are the relevant values for this
choice? And then if you do a sequence of those, you slowly accumulate a better
understanding of your values. But you don't have to figure them all out all at
once. So I agree that in the abstract, with respect to the long distance
future, we face a lot of difficult questions about what exactly we do or
should value. Part of how you can think that through is to start taking
particular live choices and asking, with respect to those choices, what do
you value, or what should you value. And you can just make more progress that
way, taking actual concrete choices one at a time.
Agnes:
So, like, can I just ask: it seems to me that with respect to this fertility
fall you have a very different approach than you have with respect to the
robot takeover fear, right? So there's a lot of people, they're really afraid
that AI is going to kill us all and they think we need to do stuff now to
prevent this. And what you often say to those people is like, well, it's hard
to deal with problems in the long term future now. And yeah, maybe a few
people should be working on this, whatever you'll say, concessions like that.
But basically, like, let's deal with the problem when the problem arises, and
we'll be able to deal with it when the problem arises. And now with respect to
this problem, which is also in the distant future, you want this catastrophe
to feel really present to me, for instance, and you want to be acting with
reference to it now. And so it's sort of like there are these two different
end of the world scenarios, right? The robots kill us and we die out. And why
do you treat them so differently? I mean, maybe it's just you think the one is
much less likely, and that's all there is. But then a lot of the argumentation
about don't make decisions with respect to the distant future, that cuts both
ways.
Robin:
I don't see myself as being as different in these cases as you do. So I'd say
the general approach would be to take any set of issues about the future and
first sketch out what's most likely to happen if nobody does anything unusual
about it, just sketch out some baseline scenarios. The next thing you ask is,
well, what do you like or not like about these scenarios? And then ask, well,
what are some variations on those scenarios such that we might have some
influence on those variations? And then ask which of these variations have
the most potency, in terms of, you know, a low cost to you in the near future
and a high
leverage on the outcomes you care about. That's the overall approach. And so
for many things, you will decide that's pretty much going to happen no matter
what. You're not going to be able to influence it much. You'll set that aside,
and then you'll be trying to identify the few things that you could do that
would have a bigger effect, not only a bigger effect, but also on things you
care a lot about. And that has to be a search. And typically, most of the
things that you see in the future that you like or don't like, you just can't
do much about now. That's just going to be the answer. Hopefully, in the
exercise, you'll find some things and that will be the payoff in some sense. I
mean, I think people don't just want to take some action about the future.
They just actually kind of want to know what the future will be like. That
is, for your sort of story, people just want knowledge. Well, what might the
future be like? And we're curious about it. We'd like to, you know, anchor our
view of the future, not with some vague fuzzy thing, but more concrete images.
You know, we will find some ways to use that, but we don't necessarily know
what we'll use, but, you know, we can practice thinking about that future by
thinking about various things we could do about it. So with respect to AI, I'd
say, you know, my main recommendation is we set up robust "robots took your
job" insurance. That's the sort of thing that you can do ahead of time, that
would have a big payoff and be robust to a lot of scenarios. And then that's
what I recommend. Sorry, go ahead. But there's other things people might want
to do about AI. And I go, I'm just not seeing that you can have much effect
there and that you can be very clear about whether it's good or bad. So it's
about distinguishing which aspects of the scenario you could tie to some
things you could do sooner that would connect.
Agnes:
So I take it that the people who want to spend a lot of money researching how
we can prevent the robot takeover don't have a concrete proposal necessarily
as to what exactly we should do now to prevent it. But they think it's worth
investing a lot of money and time into thinking about that, into coming up
with alternate possibilities. When it comes to the fertility thing, do you
have some proposal like the "robots took your jobs" insurance? Is
there something that jumps out at you as the thing that we could do now?
Robin:
Well, I have favorites among these scenarios in terms of what I would
recommend.
Agnes:
But that's not what I'm asking. I'm not asking which of those scenarios you
want. I'm asking, what is the thing we could do now that could cheaply
influence the bringing about of one of those scenarios? Is there anything in
that category, like the insurance?
Robin:
There are a number of things that we could be doing near term. So among these
16 scenarios I gave, the one that the poll respondents gave the smallest
chance to is actually one of my most economics-y responses, which is that you
could let parents endow children with debt and equity, which the parents
could then sell to fund their parenting. And that would be a very flexible,
efficient way to promote fertility. And if any jurisdiction allowed this, I
think we'd start to get experimentation with it, and it would start to
actually work soon. So, if somebody did that, I think it would have benefits
soon and be doable soon. I just agree with a lot of people that maybe nobody's
gonna allow that. But that's a concrete example of something that could be
done soon. One of the other scenarios here, one that people are already sort
of acting on, is egg freezing and IVF. There is a whole industry of people
trying to develop better, more reliable egg freezing, and even the ability to
make eggs out of non-egg cells in people's bodies; there's a biotech industry
pursuing that. And apparently, I was just told, some places in China are
giving everybody the option to freeze their eggs, I guess as a general social
policy.
Agnes:
So, that's something that promotes fertility. But during the time, namely
over the past, whatever, what has it been, 40 or 50 years, when we've had egg
freezing and we've had IVF, fertility has continued to decline. That is, it
hasn't increased or improved fertility.
Robin:
But if the technology became cheaper and more widespread and more reliable, it
could help, but I agree that it doesn't look sufficient. So the issue is,
there are people, maybe, you know, 10% of potential parents today, who want
to be parents but didn't freeze their eggs, and it's too late for them. Those
people would be helped.
Agnes:
Right, but you also have to think about the people who, because they were
worried about this, had kids earlier, right? Sure, right. So like, I agree.
Just look at the population of richer people or something and ask, has their
fertility gone up due to the fact that this option is available? I don't know
the answer to that question, but I wouldn't be surprised if the answer is no.
Robin:
I'm told that in Denmark, I guess, 10% of babies there are born via egg
freezing and IVF.
Agnes:
So do they have high fertility?
Robin:
I don't know. I mean, I'm not saying that. But the point is, this is just a
category of solution. You asked what we could do now, and that's different
from wanting, like, to be sure that it would work.
Agnes:
Right. My thought is that it could decrease fertility.
Robin:
It could, but the usual bet is that it would increase it, though that could be
wrong. But I would say the key problem is, you know, by the time you unfreeze
eggs and use them via IVF to make a baby, you're also old. And a lot of the
discouragement, the reason people don't want to have kids at the age of 40, is
that it takes a lot of energy to raise kids. They see that at that point they
don't have as much energy, and freezing eggs doesn't solve that problem. So
that's the larger issue.
Agnes:
That's a mechanism by which it could lead to less fertility, right? Because
you might not anticipate that fact when you're 20.
Robin:
Right, and you might realize when you're 40, yeah, I have the eggs now, but I
don't want to do this, this is too much work. Right. Which is why that's rated
lower. So freezing eggs was number five in likelihood among the scenarios in
my polls, down at 57 relative to the max of 100. But those would be my main
doubts about it: first, that it will take some time to get cheap and
widespread, but also that unless people actually have sufficient energy, it
won't get enough people to have kids.
Agnes:
Yeah. Can I give you my instinct about people? At least, I don't know if this
is true of other people, but it's true of me. If I ask what would have held me
back from having kids, for me the main thing wouldn't have been that it's
expensive or that it's difficult or that I'd have less time to spend on my
work, but that parenting exposes you to tragedy. It makes it possible that
your life will become a tragic life. And endowing your kids with debt and
equity doesn't do anything to deal with that problem. I think people have a
really deep fear of having kids for that reason. And I think we are less okay
with tragedy than humans in the past. We less take it to be a given that human
life is a site for tragedy. We have fewer wars; one of the big tragedies right
there has been wiped out from most people's daily lives. We have extensive
social security systems. I've just been reading all these books set in the
19th century where people are just routinely starving to death, most diseases
are not really treatable, et cetera. So I wonder whether some of the aversion
to having kids is just a lower tolerance for sources of tragedy. And in that
case, what would mitigate that, I suppose, would be something that made having
kids feel really, really safe. I think if people could feel really safe having
kids, they would have kids, and then the question is just what could you do?
Robin:
So first we might ask, what's the cause of that change? When you ask how we
could increase fertility, one thing you think of is, well, why did it
decrease? What's the cause of the decrease, so that we could reverse it to get
an increase? And definitely one of the favorite classes of explanations is in
the family of increased wealth, comfort, safety, et cetera. That is, we are
richer, safer from tragedy, right? And that also makes life easier, while
children bring tragedy: they are more work, they are stressful, they're
exhausting. Yes, all those things. So then if we want to turn around and ask
for scenarios, what we might most want to do is somehow change values and
culture, but we don't have good knobs for that. So it's hard to describe a
scenario where we all change our attitudes about these things. That doesn't
seem plausible. It's just a thing to postulate, although I guess it could
happen.
Agnes:
It's quite plausible that it's, in fact, what would happen if we had
population decline and we all became poor again. Then we'd all get used to
tragedy.
Robin:
Right. So one of my scenarios listed is poverty. That is, we could have enough
decline producing enough poverty that, in fact, we reverse this based on
poverty. Unfortunately, I think we'd have to go through a lot of centuries,
because the wealth wouldn't decline that fast. If we look at how far back we
had high fertility, we were really quite a lot poorer than we are now. So
you'd have to go quite a long way down in wealth to reach that level again.
Agnes:
You have to go quite a long way to get to high fertility. But just to reverse
the trend, to start to reverse it, you wouldn't need to do that, right? I
don't know. To start to reverse it, you just need to be above replacement.
Robin:
Right, but we don't know how far that would have to be. But anyway, another
scenario would be just some big war or something that made us get poor a lot
faster.
Agnes:
Right.
Robin:
So again, these are disturbing scenarios. This might happen, but it's not what
we should wish for. We should hope we could do it some other way. But yes,
that might happen.
Agnes:
Right. So one response would be to suggest that this process is naturally
going to reverse itself through poverty eventually, but maybe we don't like
that, in the way that we don't like the insular thing. Again, as I was saying,
it seems like maybe we don't like anything, but okay. But if we ask what would
make people have children, I guess my thought was this: the question people
usually ask themselves is what would make having children feel less taxing,
less difficult, less expensive, less interfering with parents' lifestyles.
People are not usually asking themselves what would make having children feel
less scary, in the sense of less of an exposure to your whole life being
ruined because your child was killed or something.
Robin:
Well, so for example, the second lowest likelihood scenario is one that is the
max on story value, according to my polls, which is parenting factories. Soon
after children were born, they'd be sent off to something like a boarding
school or an orphanage, where there was an industrial, factory-level
regimentation of the children and of raising them, such that the parents don't
have to do much work after that point. Now, that certainly solves the problem
of stressing the parents out and having them feel at risk, at a particular
substantial cost. I mean, this scenario is rated very low in terms of
desirability. People think this is a dystopia, but it does address the concern
you just had.
Agnes:
Right. Right. I mean, except that at least the way that parenting is done now,
giving up your baby to be raised in a child factory would itself be traumatic
for people. That is, that would already be tragedy happening right there. So
it doesn't solve the problem. It would be only if you somehow manufactured the
babies in the factory that you got rid of the tragedy. Or you change the
people so that they're totally fine with putting their babies in the factory.
But that's a big cultural change.
Robin:
Right. I mean, there have been some cultures, elite cultures in the past where
it was traditional to send kids off to boarding school at a young age.
Agnes:
I know; my mother was sent off to boarding school at a young age.
Robin:
So apparently that's the sort of thing cultures can achieve, but whether our
future cultures could achieve it is an open question. Right. So clearly that's
why people rated this as very unlikely. They looked at that and they said,
that's a big ask for a cultural change. And so they don't think that's very
likely. But now one of the more fun scenarios, a scenario that people think is
fun even though they don't think it's very likely, is what I call the gap
decade, where basically you ban people in a certain age range, say from 16 to
26 or something, from doing any career training. They're only allowed to
parent or do something else. And then at 26 they can return to school and
career preparation, after having raised some kids in the intervening ten
years. If we somehow could enforce this, we wouldn't have the problem of, say,
young women fearing that their career potential has to be thrown away if
they're going to have kids, because everybody's career potential is put on
pause during that period. So you can see it doesn't feel so much like a
dystopia, exactly. But it's also a pretty big ask. It would be a pretty large
change, and it's somewhat hard to imagine it actually happening.
Agnes:
Or you could have like, you could have options, right? Like it could be either
you have kids or you do some other kind of public service, like you're in the
army.
Robin:
Right, right, exactly. But it still has to be a pause on career training.
Otherwise, people would try to use it as an, you know, if there's something
they could do in that time that would help them for their career, they might
feel pressure to do that.
Agnes:
Right.
Robin:
So whatever they do needs to not do much for career training.
Agnes:
Right, right. But given that we want to get above replacement, and it's only
10 years, you'd ideally, in this scenario, want people to be having twins or
triplets or quadruplets, so that you get the benefit of the whole 10 years of
child raising, and your child is 10 by the time you start your career. Yes,
sure.
Robin:
Right. So there are other things we can imagine adding onto the scenario to
make it more attractive. But now, number three among the most likely
scenarios, which I agree with, is that some nations might decide to just
heavily subsidize having kids, at the cost of heavily taxing those who don't.
And that would have to be paired with some taxing of emigration, if it's only
some nations that do this and other nations needing population would like to
grab people away. But that could happen. And in our world today, people would
accuse these nations, I guess, of coercing women into motherhood, because
these taxes and subsidies would be very strong incentives of the sort that
people often use the word coercion around. So many people would disapprove of
such strong-arm incentives.
Agnes:
Right. Well, they wouldn't come into being unless people started to frame them
differently and approve of them.
Robin:
Right, but different nations might frame these things differently. So the
nations that approve them would frame them positively, but other nations would
disapprove them. And then you'd have a conflict. We often have that in our
world recently, that different nations have different policies and they
disapprove of each other.
Agnes:
So, what is the import of that?
Robin:
Well, because we have somewhat of a world culture where nations don't like to
be disapproved of. That is, the willingness of nations to do things that
violate or go against the rest of the world is declining. As an example, I'd
give organ sales. The only country in the world now that allows organ sales is
Iran. And bioethics conferences around the world are all the time talking
about how they can make Iran stop deviating from everybody else's agreement
that you shouldn't allow organ sales. The kind of organ sales Iran has is very
restricted; the price is set by the government. But nevertheless, they still
allow those sales. And as you may know, Iran, for many other reasons, faces a
lot of international pressure to stop being a pariah. A lot of pressure and
punishment costs are imposed on them to push them not to deviate from
international consensus. If you recall, during the COVID pandemic, Sweden was
something of a pariah because it was deviating from everybody else's consensus
on how COVID should be handled. And a lot of people were trying to pressure
Sweden into not being a pariah. Sweden was strong enough, in some sense, to
defy that. Most nations aren't as independent and strong, as able to defy
pressure as Sweden.
Agnes:
Well, you might imagine that the nations that do this subsidy, I mean, there
already are nations that subsidize.
Robin:
At a lower level. Part of this scenario is that the subsidies are much higher.
Agnes:
As far as I can tell, they don't face giant sanctions from other nations. I
haven't heard of Hungary facing giant sanctions for this reason, as opposed to
the other things people object to.
Robin:
But when you crank up the incentives, then people may disapprove a lot more.
Agnes:
They may. They may end up disapproving of the countries who don't do it.
Robin:
Sure. So the whole point is just to try to think through some of the
possibilities here, not to be very confident about any one of them.
Agnes:
I think that this is maybe part of my problem: I like to either think
something or not think it. And it's very hard to be in this intermediate
state. You're like, we want to know what's going to happen. But the thing is,
we're not going to know. We're never going to know. That we know. The one
thing we know is that we won't know what's going to happen in the future.
We're never going to find that out, except maybe you, because your brain will
be frozen, and then you'll be thawed in that distant future. But the rest of
us are not going to find out. And so then you're like, okay, but it's nice to
know some of the chances. I guess I'm just not sure that I do want to know the
chances. That is, I often just don't know how to cognitively conduct myself in
relation to these vague possibilities. I can't even put numbers on the
chances. It's just, this might happen. And so I find it hard to feel like I'm
making progress at all. It feels like I'm just throwing some things out there
that might happen.
Robin:
Well, have you ever made a big decision in a sort of decision theory
framework, where you identify scenarios and give them chances and, you know,
values and try to do a weighted average, maybe that's not something you've
ever done.
Agnes:
I don't. I don't believe I've ever done that, no.
Robin:
I mean, that is, of course, as you know, a standard academic theory about how
decisions are and should be made in business and other policy contexts, et
cetera. But real people deviate more, of course, from the theory. But there is
some truth to the plausibility of it. I mean, your children, say, face career
choices. Should they go to college? Should they take a job? Should they live
at home, et cetera? For each of those, you have to make a bunch of rough
estimates about the consequences of each choice. You can't be very sure about
what happens if they go to college, go to a particular college, have a
particular major.
Agnes:
It's quite possible that I'm secretly making these estimates. But I would say
that if so, I must be making them secretly; I'm not aware of making them. And
I have, in fact, had those very discussions, and they haven't gone anywhere
even close to the vicinity of: here are five different scenarios, here are the
chances of the scenarios. That's just not what they're like.
Robin:
I mean, as you may know, many businesses and government agencies do explicit
decision-theory analyses of the decisions they face. There is a whole industry
of decision-theory consultants, supporting software, et cetera, to help
organizations do exactly this practice, and part of it is filling in
probability estimates for various parts of the analysis.
Agnes:
I'm not objecting to them doing that. But there's a question, right? When you
and I are having a conversation about this distant future, are we more like me
talking to my son about whether he should go to college or not? Or are we more
like these experts who advise businesses? Maybe you're imagining yourself more
like those business experts, and you're trying to speak in that language to
ordinary people like me who don't even speak that language. And I'm more like,
well, I guess I should think about this the way I think about everything else
in my life, namely not using decision theory.
Robin:
So this would be interesting for us to explore. That is, you and I have
disagreed in the past, I think, about the value of decision theory. And this
is a concrete example of that sort of disagreement, i.e., different degrees of
being inclined to use it in particular cases.
Agnes:
Right. I mean, I think that in my disinclination to use it in my everyday
life, I'm just very typical. That is, insofar as you're using it in your
everyday life, that's very weird. Very few people, even I think probably very
few decision theorists, use it, frame, explicitly frame decisions in their
lives in this way.
Robin:
Well, as a simple example: sometimes you walk to work, and sometimes it rains.
So sometimes you have to decide whether to wear a raincoat or carry an
umbrella based on the chance of rain. Decision theory says there's a simple
calculation: weigh the cost of carrying an umbrella or raincoat when it
doesn't rain against the cost of not having one when it is raining. That gives
us a ratio, which we can turn into a threshold: the probability of rain above
which you'd want to take the raincoat or umbrella. That's exactly what
decision theory says, and I would say you probably do it in fact. When you
decide whether to take a raincoat or umbrella, you ask yourself: what do I
think the chance of rain is?
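(The threshold calculation Robin describes can be sketched in a few lines of
Python. The cost numbers here are purely illustrative assumptions, not
anything from the conversation.)

```python
# Sketch of the umbrella decision Robin describes.
# Carrying an umbrella costs you something on dry days; getting soaked
# costs you more on rainy days. Both cost numbers are made up.

def umbrella_threshold(cost_carry: float, cost_soaked: float) -> float:
    """Chance of rain above which taking the umbrella has lower expected cost.

    Expected cost without umbrella: p * cost_soaked
    Expected cost with umbrella:    (1 - p) * cost_carry
    These cross at p = cost_carry / (cost_carry + cost_soaked).
    """
    return cost_carry / (cost_carry + cost_soaked)

def should_take_umbrella(p_rain: float, cost_carry: float = 1.0,
                         cost_soaked: float = 9.0) -> bool:
    # Take the umbrella when the chance of rain exceeds the cost-ratio threshold.
    return p_rain > umbrella_threshold(cost_carry, cost_soaked)
```

With these made-up costs the threshold is 0.1, so a 70% forecast clears it
easily while a 5% forecast does not.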
Agnes:
Right, so I was very willing to admit that I might secretly be using this. But
if I might secretly be using it in my everyday life, maybe I'm secretly using
it now in this conversation, right? In all these things I'm saying, even in
the parts where I'm saying let's not use decision theory, maybe I'm actually
secretly using it.
Robin:
But if you and I were about to walk out the door and had to decide to take an
umbrella, I think I could ask you, what do you think the chance of rain is?
And you wouldn't balk and say, I can't think in terms of probabilities.
Agnes:
I would look at my phone and say, 70% chance of rain. OK, what's the next
part? What happens next?
Robin:
Well, you were just saying you were balking at giving probabilities to these
scenarios.
Agnes:
But I don't give the probability. I look at my phone for the probability.
Robin:
That's different. OK, but if you had no phone, you could still talk about what
you thought the chance of rain was. It's not an incoherent concept to you.
It's not a concept you can't accept or think in terms of. You would still have
that concept, and that would be a central concept to the decision to take the
umbrella.
Agnes:
I mean, I don't know. I'm not denying that I have the concept; obviously I
have the concept. The question is whether I employ it in my practical
reasoning about whether or not to take an umbrella. And I guess it's probably
true that if the sky is overcast, say, there's some state of affairs where I
decide the threshold for taking the umbrella has been met. And that's probably
sensitive to questions like: how much stuff am I carrying? Probably the
threshold is higher if I'm carrying a lot of things and it's already heavy. So
that seems true. And so maybe we should say I'm very good at this kind of
reasoning; I just do even better than you because I don't have to make it
conscious or explicit. I just do it. And then maybe we should conclude, from
my saying I don't know what I'm doing here, that my ability to assess these
probabilities is in this case so poor that when I'm reasoning about it, unlike
when I'm looking at the overcast sky, I feel I'm simply making things up.
Robin:
Let me pitch it to you in a different way. There are a lot of intellectual
problems, problems you might think about, that a large range of people look at
and go: you can't make any progress on that, I don't want to think about it.
This is true not just of ordinary people, but even of most academics. For most
questions you could offer them, they would go, I can't see how to think about
that, or why I should bother. And so some of us are in the habit of thinking
about more topics, topics other people reject thinking about. And the question
is, how can we go ahead and think about something other people say you can't
think about? One way is to have some standard questions you can ask about any
strange thing, to at least start the conversation. I think you have a lot of
those skills as someone who has engaged a lot with philosophical questions,
questions that would stymie most people. You can still make some progress on
them because you have a list of, well, let's try this, let's try that:
philosophical tricks to start in on those topics. And for me, when I did the
survey, I used expected utility theory as my simple trick to have something to
ask about these scenarios. I said, well, gee, I've got a bunch of scenarios,
what can I do with them? I thought, oh, I could ask people which are more
likely. That's a question I can ask, a number I could compare; I could get
lots of people to give it, and I could give an answer. And then the other
thing is, well, how much do you like these scenarios? That's a question I can
ask. So, ta-da. I went farther. I didn't just give up and quit and say, who
can think about this, this is too hard. I started down the path in part
because expected utility theory gives me a thing to do.
Agnes:
But the fact that it gives you a thing to do doesn't mean that it gives the
people you're asking any real basis for making these estimates about which is
more likely. That's the thing that strikes me as just pulling things out of
the air. And the thing is, you ask people three things, right? Which is more
likely, which is most desirable, and which makes for the better story. The
third one you didn't get from expected utility theory, I take it.
Robin:
No, I got that from elsewhere. I got it from the fact that I'm planning on
talking to someone soon about helping them write a story about this, so I
thought I would just ask.
Agnes:
But I guess I think—
Robin:
So we do have a lot of data, basically, from asking people questions that they
don't think they have any good basis for answering. And still, they give you
an answer. And when you look at the distribution of answers, the middle of the
distribution is actually pretty good. This is called the wisdom of crowds,
after a book James Surowiecki wrote long ago, which mentioned prediction
markets. But it's a long-known thing. Surprisingly, you can get information
out of people by asking them questions they wouldn't seem to have much basis
for answering.
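(Robin's wisdom-of-crowds point can be illustrated with a small simulation.
The "true" score and the noise level are arbitrary assumptions, chosen only to
show that the middle of many individually noisy answers lands near the truth.)

```python
import random
import statistics

random.seed(0)  # deterministic, for the illustration

true_score = 57  # a scenario's "true" likelihood score out of 100 (made up)
noise_sd = 20    # each respondent is individually very noisy (made up)

# 1,000 respondents each give a wild individual guess...
guesses = [true_score + random.gauss(0, noise_sd) for _ in range(1000)]

# ...yet the median of the distribution sits close to the truth.
crowd_estimate = statistics.median(guesses)
```

Individual guesses here can miss by dozens of points, but the median typically
lands within a few points of the true value.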
Agnes:
Right. And that gives you a good reason to ask those questions. But as the
person who's being asked the questions, it doesn't give you a good reason to
answer them.
Robin:
Once you know that this is true, it does. It gives you a good reason to
contribute your answer to the larger pool, from which the information will
then be distilled.
Agnes:
You give the answer, but you're like, this is probably not right.
Robin:
But still, it helps. That is, I say: just go with your intuition, what does
your gut say, just give me a reaction. And that helps. A whole bunch of people
do that; you get the distribution, take the middle of the distribution by a
fit, and it turns out that's useful.
Agnes:
I feel like in many contexts, you have been resistant to intuition and the
role of intuition in argument. But I guess this is one where you're in favor.
Robin:
Well, in some sense, a lot of rigor is the trick of taking unrigorous things
and combining them in clever ways to make them rigorous. And so in some sense,
near every sloppy argument is a more careful argument. Part of what we learn
as intellectuals is how to make that mapping: to take a sloppy argument and
say, let's pick this up, let's be clearer and more careful here, and let's
find a more careful, more robust argument near the sloppy one. That's what you
and I have learned to do over decades.
Agnes:
OK, we better stop there. We're out of time.
Robin:
All right.
Agnes:
Nice talking.