Is Life Worth Living? with William Eliot
Robin:
Hello, Agnes.
Agnes:
Hi, Robin.
Robin:
Today, we have a special guest.
William:
Hi, it's me.
Robin:
And who are you?
William:
I am William Eliot. I have been helping Robin and Agnes with the podcast since
the start. We have been improving sound quality and helping with the
distribution as well.
Robin:
And we are very grateful, so grateful that you are visiting with us, and we have decided to let you choose the topic of this podcast.
William:
Yeah, very generously you've invited me out and I'm very happy to be here.
It's exciting to be in Washington, D. C., which is near where Robin works. And
Agnes is also visiting, so all three of us are in Agnes's Airbnb at the
moment.
Robin:
Well, it's exciting conditional on your existing, but that's going to be the question for us right now.
William:
This is the question of the day. So, I'm very interested in this whole concept of existence versus nonexistence. I think there are kind of weird biases here; the anthropic one is the most obvious one, the fact that you exist. But I wonder if there's a way of talking about existence, or of marking whether existence has value, outside of the fact that we are experiencers of existence.
Robin:
Would it help to comment on preferences over existence, which isn't quite the same as value, but to show us that people have choices about their future existence at least? So they can choose how long to live, and we might think they are composed of many parts, like a person in a moment is one of their parts and they are this combination of all their person moments. And some people choose to have more person moments, and they make tradeoffs, because in some of those person moments, they might think, “Oh, that's a pretty bad one. I guess I wouldn't want that to happen.” So like somebody about to suffer a very painful death might choose to make the death happen earlier just to cut off that whole painful part. Right? But somebody who expects a healthy life, they would regret dying earlier than they planned. Don't those count as preferences over existence?
William:
They seem to, just from the description that you gave right there. I think that, given that at every time stamp of existence we can choose whether to continue existing, roughly, because suicide is an option that's available to everybody on earth, that implies that you therefore choose to continue to exist. So it is quite a common thing that if you think existence is probably negative, then you might say to that person, “Why not just …”
Robin:
Why keep going?
William:
Or the alternative. [Laughs]
Robin:
Why not stop?
William:
Yeah, why not stop? Right? So I think it's easy to say that but it's kind of a
weird thing which is it's almost like once you're on the boat, it's harder to
get off than it is to – it's not as abstract. It's just a binary decision to
continue living or not, at least in my experience personally it is.
Robin:
But since we all expect eventually to stop then people make choices about when
to stop.
William:
Yeah.
Robin:
So they stop sooner or later. And doesn't that suggest that if they try to stop later, they want to continue?
Agnes:
Can I respond to it? I just want to develop Will's thought, which is that it's just very well-known that there are these biases, like status quo bias or the endowment effect or whatever you want to call it, where once you have something, you don't want to let it go.
Robin:
Yeah.
Agnes:
And I think that our attachment to life is like the mega status quo bias. Hold
on. And so, I think that just really often actually, people will cling to
something even something that is worse for them and it makes them miserable
because they just can't imagine letting it go. And it could be a marriage. It
could be a job. It could be sort of like the idea of having to stay in an academic job or whatever. And I think it could very well be existence. And so, we can't go merely on preferences. And to your point, Robin, that we all know we are
going to die eventually, I don't think that's so obvious. That is, it's not so
obvious people have accepted that. I think we are in a massive amount of
denial about the fact that we are going to die and we don't really accept it
and we don't allow that thought to enter into our conscious existence. And
when it does, we are just filled with blank mute terror. And so, I don't think
we can use that to say, “Well, we are going to die eventually. We just want it
to happen later rather than sooner.” I think, in fact, we are in denial about it, and we cling in a not-rational way to the life we've always had, and none of that is an obvious indication that it's better.
Robin:
I think people throw around these bias arguments way too freely. [Laughs] As the author of a blog called Overcoming Bias, I can say there are many psychological review articles suggesting that this endowment effect goes both ways. I think there isn't an average effect one way or the other, so in some cases people may be overly attached, and in other cases they're under-attached. I also don't think it's obvious that people being in terror suggests they don't really know whether they want to die. The most obvious interpretation is that being in terror of dying suggests they don't want to die. Isn't that just the most obvious, straightforward interpretation?
Agnes:
Can I read you something from – I read this to you before, Robin, but I'm
really happy that I get to read it to you again. This is from an essay by
Schopenhauer that's called On the Doctrine of the Suffering of the World. And
so I'm just going to read two quotes first. “If suffering is not the first and
immediate object of our life then our existence is the most inexpedient and
inappropriate thing in the world.” Yeah, that first one. The second quote is,
“Whoever wants to test the assertion that pleasure in the world outweighs the
pain or at any rate, that the two balance each other should compare the
feelings of an animal that is devouring another with those of that other.” So
Schopenhauer thinks it's just an obvious self-evident fact that life is on
balance, incredibly painful like for human beings, for animals. It's just
filled with pain and suffering and occasional tiny amounts of pleasure. But
the dominant experience of existence is one of pain and suffering. That's his
view.
William:
Yeah. I mean, going off of this, there is this problem. I know you say you're reluctant to call it a bias, but I think this is kind of like a game theory situation where you're in the game, and by being in the game, as in you're existing, you do have this sense that it's not a decision that someone outside of the game is making. It's someone inside of the game. And that, I think, is subject to biases such as the ones that Agnes was just talking about, the status quo bias and so on and so forth, because it's not as if we are talking about a question of, “I'm God. Do I create William or not?” We are talking about, “OK, William is being created. Does he continue existing or not?” So I guess there's kind of a question one here: before the life is created, should it be made or not? And then there's question two, which relates to the Schopenhauer quote and is a general utilitarian argument: should you end life prematurely, or how should you deal with the fact that there is suffering in the world? What's the best solution?
Agnes:
And with respect to the – sorry, I forgot what I was going to say. Robin, go
ahead.
Robin:
So the relative fractions of suffering and pleasure are just not directly relevant to the question we are talking about. The question is: do you want to keep going?
Agnes:
Is it worth living?
Robin:
Right. And you could want to keep going even if your life is full of pain. That's a completely coherent stance, because the alternative isn't pleasure in this case. It's not existing at all, and you could just want to keep going.
And I still would say, the vast majority of people seem to want to keep going.
That is, in most situations, that's what they choose. It seems like the
straightforward interpretation is that they would rather have more existence.
Agnes:
Oh, I remember what I wanted to say. One way to test the idea, like suppose you were God, would be, instead of looking at whether people want to continue their lives, where I do think they are likely to be subject to biases. It's similar to: do they continue romantic relationships that make them miserable? Right? They just can't imagine getting out of it.
Robin:
And they break them up too soon too.
Agnes:
You can measure it as how many – how much are people choosing to create life,
right?
Robin:
Yeah.
Agnes:
Like, are people choosing to have kids or not? And you might think, if people choose to have fewer and fewer kids, that's just a sign that people value life less than they used to when they had more kids. That would be one way to measure it. That would be better because you are talking about an existence that you are not already implicated in.
Robin:
So there are two questions that are closely related. One is, do you want to
keep going? And the second is, do you want to create more life?
William:
Exactly.
Robin:
That is, do you want to have fertility? Do you want to have children
basically? They're related in the sense that they're both choices to create more existence, but one choice is much closer to you, in the sense that you are continuing you. Say, right now, you have a much better sense of what that life would be like.
Agnes:
But you've also got the bias in the one case …
Robin:
I don't – you haven't established that this bias exists. You just have claimed
that this bias exists. I don't actually see the evidence for it.
William:
I think that we have to remember that we are – I don't know if there's a word
for this, but it's like a biological bias, which is evolution has primed us to
“keep going.”
Robin:
That doesn't make it a bias.
William:
It does if we are thinking about a philosophical plane where we are just
determining whether existence is a good thing. We are trying to …
Robin:
Why think of evolution's attitude as presumptively wrong? I mean why not think
evolution thinks you should exist and that's a good reason to think you should
exist?
William:
Because I don't know – because I personally don't go to evolution for guidance on what my morality is.
Robin:
Why not?
Agnes:
This is another argument.
William:
I guess this is kind of the underlying – this is the seed inside the apple
maybe is, does evolution provide morals and can you use the fact that it's
constantly driving for survival as proof that survival is a good thing
regardless of say, pain and pleasure. And I'm undecided. I'm not a philosophy
student.
Robin:
I would say, evolution is a good guide for you to guess all aspects of your
mind that you aren't directly able to inspect. That is, evolution designed
this mind of yours. It's a big complicated object and you can't see all the
parts of it but you could use evolution to draw inferences about what's likely
to be in the parts you can't see. And yes, evolution has likely designed your
mind to make you want to survive and persist. Therefore, you probably want to
survive and persist exactly because evolution probably put that there.
William:
And you don't think that's a bias.
Robin:
No.
William:
Why is that any different to say, status quo bias?
Robin:
Pause for a moment. I didn't accept there was a status quo bias if you recall.
William:
I see.
Robin:
Sometimes there's a status quo – I mean, sometimes people are biased in that direction, sometimes in the other. I do not think the literature gives an overall bias.
Agnes:
So can I give an example? Suppose there's a kind of animal – let's say it's a koala. It's so cute. And that animal, if left to its own devices in its natural environment, is going to go extinct, which we could represent in Robin's terminology as: evolution wants the animal to go extinct. It has competitors, whatever. Now, according to you, the moral thing to do then is to let it go extinct, because we are following through on evolution's intentions.
Robin:
That wasn't the structure of my argument.
Agnes:
But it seems like that was.
Robin:
No.
Agnes:
That it's a parallel to your own case. Like evolution wants me to keep living
so I should. Evolution wants the koala to die.
Robin:
So, I mean part of your question is what do I want? OK? And what you want is
in part indicated by all the choices you've ever made. Those are indications of
what you want. Part of what you want is also indicated in part by the thoughts
in your head that occur when you imagine making a choice one way or another.
Is it terror or joy?
William:
Yup.
Robin:
But another thing that is a predictor of what you want is the fact – the
process that produced you, which is evolution. So the koala, we would expect
evolution to produce the koala to try to survive. That would be our prediction
about koalas. But we do not predict that if a species is going extinct that
evolution has encoded in that species the desire to go extinct. That is just not a plausible evolutionary story.
Agnes:
Right. But it's like encoded in the other species the desire to take up a
niche …
Robin:
Right. So if we are trying to predict the koala competitors' minds then that's
what we would predict about those minds.
Agnes:
But we would also predict about the whole – like that whole part of the world.
Robin:
But the challenge here was to predict what one person wants, i.e., Will, and I
would say evolution is a guide to help us predict what does Will want in
the cases where Will can't just look directly and know what he wants.
William:
It just seems to me that there are so many occasions where what we want is not parallel with evolution. So to say that evolution is going to tell us what we want with regards to our existence …
Robin:
Again, I gave you a bunch of different things, all of which were indicators.
So you are going to try to combine all the weak indicators you can to make
your best guess of what you want. But it's one of the indicators that
certainly – it's correlated positively. It's not correlated perfectly.
William:
Sure. OK. So can I ask you? Why do you – I think I can put in the answer but
why do you care about – why do you think that evolution does provide a good
guide for what decisions to make?
Robin:
The idea would be: you want something. That's the key concept we are trying to
infer. That is, you are about to make a choice and one of the choices somehow
corresponds better to what you want. And you've made many choices in the past.
So it's the same concept basically, reapplied to a new context: you made a choice before about what you wanted. Now, all the times you make choices, you make them in a noisy environment where there's a lot that can go wrong, so you don't perfectly choose well. Right? So one of our explanations for your choices is noise. We expect noise goes into your choices, but we are trying to disentangle what you really want from all the other noise.
again, you are a machine created by evolution, a complicated one. And if we
can understand the design priorities that went into constructing you, that can
give us a hint about the kind of decisions you were designed to make and
therefore, about the kind of things you would want. And that's one useful
source of information about what you would want in addition to again, the
previous choices you've made, the choices of other people who are like you,
the feelings you get in your head as you think about making the choice. Those
are all clues about what you want and you want to combine all the clues you
can to make your best guess about what you want.
Agnes:
I think I understand why you are not scared about the AIs, like the singularity, whatever, AI becoming conscious, because if the AI were to reason in the way that you just reasoned …
William:
Exactly.
Agnes:
… then it would just do what we want. But the reason people are scared …
Robin:
Well, it would adjust to it.
Agnes:
The reason people are scared is they don't know whether AI is going to start
thinking for itself. And that's what all of us are doing. We are thinking for
ourselves. We are like, “I don't care what I was designed to do. I now own
myself and I am now able to reorient myself towards the good on my own.”
Robin:
You can say those words but you are in fact still executing the program that
evolution built inside of you.
Agnes:
I mean I'm going to do that no matter what. I don't have to worry about what
evolution wanted. It will just work.
Robin:
Right. But you might say, “You want to produce choices that you would not
regret later.” And evolution can be a guide to what you won't regret because
evolution produced your regret.
Agnes:
Sex produces many regrets.
Robin:
Yeah.
Agnes:
And that's what evolution designed us to do. So it doesn't seem true that
following evolution doesn't lead to regret.
Robin:
Here's a whole different way to think about the subject. So often in society,
we think about all of us together trying to sort of have shared
responsibilities. That is, if we have the shared infrastructure of say, a
government, electricity or whatever, then we should each do our part to make
it continue and support it, right? That we have some social responsibility to
allow society to continue, right? And that's a responsibility a bit above just
what you want. You should help society continue to exist. Now, it's not an
overwhelming consideration, right? If you have enough considerations – say, for some reason you need to crash your car into a power station, and that will deprive us all of power or something, right? We want you not to do that. We have a bit of a responsibility to avoid that, but not an overwhelming one, right? Well, think then of all the generations in sequence that go all the way through time, right? The ball gets handed on. Generation passes to generation. And if you drop the ball, all the generations after you lose.
You're part of this collective effort where you shouldn't …
Agnes:
Let the team down.
Robin:
Let the team down. Right. So there is this image: sometimes somebody will be out drowning in the ocean, and they will make a human chain of people, with somebody on the shore holding on to something, and one by one they're all holding on to each other until they stretch out to the person in the ocean, and they grab their hand and then pull them in, right? And
if you join this human chain, you have a bit of responsibility not to let go
of the people next to you. Not only might you be in trouble but like the whole
rest of the people out toward the ocean, they might get swept out. And I would
say, you have a bit of a responsibility as the result of all the generations
that came before you, all of whom continued to exist and passed the ball on.
You should …
Agnes:
You should have kids. But you can commit suicide after that.
Robin:
OK.
Agnes:
Then you've done your part.
Robin:
But I am making – that part of it is still, I would say, you have a bit of a
responsibility there.
William:
Okay, I have doubts about this as well. I just don't know whether a life that isn't going to exist, because you chose not to have a child, has any value, moral value, to you today. Because no one is affected by it not existing, except for the rest of society, which may have required its presence, that child's presence.
Robin:
I think you're comparing two states of the world: one state where the creatures exist and another state where they don't. You're anchoring on the state where they don't and saying, “That other state is hypothetical; I can ignore it.” But I can anchor on the state where they exist, say your other state is hypothetical, and say this is the default state. And with respect to this default state, you're killing them.
William:
So if I say that society is going to continue and that these humans will exist, then I think it's moral to act in ways which are going to positively influence their lives. So don't drive into the power station. Focus on things which might seem problematic to future existence, such as AGI safety, for example, or nuclear risk, maybe even something like climate change. I understand focusing
on those when you know that those lives are going to exist but you could also
take a different strategy, couldn't you? You could say, “No one is to have any
more children ever, and we are going to fade out, fizzle away.” And I wonder if
the latter strategy of fizzling away is more or less positive in terms of
moral value than the former of letting life continue. And when we make that
decision, we've kind of got to forget what's happening right now because that
can change in a matter of, I don't know, 150 years.
Robin:
How about thinking of it this way? In terms of cosmology, we are alone, say, probably for the nearest million galaxies. And if you really like the existence of dead stuff, with no people and no pain, look at all of the galaxies. There's just this one planet out of a million galaxies that has life. Why not just allow that one experiment to continue there; must everything be dead? Why not just allow one of them?
Agnes:
But you think there are aliens.
Robin:
I think – there's one per million galaxies I would say.
Agnes:
OK.
Robin:
Yes, there are aliens out there but they each are alone per million galaxies.
And so, within the million galaxies of each alien species, almost all of it is
dead and empty and there's just this one tiny piece of life. And you are
saying, “Shouldn't that be dead too?” And I'm going, “Come on. Let there be
some life somewhere.” If there is a portfolio benefit here, maybe death is
good, maybe life is good. Let there be a little bit of life.
Agnes:
I think that with respect to the – and so I'm sort of with Will in saying that there's something weird about the idea of moral obligations to nonexistent people. And like you said, “Well, Will is anchoring on the case in which they don't exist,” and you want to anchor on the case in which they do exist. But in the case in which they do exist, you can't make them not exist, if we are in that state. So it seems to me, we are neither assuming …
Robin:
This is true in the other case too.
Agnes:
Right. We are neither assuming that they do nor that they don't exist, right?
Robin:
That's what I mean by anchoring. We are focusing on one case and comparing the other to it as the reference point.
Agnes:
Right. But I think in so far as we are trying to decide what to do, we
actually have to view both of the life possibilities.
William:
Agreed.
Agnes:
And I think if there is going to be a rule that says we have to go this way
rather than this way, I don't think it can be because you have a moral
obligation to those nonexistent people because I don't think you can have a
moral obligation to not yet existing people even if they are potentially
existing. I think you can have an obligation to them conditional on their
existence. So supposing we know there will be people in a hundred years or in
a thousand years, we have obligations to keep the environment in a certain
way. But the obligation to the nonexistent people who would only exist
conditionally on our decision to make them exist is just not obvious that we
can have an obligation to them to make them exist. It might still be good to
make them exist for other reasons than having an obligation to them. We might
just have an obligation to make the world as good as possible. That might be
the best world. And so, we have an obligation to bring it about. But I don't –
one thing I don't think we have is an obligation to them to bring them about.
I don't think that makes any sense. So the utilitarian argument still works, just not a deontic argument. Like, you can just think about that world, and that's sort of what Robin was saying: “Look, the world in which earth has life in it is a better world than the world in which earth doesn't, and you
should bring better things about so you should make that come about.” The
question then is just whether that's true. But I don't think you have a duty
to – it's not like the human chain case where the people are already there and
they're going to die. And so, I think that that's really …
Robin:
I could say I was always destined to let go of the chain. I'm letting go of
the chain now. That was always the way it was going to be. They were always
going to drown, therefore I have no responsibility.
Agnes:
If it's true that you were always – I mean, you could say that would be lying, but suppose it's the truth that you couldn't control your arms and that a thing was going to strike you. Well then, in fact, you are not responsible. Right. So if we suppose that their death actually had nothing to do with any of your choices, which is what you're doing when you're anchoring on the possibility that they exist, then it no longer makes sense to speak about what you are morally obligated to do or not.
Robin:
I am more of the position that you'd be morally obligated to make the good happen if you can.
Agnes:
Right. So the question is, what is the structure of that obligation? Why?
There are two possible reasons. One is you have a duty to those people, which is what your holding-hands metaphor was getting at. I don't think that works. So I think we've got to dump that one.
Robin:
I disagree.
Agnes:
Well, I'd like an argument. Your argument was that in your example, it's
crucial to your example …
Robin:
You say I can't be obligated to a creature who doesn't exist yet, and I
disagree.
Agnes:
I can't – you can't be obligated to bring about that creature's existence.
Robin:
Why not?
Agnes:
Because their existence is conditional on that very decision.
Robin:
Yeah. So? That's a coherent obligation. You could say it's wrong but it's a
coherent obligation.
Agnes:
What I'm saying is you can't argue for the existence of such an obligation on the basis of an example where that example is you letting down people who are holding your hand and who already exist.
Robin:
Any example is motivating. It's not going to be an exact parallel.
Agnes:
OK. But this is the feature that we think is relevant. So you're not going to
persuade us by trading on an ambiguity exactly there.
Robin:
I would add to the example of history that not only are you the end of a long
chain of people who continued your lineage, you're the end of a long chain of people, many of whom eagerly wanted that lineage to continue. They weren't indifferent
to this lineage.
William:
And so, it's kind of like a historical debt. You are continuing to …
Robin:
Yeah. They created you in part. You owe them. Part of what they want from you
is that you continue.
Agnes:
I don't understand why you owe them. You didn't make a deal. There was no
contract.
Robin:
We can owe things without a contract that you made. You can be endowed with
contracts.
Agnes:
Suppose that somebody comes to your house and they renovate your house and make it look really nice. They sneak in and they renovate it and they are like, “You owe me now. You owe me money for this renovation.”
Robin:
But we are talking about your parents, not a random person that comes to your house. What do you owe your parents?
Agnes:
Oh, your parents are random people that you never met before you came into existence.
Robin:
I think you can owe your parents. They are not just some strangers.
Agnes:
Of course they are.
William:
But your parents had you for their sake, not for your sake.
Robin:
How do you know it's not for your sake?
William:
Because I think most of the times – well, because evolution would say like sex
is enjoyable, we have now created this child.
Robin:
I think of course parents can do things for their children's sake, part of
which is to create the children. I think one of the things parents most do for
their children's sake is to create the children.
Agnes:
I mean suppose your parents came to your house while you weren't there and
they renovated your whole house with very, very expensive materials and your
parents say, “OK, we did this whole thing and we are your parents, you owe us
money.” Do you think you owe them?
Robin:
I think your parents get a lot of deference when they say, “Look kid, we are
happy you exist. We are glad you are happy you exist. But there was a deal
here. There's something we wanted out of you.”
Agnes:
Did you have kids basically because – to satisfy your parents?
Robin:
Perhaps; I wasn't very consciously thinking about that. But that's not what we
are talking about here. We are talking about can your parents have claims on
you? Can they say, “Look, we did this nice thing for you, you should do this
nice thing for us.”
Agnes:
But one sign of whether they can have such a claim is: do any of us respond to
that? Do you have kids – I certainly don't have kids to satisfy my parents. It
didn't sound like you did either.
Robin:
Long ago, there were these people called Hare Krishnas, it was a religious
group, and they had this trick of standing at airports and handing people
flowers. And part of the trick was if somebody gives you something, you accept
the gift and then they ask for something. You feel obligated to give in
return. So they were asking for donations in return for their flowers.
Agnes:
Right.
Robin:
And this is a common marketing trick or strategy, which I do think reflects an actual human norm. If somebody gives you a gift of value and then asks for something from you, you have a bit of an obligation to try to help them out. Not infinite, but you owe them …
William:
You can't refuse the gift because you can't at the point of birth say, “No,
thanks.”
Robin:
No. Nevertheless, the norm still applies.
Agnes:
I mean it doesn't seem to me like you thought it applied in my example of the
people sneaking into your house even when those people were your parents. So
in many cases then, it doesn't apply.
Robin:
I would say, if somebody sneaks into your house and actually does something that adds value, you could be pissed they didn't ask for your permission, but you should honestly ask: did they add value? And how much is that worth to you? And if they actually did something nice for you, then you should acknowledge it and maybe credit them some compensation even. Yes. [Laughs] They did something nice for you.
Agnes:
You know, there's a philosopher who I can't – was it Seana Shiffrin? I think it might be her, who thinks that you wrong your children when you bring them into existence because you didn't ask for their permission to exist. And there might be – I think there are in fact lots of bounds on why you can't commit suicide, like its being illegal, or you might think it's immoral for a variety of reasons. You might think you're not allowed to commit suicide, and so you've trapped your kids in existence, and that's an immoral thing to do, and it's wrong to have children for that reason, because you didn't get their permission.
Robin:
In our last podcast, we discussed this common observation that consent is
actually not that central to our concept of many kinds of social processes.
Consent is more how we manage our conflicts.
Agnes:
Yeah.
Robin:
Consent is an important central construct for figuring out how to deal with the fact that we each see value differently and achieve different values in things. But the thing itself – consent isn't that central to it.
So I might invoke that observation here and say, “Look, the fact that you owe
your parents doesn't have that much to do with consent.” Parenting is the kind of process where they really just can't ask your permission. Sorry. They are going to have to make a choice, and we'll have to deal with that. I mean, it would be nice if you could ask permission. So actually, as a side comment, I have this book called The Age of Em: Work, Love, and Life when Robots Rule the Earth, where emulations create other emulations. And for emulations, they can ask
permission before they create an emulation because they are making a copy. And
so you could go to someone and say, “Can we make a copy of you? We will have
this life over here. Will that be OK?” And you could say yes and then the copy
is made and then they have this life. So for emulations, we can ask them
before we create them, “Would that be OK?” But that's just not feasible for
humans today.
William:
So I totally agree. That would be a really lovely solution to a lot of
problems if you could ask your child before it is born, “Do you want to
exist?” And the fact that it could be a thing with the ems is a good thing.
But I mean I'm worried about something, which is that the em isn't making –
the child em in its unborn state might not be making a decision completely by
itself and it might have the same – to come back to what we are talking about
earlier, it's like the same, sorry to use the word biases, or the same kind of
impacts, the same kind of influences in its decision-making which might not
mean it's making an accurate choice. So let's just say, briefly, for a moment,
that evolution hasn't made us into philosophers. It has, but let's just say it
hasn't for a moment. And let's say we were deciding whether to make a brand new
planet with a brand new species or not. And let's also say – I mean I think
this is why I don't know, does evolution exist without life? I don't know if
it does or doesn't. It seems to me it probably does, but it's just kind of one of
those undiscovered laws maybe that these …
Robin:
We might say selection created life, so selection can't have created life unless
there was a moment when there was selection without life.
William:
Yeah, exactly. So that's interesting. I guess what I'm saying is I'm worried
that if we are making decisions about whether existence is good or bad from
the point of existence, as in we are existing. If only we could be like the
God that's deciding, “Should I make this pattern, this planet, this universe,
or not make it?” Then we wouldn't be making these arguments like, “Oh,
evolution has decided that it's imperative to continue existing.” Because you
choose – if you are God, you would choose evolution. So the fact that we can't
choose that doesn't seem like a strong reason why we should continue to follow
it. It's something that we haven't chosen. It's something that we've been
forced into.
Robin:
What other basis could you possibly have for figuring out what you want other
than what you are? That is, either what you want comes from inside of you or
comes from outside of you, if you are going to reject inside of you, what
outside will you point to? Where in the universe will you look to find out
what you want? I would say most people think of what you want is found inside
you. But inside you was created by evolution. All those intricate structures
you might look at and reflect on inside you to figure out what you want. Those
are all created by evolution.
William:
Yes. That's very convincing to me as in like where else am I going to find the
answer because it's not as if there is non-life forms which are capable – I
mean I don't even know how to think of this.
Robin:
I mean the moon is dead. Well, where on the moon will you find the answer to
what you want?
William:
And the kind of DNA of the moon is not going to be the answer to this
question, I agree with you. And like if you thought you'd made some kind of
super maxi intelligent AI which you could ask questions about the kind of
source code of the universe, even that would still be maybe subject to the
same concerns about evolution because …
Robin:
The AI would have come from some sort of process.
William:
Exactly. Where do you go for this answer?
Agnes:
I imagine human beings trying to ask themselves this question before there was
language. OK? They can't figure out what they want. Obviously, they are not
very sophisticated in how they think. And you might have thought like, well,
first thing, if you want to figure out what's going to make you happy, what's
going to make your life fulfilled and meaningful, first thing you have to do
is find a way to like interact with the others of your kind in certain ways.
As we learn to do that then we also learn to like connect with each other in
ways that are deeper than our original pre-linguistic modes of connecting with
each other. And evolution, that mostly gets us – I mean maybe up until now but
probably not.
Robin:
No, I would disagree.
Agnes:
Well, this is a question of cultural evolution. But – and like I guess I think
that it's not right to say that our understanding of who we are is something
like, “Well, we will be satisfied by what corresponds to our antecedent
programming.” I think that we can become different and we can learn and
discover things that are of real meaning and value that we didn't grasp at the
beginning. They're not just the kind of logical upshot of where we started.
And I think that – so there's a philosopher named Bernard Williams and he has
this distinction between things that I desire conditional on my existence, which is what
Robin reminded me of when he started talking about how you want things
conditional on your existence, so things that I desire conditional on my
existence, and then things that would give me reasons to exist. And so like
conditional on the fact that I'm going to keep existing, I want food so I
don't feel the pain of hunger. I want the room at a reasonable temperature so that
I'm not sweating. But those are not reasons to exist. They are just things I
want given that I do exist, right? And then he has a separate thing of like,
what could actually make – give you like reason to exist or make your life
worth living, and he calls those things ground projects, right? There are
things where there is something important that you are trying to bring about
with your life and it could be parenting, it could be a change in the world,
it could be a way that you relate to other people. There is like some source
of meaning and value that means you have a reason for existing. And I think
most people are looking for that in their lives. They are looking for the
things that would give their life a reason for existing. And in some sense, up
until that point, someone is taking it on faith, and that's actually sort
of how I'd put it rather than status quo bias: OK, I'm going to keep
existing; I'm just going to assume this is going somewhere. But we are
actually looking and searching. So I don't think that it has to be the case
that we have to look to our coding to say, “I'll find what I want by looking
at what the process made me as.” I think it's like, “No, no, there's a process
of inquiry that I'm engaged in. I'm engaged in it probably by talking to other
people and by looking into what other people have valued over life and by
trying out sources of value.” And what I'm searching for in that process are
these ground projects that would give my life meaning. Williams himself
thought that you could find those things and they could engage you but like
maybe for one or two or three hundred years. He thought we weren't the sorts of
creatures that could handle immortality. We wouldn't find enough ground
projects. We would just get bored of them. So he thought like you could imagine life
extension and having ground projects for a certain amount of time but not for
like maybe more than like 300 years.
Robin:
So I really want to challenge this picture you painted that evolution has sort
of created humans up until the moment they started to talk. And then after
that, evolution turned off and everything since has been some sort of
philosophical conversation. [Laughs]
William:
Yes. So I really love this distinction between reasons to exist and things
that are morally good while you exist in a sense. But again, this does have to
– I feel like this has a weakness that Robin is talking about, which is that
all those things that we might describe as ground projects, good things,
reasons to exist were also designed by evolution.
Robin:
Even more so that this whole process of discussing things with people and
thinking about things in your head, evolution set up that process and has been
selecting for that process for a long time. Not just biological but cultural
evolution. That is, you are the result of biological and cultural evolution
honing that whole process of thinking and discussing things and selecting the
versions of them that would survive and reproduce. So you should predict that
the kind of thoughts you will have and the kind of discussions you will have
will in fact be ones that promote reproduction, at least in the ancestral
environments. That is, it's not just a random relationship or a
historical relationship. They are directly related in that way.
Agnes:
But then they often don't promote reproduction, so like fertility has been
knocked down.
Robin:
But evolution has this really hard task in this complicated world to produce a
tendency to reproduce. We shouldn't expect it to do it in every case exactly,
right? That's way too much to expect of it.
William:
But the complicated world was made by evolution.
Robin:
Yes. Well, it was influenced by. It wasn't entirely made but it was greatly
influenced by evolution.
William:
Sure. Sorry, I had a point. But again, I've forgotten it, like Agnes. [Laughs]
Robin:
So you ask, what is the basis of all these bases that you will use to argue
for wanting to exist, or what to do conditional on existence, or all
of these projects you might accept or not accept, all of the machinery that
you will use to do all that arguing and discussing, that was all produced by
this evolutionary process which was blind to the consequences. So it's going
to be a noisy thing and have a lot of randomness and have a lot of unintended
consequences from its point of view, but nevertheless, that's the process that
produced you.
Agnes:
But not only evolution, right? You were just saying other things like laws of
physics and stuff.
Robin:
But those are constant and will never change, right? So …
Agnes:
Evolution is also constant in that sense that it's just – the basic structure
was constant maybe.
Robin:
But if we want to predict why you would do one thing versus another thing, we
can imagine, we need to be imagining counterfactual variations that are
possible. So we want to know what we would choose. We are trying to imagine
choice A versus choice B. So we have to be thinking about distinctions between
A and B. So things that are constant between A and B would not be very
informative for predicting the choice between A and B, or advising
a choice between A and B. So if A and B are both consistent with physics, then
that would not help with that choice.
William:
I remember what I was going to say. It's just maybe an interesting remark
which is, imagine if actually evolution is ultimately a self-killing thing and
over time, it evolves to create philosophies and then ultimately whatever
instantiation of evolution will ultimately decide actually it's better for
nothing to exist, and so it kills itself.
Robin:
So I mean a standard observation, it's a bit trite but I guess there's some
truth to it. The story is most animals don't understand death. They don't know
that death will happen to them. They just know how to do various things. And
even if they notice a death in front of them, they don't make the connection to
themselves in the future. Right? And so, they are not terrified of death.
Humans are these creatures who can think about death. And so the story was,
well, it was this big problem. As soon as evolution created human minds that
could think about death like they got obsessed with death and then that got in
the way of those creatures being productive, etc. Or they could see the cosmos
and see they are an insignificant tiny fraction of it. So the story is, there
was this long period of evolution where evolution had to deal with the fact that
giving human minds these broad capacities would create all these dysfunctional
scenarios where they would get obsessed and think about the wrong stuff. But
we are results of a long period of selection after that. And so we are
constructed exactly to avoid those problems. Our minds are able to set aside
our fear of death or come to terms with our small part in the cosmos. Right?
I mean we haven't perfectly maybe done that but that's what you should expect
your mind to be, the sort of mind that can think about these big things but
still go on with doing stuff.
Agnes:
I want to go back to Will's question, which I feel you haven't fully faced up
to; it's a really great question. So suppose we make the following
empirical discovery about evolution. We don't know everything about evolution,
right? Our state of knowledge is incomplete; I mean we are still working on
it. And there could be big transformations in our knowledge. And imagine a
hypothetical scenario where we learn that sort of – that evolution is like
programmed with a death drive, right? And it is designed for us all to go
extinct, and this is the way life works throughout the galaxy: it's
supposed to go for a little bit and then stop. And it kind of shuts itself
off. Maybe it shuts itself off by having people develop nuclear weapons or
whatever. That's just part of the evolutionary programming. It's designed to
go out in this way. If we were to discover this, would you think, OK, well,
this should change everything, we should no longer aim for fertility. We
should just – maybe we should all kill ourselves because that's what evolution
wants and it will make us really happy because we now know this is our
evolutionary programming.
Robin:
So distinguish two very different hypotheses here, right? One is that
evolution will create creatures who try to survive but nevertheless some net
effect of the whole process will make it all end and die. A second scenario is
that evolution would create creatures who want to die. That's a much harder
scenario to believe. That is, evolution would select for creatures who want to
die and that's how evolution would die. The first scenario is more believable.
That is, evolution would create creatures who want to survive and nevertheless
indirectly as a net effect of everything, it would all die. But under the
theory that evolution will create people – creatures who want to survive, who
try to survive and it will fail then the prediction is, you will also want to
try to survive, and therefore it doesn't predict that you should expect to
find inside yourself a desire to die.
Agnes:
But suppose we are in the second case, I don't think it's so crazy and
implausible that there is inside of, let's say human creatures, a certain kind
of impulse towards death. And that in effect, let's say, it's encoded in us
this desire to die, but the way it's going to work, the way we will eventually
kill each other is that the way that we evolved is that that desire to die
grows bigger over time as a proportion of our total desires. This is part of
how the coding works. And so over time, it becomes more and more important to
us. And maybe the first way you see that is lower fertility. Those are the
beginning of the new humans, the humans who want to die. And so, this is how
we see the trend, right? And we see that what our future is, is going to be
these humans who more and more want to die. Would you say we should reason from
that to, well, we don't want to have a future, we don't want to have
descendants, and we don't – we should commit suicide?
Robin:
It's just not coherent with what an evolutionary process is. You are imagining
some other process and you are giving it the name evolution because this is
just not evolution.
Agnes:
Right. So like maybe let's just agree that this is not consistent with what we
take ourselves to know about evolution.
Robin:
Let's just take a religious analog, right? God made my mind and God gave me my
desires and God created this world of life, and then God, for whatever purposes,
wanted us all to die. And God encoded in our minds the desire to die.
William:
Yup.
Robin:
And that's slowly getting realized and then we are all going on the track to
die because we are trying to make ourselves die because we have found that out. So
that's – I'm kind of setting aside evolution …
Agnes:
Yeah, yeah. Fair enough.
Robin:
But we can work with a hypothetical there, right?
Agnes:
OK.
Robin:
And so now, what you should predict is, you want to die. That's the literal
prediction of this theory. Deep inside you, the deepest, most real structures,
the most persistent and most reliable patterned structures are in fact patterns
that say that you want to die. That is your deepest desire.
Agnes:
And do you think once you learned this, you should try to die?
Robin:
It would tell you that if you choose that, you would not regret it so much. It
will be consistent with everything else …
Agnes:
I mean you will be dead.
Robin:
Yes, but nevertheless, it would be completely consistent with your nature and
your desires that you would make this choice. If you want to recommend someone
to make a choice on the basis of do they want it, this is – that's the
recommendation here.
William:
OK. So earlier, we are saying – you are saying that because it's evolution
that gives us a drive to survive, therefore it's a good thing to survive and
we should continue to …
Robin:
Therefore, it's what you want.
William:
OK. Therefore, it's what you want. Therefore, it's what human values might
determine …
Robin:
If you reflected on all the complications and tried to come together
with a picture of what you want, that is what you want.
William:
So then that must – but just the principle it must extend so that evolution
wants us to die, that would be …
Robin:
Or if God wanted you to die. If the process that made you …
William:
If the process made us …
Robin:
… made you to want to die, if we can predict that, then we predict that you
want to die.
William:
Then we would say that it's like – I mean in the evolution frame of mind, you
would say, “OK, this is like a moral decision to die.” And …
Robin:
So for morality, it's separate. So maybe it's time to like make this
distinction. But I get the most leverage out of “what you want.” That is, if
we just look at all the choices you've ever made and we say, in each choice,
you try to choose what you want and then you're about to make another choice.
And in this choice, the question might be what you want. We could – there's a
different question we could ask. Is that the moral choice? And I would make a
distinction between what you want and what is the moral choice. I think most
people do recognize that distinction. Often, there's a conflict between what
they want and what the moral choice is. And so I think the better way to say
it is that we all want to be moral in part. We just also want other things and so
then sometimes there's a conflict between the part of our wants that is to be
moral and the other parts of our wants.
Agnes:
But the part that wants to be moral, is that also determined by evolution?
Robin:
Yes.
Agnes:
So, in this story …
Robin:
In actual fact – no, in the counterfactual, maybe God
put it there. But in actuality, it's …
Agnes:
Right. Right. But like in this world where we determine that overall, it's God
that put all these wants into you. He also puts the moral ones into you.
Robin:
Right. And the things that conflict against them.
Agnes:
Right. So how do you determine, in the world that we are actually in, where,
say, evolution put in us some desires to continue to exist and to reproduce. But
that leaves open for you the question of what we are morally to do. And so,
how do you determine that we morally ought to do what evolution wants us to
do?
Robin:
I didn't determine that.
Agnes:
But I mean you were giving an argument that it's like this is the helping
hands that are given, right?
Robin:
Right. That was showing there was a moral component. That wasn't saying that
that was the only thing …
William:
I guess there's the other complication such as like what you feel like in the
moment or what society might be telling you, those other aspects. So I guess
what Robin is saying is that evolution is a part of your decision.
Robin:
Morals are a part of your decision and evolution produced that part and many
other parts. Maybe not all parts.
William:
It seems kind of faulty to me to be making assessments about how to act in a
system using the laws which brought you about in the first place. There's some
kind of faulty paradox there.
Robin:
But what else is there? What other possible basis could there actually be?
William:
So I'd like to believe that there's something kind of like gravity or some
other law like the speed of light, etc., that is like the – whether life
should exist or not.
Robin:
Well, we can have those laws as the cause of these wants that evolution gave
you. So for example, one standard account of morals is that they are ways to
manage the peace in large social groups. That is, what we do is we agree on
some morals and we watch for people violating the moral rules and then we get
indignant if they violate them and that energizes us to punish them for the
violations, and that manages and keeps the group cooperative in certain ways.
That's a standard story for what morals are, where they came from, why they
have the features they do. Under that theory, it will – you will have similar
morals in a wide range of social creatures. It wouldn't just be humans happen
to have one particular set of morals. You would expect to see similar morals
in a very wide range of alien and other creatures if they function in that
same way to keep the peace in a social group. So that would be a way in which
they are like the speed of light, right? But the causal chain is then:
cooperation is useful in the universe, evolution needed you to
cooperate so evolution gave you the morals that produce cooperation and then
that's why you feel the moral obligation to be cooperative in certain ways.
Agnes:
So I mean I would try to sketch an alternative method of making decisions
that didn't require you to just look into your own wiring and then say, “OK,
this is what it makes me enjoy or something.” And like I think that the word
for like coming to have a new conception of how things are that's based not on
like studying what would be a satisfying conception but studying the way
things are is learning. And so you might think, look, value is something that
we are still learning. As a species, we are learning what's valuable. We don't
have a complete understanding of it and we make mistakes. And so we've done
things that are bad, objectively bad and wrong, and we try to correct and we
try to improve. And just like we are learning about physics or learning about
math, and it's not just learning about our brains, it's learning about the
structure of the world, the way the world really is, there is learning we have
to do about value. And we do that learning by talking to one another, by
building institutions, etc. And those values are not just like the things that
evolution – the constraints that evolution programmed into us to get along.
They are in fact – I mean that may also be true about them. I don't want to
deny that because of course, physics and math and whatever, those results are
also in some sense, our brains were programmed to have those thoughts. Sure.
But that is just to say that our programming is consistent with our coming
to know the way things are. And we don't do math by trying to figure out
which mathematical result evolution programmed
me to have, and I don't think we should do morality that way either. That is, I
think that the way we do moral inquiry is by trying to figure out what the
right answer is, which is the way we do mathematical inquiry too. And I think
that kind of inquiry is possible.
Robin:
So it's possible that different creatures want different things. It's also
possible that along some dimensions, they all want the same thing. For the
kinds of things they would all want the same thing of, then it makes sense to
inquire about that same thing together in a universal way because everybody
will learn the same thing because it's all the same. If different creatures
have different things they want then it's not enough to know something
generic. You have to know about you.
Agnes:
I think you can make the same point about believing. Like different creatures
believe different things and you have to know specifics about that creature to
figure out what it's going to believe. Well, no. I mean the creatures – if the
creatures are all trying to learn math or they are all trying to understand
physics, they are supposed to converge irrespective of whether one of them has
green tentacles and the other ones look like us.
Robin:
Right. That's exactly in fact, in decision theory, the distinction between
facts and values. Facts are about things outside yourself and values are about
you and we combine them together in expected utility to make choices. So yes,
we usually conceive of facts as things that are just true about the world.
Now, your belief about facts, in the standard story, will depend not just on the
world. It will depend on the information you have and your priors. And so, we
understand how individual beliefs could vary based on individual differences
and information and priors. But of course, we also see them as converging with
more information to the facts that they are about. I wanted to make the
observation that if somehow you could defy your evolutionary heritage, which
no doubt you can in individual cases, and just make a choice about what you
value and so maybe you don't discover what you value but just somehow choose
in a way that reaffirms and reinforces such that you become different than you
were and say, you say, choose to like pistachio. And evolution only gave this
potential to like pistachio but over time as you keep doing something that
reaffirms your taste for pistachio, you become someone who likes pistachio
more, right? You could say that's a way in which you can defy evolution,
right? Evolution just gave you a general – potential of a wide range of things
but you made a choice in your life and became someone who had this value,
right? So we could say literally that's a way in which evolution isn't the
only determinant of the values you have at the end of this process. Even in
that sort of situation, however, in a large world of creatures like you,
evolution continues. And the next generation or generation after that is much
better predicted by, say, evolution than by your taste for pistachio, in the
sense that evolution already encompasses the knowledge that different people
can play this game of changing their values, and that's what evolution is
taking into account when it varies things to try to get the outcome. And
that's all part of the game and all anticipated.
Agnes:
OK. But how is that relevant in my choice? I mean it's kind of like saying,
“Well, look, evolution doesn't need me.” And then I'm kind of at liberty. And
then I might as well just think about what is actually good since it's going
to do its thing.
Robin:
Within a modest degree of freedom. That's the …
Agnes:
I mean I only have this one life. That's my degree of freedom. And like I can
have faith that evolution is going to do its job, with me playing some small
part, in that and then I should just figure out what's good and not worry too
much about evolution.
Robin:
But in fact, thinking about evolution might help you get there faster
because it might tell you …
Agnes:
How to get where faster? Where evolution wants?
Robin:
No, no, to the pistachio thing, right? You might realize, look, no, it's
going to make me want to like pistachio.
William:
Kind of like a spin-off of the pistachio idea, I was thinking about like kind
of contradictions within what evolution wants. And let's say we knew kind of
in advance that humans were going to be killed by like a fleet of giant
squids, and they were going to evolve to rise out of the ocean and kill us. How
would humans react? If we knew about this, we would usually
develop anti-giant-squid weaponry. And then, kind of based off this pistachio
example, I was just thinking, actually, whatever humans choose to do,
whatever they choose to do is because evolution ultimately effectively
selected them to be able to do that, or to do that in the first place. So it's
kind of – it's like an inescapable thing, like constantly reflecting us.
Robin:
Right. But up to a point, that is. So there are many games in game theory, as
you know, where the optimal strategy is to flip a coin.
William:
Yup.
Robin:
That is, the equilibrium strategy is to flip the coin. Now, the equilibrium
doesn't say whether the coin comes up heads or tails. So in some sense, you are
the captain of your ship, the author of your life. You flip the coin and you
decided if it was heads or tails. So in some sense, evolution could have
constructed you with some randomness. I mean no doubt, it had to take into
account there is just going to be randomness it couldn't control. And so,
its overall strategy for you includes the fact that it can't predict a bunch
of details about what you will do, and then that's sort of a story about
free will in some sense.
William:
Yeah.
Robin:
What seems to you like free will is the randomness that the system that designed
you couldn't anticipate, that it just had to accept.
William:
Yeah, evolution definitely has this deterministic flavor where what happens
will be because in a sense, evolution willed it. So even if you think I'm
going to do activities which were …
Agnes:
Defying.
William:
No. What I'm saying like one which weren't defined by evolution. They were
still defined under it.
Agnes:
Yes. I mean defy.
William:
Oh, defy. Yeah, yeah, yeah. Sorry. Defy, yeah. They will still defy. They were
still defined by evolution.
Agnes:
Yeah, yeah, yeah.
William:
It's kind of a weird – like it's not the question of like free will or
deterministic …
Robin:
Right.
William:
It's kind of like a weird abstract version which still encapsulates it.
Robin:
So you can think of a boss with a bunch of employees. And the boss gives
certain orders to the employees, knowing full well 80% of people will follow
his orders and 20% will be pissed off and do the opposite. And that's exactly
what the boss expects. And he chose his orders with that fact in mind.
Agnes:
So like suppose that you are trying to decide whether or not to have a kid, and
you might think that people should have – I know you're a pro-natalist person
so you think people should have kids. But presumably, you don't think that
everyone in every situation should have kids, right?
Robin:
Right.
Agnes:
And we will think of the following situation. You know that your kid, for
their entire life, from birth to death, will be subject to a tyrannical
master, and it will be like this boss. I mean they basically will
just have to obey. They will never have a moment of free choice in their lives.
They will just have to do whatever this boss tells them. And sometimes it will
bring them very great suffering. And sometimes they will think they are
defying the boss. But actually, like the boss controls them so well that
that's just what their boss wanted. And you might think, “Should I create this
child,” understood in that way? And it's like far from clear to me
that the answer to that question is yes, but that's the situation you think
we are all in.
William:
Because the boss somehow deserves this.
Agnes:
Yeah, exactly.
William:
She is like what we would expect.
Agnes:
We are rooting for this weird boss.
William:
The tyrannical master.
Agnes:
It's not our parents because our parents didn't choose to create us. Evolution
made our parents to create us. It's evolution, our evil demon of a god.
William:
Exactly.
Agnes:
That we think we are all enslaved to. And we have to do its will.
Robin:
So do you guys remember when we started?
William:
Oh, I do.
Robin:
So let's wrap here.
Agnes:
You have the final word, Robin.
Robin:
So, I will again pull your mind back to the image, which is factually correct
that for the nearest million galaxies, it's all dead and empty.
William:
Yeah.
Robin:
Completely dead and empty except on this one planet where there is this …
Agnes:
A lot of slavery going on.
Robin:
… one species …
William:
A lot of astronomical suffering.
Robin:
… one species which has a lot of suffering, but also a lot of joy and insight
and community. And the question is, should this one planet and this one
species, this one set of species, should this be also like the rest of the
dead universe? Should we empty it out and kill it there so that everything can
be uniformly dead, or should there be at least one, maybe a million planets …
William:
We'll be expanding.
Robin:
Where it's not like that. Now, when the universe is half full of life and half
full of dead stuff, then talk to me about maybe we should save some dead stuff and
not fill it all with life. But at the moment, it seems to me, if you have at
all any uncertainty here, give a bit more to the life because it's way
unbalanced.
Agnes:
OK. I feel we can stop.
William:
Yeah. [Laughs]
Robin:
Right click. I'll stop the recording, right? Here?
William:
Yup.