We are met again to try to meet our minds. And this time, I am suggesting the topic of partiality. The framing here would be that in many kinds of decisions that we make personally or collectively, there’s some standard of efficiency or selfishness: what decisions would be most directly beneficial, lowest cost, et cetera, to us, or in the service of an organization. And relative to that standard, sometimes we are inclined to be partial, to favor something over something else because of our identity or allegiances. So we might be favoring our gender or race or age or nation or region or personality type, or we might have artistic allegiances toward one art genre or another, and all sorts of ideological things, including, say, open- versus closed-source software, capitalism versus socialism, blah, blah, blah. And many people get very animated and interested in trying to be partial. I’ve been talking about AI risk recently with a number of people, and partiality seems like a central issue there, because many people basically just want to be partial toward humans as they see it. They see AI as non-human, they see the risk that AI would outcompete humans, and they are very disturbed by the idea, because something they are partial to would be losing. They want us all to coordinate to be partial toward humans and reject, say, the greater efficiency gains of AI until we can find a way to get those gains without this loss. Anyway, that’s the thing that prompted me to think about partiality. It’s also, say, the basis of sexism and racism and things like that – at least if you thought people were racist not because they believed the other race was different but just because they wanted to be partial toward their own. And I think that was a substantial motivation in history: people just wanted to be partial toward some side.
So we can distinguish two different forms of self-understanding among the partial people. One of them is, “I just want to help my people, my crowd, because they’re mine,” in the way that most people would talk about saving their families or something. Another is, “I want to help my group because my group is objectively better than the other group; we are just more valuable, so we deserve more.” So one question about the AI case – “I want to be partial to humans over AIs” – is which of those two positions they are taking. Is it, “I just happen to be human. The AIs might even be more valuable for all I know, but I want my group to win”? Or is it, “No, humans have a special kind of value that is really important, and it’s with an eye to that that I want to be partial to them”?
Well, most of the people I talk to are relativist enough about this to say, “Well, it matters to me, and I’m not sure if it’s globally morally better or something.” They just know that they prefer it, in the same way that you might prefer the culture of the years when you were born, say, and might disapprove of the next generation’s changes to your culture.
But many people do slip into the absolute terms there. And I think it is relatively natural. Actually, in The Elephant in the Brain, we have a quote from Herodotus to the effect that basically everybody thinks their culture is better than all the other cultures. And that’s obviously said with an eye toward the thought that people have a bias in that regard, which should make you cautious about whether your personal sense that your stuff is just better really reliably tracks what’s actually better.
Right. But it seems to me there are two very different kinds of arguments here, right? One would be to examine whether we really are necessarily better than these AIs, where if you could show that we are not, then you would be addressing that argument. Versus: I just have a visceral preference – this is just what I prefer – and if you tell me it’s way more inefficient and has all kinds of other bad stuff, I’m just like, “Well, it’s what I prefer. That’s it. I’ll just keep repeating that.” That’s a very different kind of interlocutor, I guess. I’m not sure how we can argue with someone who has a brute preference.
I mean, in my experience, it seems to be more of a brute preference. But we can question that if we like. I think we have a norm in our society that for relatively mild differences, we are supposed to be ecumenical and encompassing, say. For example, if you’re in the humanities and I’m in the sciences and there’s some particular job out there that could go to the humanities or the sciences, instead of you or me just going “rah our side” and pushing for that job to go to one of those sides, we could say, “Well, let’s see who is actually better at that, and whichever side happens to be better at that job, I guess let them do that job.” We often embrace these sorts of more neutral norms about many of the differences we have when they are relatively mild. But when we get to things we really feel are more at the core of us, we are often just defensive and not very articulate about why we are better.
I mean, it seems to me that pretty often, we have norms about impartiality that introduce inefficiencies. So the demand to be impartial and to ensure that the system is impartial has a tendency, at least in my experience, to make systems less efficient.
Well, I think when we don’t trust people to be making choices for efficiency reasons, then we often impose some uniform approach that won’t allow efficiency considerations to bear. I agree.
I mean, it might just be more efficient for me to hire my family members
because, since we know each other and we have trust among us, if I hire them
as my colleagues and department administrators, the department will be more
efficient. But we think no, you shouldn’t do that because it’s partial.
And I think that’s usually motivated by efficiency considerations. That is, we usually think that when we allow people to be partial that way, they are going to use their partiality to make that choice, and not efficiency. And that’s why we ban those. So, when we suspect strong enough partiality, we often prevent discretion in order to prevent the partiality, even if preventing that discretion also prevents further efficiency.
So your thought is that the reason why you’re not allowed to discriminate in hiring or have nepotism and all of that is because it would be more inefficient?
Yes. So, for example, Western firms have much less nepotism than many other
nations’ firms. And basically, Western firms are more efficient and more
effective because of that. Many other nations have family-based firms where a
family runs the firm and then they tend to prefer family members and those
firms in fact don’t grow as large and can’t expand to as many areas and are
less efficient because of their nepotism. That’s sort of a standard result in
the business literature. And there’s a perception in the West that you have to be worried about nepotism – it’s a risk that if you let people have that discretion, they will not choose the best person for the job but their family members. That’s why there’s this norm, and often rules, against it.
I mean, I guess that makes sense to me, but it seems to me that there is also a self-standing impartiality that is independent of efficiency considerations, for things like hiring, for things like selecting someone for a prize, or all sorts of …
Well, if you select someone for a prize, there’s usually some norm of what the standard is for giving the prize. And in that case, efficiency would be applying that standard, whatever it is. If you are instead giving it to people you’re partial to, that would be compromising and corrupting that standard.
Right. I mean I guess it’s not clear to me that the usual norm has anything to
do with efficiency.
It does in my sense of the word efficiency. So maybe the word “efficiency” is misleading here. But in some sense, you might say, “We are being partial toward quality,” in whatever metric of quality the prize uses. And the prize might involve some partiality in its definition. For example, there might be a prize for some accomplishment in, I don’t know, air conditioning, and somebody might have done a lot for air conditioning from an engineering approach, but we are biased toward the sciences, and our prize wants to promote the science of air conditioning, so we give the prize to a scientist. That would be a way in which you could say our prize criteria were partial. But there are other contexts – in business, for example, profit is the most standard reference point for efficiency, i.e., what would at lower cost give more value to customers, inducing more revenue, with the difference being profit. And relative to that, you might have employees in a firm like, say, Google making business decisions where they are not just maximizing Google’s profit or their own work convenience, but are being partial toward some programming language or some political agenda or some geographic location. And it’s a common perception that people empowered with such decisions often make them with some degree of partiality.
Right. Right. I mean, I guess the way that I think about it is that we all belong to a number of different groups, and those ties to those groups are very important to us. They’re part of why we want to survive – a really big part.
So that there would be no efficiency without them in a sense that we …
And so, maybe it’s not so easy to calculate when the partiality is leading to an inefficiency, because maybe you have a background thought, “Hey, if we could get rid of all these groups, then we would be super-efficient.” But then everybody would be totally demotivated. Nobody would care about anything, and we would all die.
Well, for, say, employees of Google, we might say, “We are going to pay you a salary, and with your salary, feel free to be partial. But when we are making business decisions, make them for Google’s benefit, because that’s how we are going to give you the biggest salary to go do things with.” We might even believe that with some decisions, you are going to be partial, and then maybe we don’t pay you as much because of that. We expect you to be partial, and in some sense you’re buying the power of being partial by accepting a lower wage for a position where you have some discretion you can use. And so that might be the business compromise: we just expect some decisions to be made partially. But we know that that comes at a cost, and then somebody has to pay for it.
Right. But I guess the thought that we can banish group allegiance to outside the workplace seems implausible to me. That is, people have allegiances to their colleagues in their workplaces, and if they didn’t, they would be demotivated from working.
And so, the workplace itself has to create allegiances and partiality and has to sustain that partiality. You could see that as leading to inefficiencies, but that wouldn’t make any sense, because the existence of those allegiances is a condition on the amount of efficiency that you do have.
But you might be able to express all your partialities outside of your business decisions, perhaps. I mean, work isn’t everything. And you might think that just by being at Google in the position you are, you are already able to promote your partiality. That is, you might be partial toward the things Google represents and promotes in the world even when it’s just …
I mean, remember our conversation about the book Moral Mazes?
It seems like the idea of distributing the partiality outside the workplace environment is very naïve from the point of view of Moral Mazes, where the workplace is just a mess of allegiances – at least at the managerial level. What you’re saying might be true of the person on the factory floor, but the work of a manager is managing allegiances – if that book is true.
And part of the tone of that book was to lament that structured world because
it looked inefficient, and it does look inefficient.
Right. But the point is, it might look inefficient, but also, this is the thing that works. So that’s sort of my point: it might look like you can get rid of it, but it might be that all the efficiency we have is conditional on having this kind of partiality. Yeah.
So getting rid of it isn’t on the table here, so I’m happy to not be suggesting it could all be gotten rid of. But one kind of partiality that I’d like to call attention to is a partiality that basically tries to squash something and get rid of it. There are many technical decisions in the world, say, between open-source and closed-source software, or one particular programming language or another, or using fossil fuels versus alternative fuels. And a thing that happens is people get a partiality on that sort of axis, and then they often try to coordinate, to promote everybody to be on their side and to be against the other side. And they’re often trying to use network effects, in the sense that if they can reduce the other side enough, then it will just go away, because you need enough people doing the other side to keep it a viable thing. If not enough people use fossil fuels, then nobody can, because it costs something in infrastructure to support the system of fossil fuels. And similarly for some programming language tools or something: if nobody uses a certain set of tools, then they just won’t exist. And so often, partiality is a sort of war where one side is trying to eliminate the other side – to exterminate it, really. And often, that succeeds. So there we might ask the question whether it was good that one side exterminated the other.
It depends on whether the thing that was exterminated was good.
Yes, of course. But we might worry that with enough partiality, people
wouldn’t be paying enough attention to whether it was good and they would just
be paying attention to being loyal to their side. So that’s a kind of thing
that happens that makes you worried about partiality.
OK. So we’ve articulated one thing that should make you cheer in favor of partiality – namely, it looks like a condition on being motivated to do anything at all, and it structures all of our relationships inside and outside the workplace. And then one thing against it, which is that it occasionally leads people to coordinate to eliminate the other side, and occasionally some of those eliminations are of good things.
OK. I’m just summing up where we are. I feel like on net, we are still pro partiality.
I think it’s maybe less about whether we should be pro- or anti-partiality than about which kinds of partiality we should allow to seduce us.
OK. So which are your favorite kinds of partiality?
Efficiency perhaps, which is more of a paradox here I guess. So …
I would imagine it’s not a kind of partiality.
OK. So in, say, the AI risk discussion, one way it might be framed is that some people are worried about some human essence being lost. They look at things like love and laughter and the sacred and anger and creativity, and they say, “I’m worried that AI will throw all those things away and they will be lost. And I’m partial to those things – really partial to those things – so I think any substantial AI displacement of the human essence is horrible and basically a catastrophe.” And I might say, “Well, I like those things too – laughter, love, humor, the sacred, et cetera. But those seem like somewhat robust things that minds might have, like language, and so I’m less worried about them going away.” And people come back and say, “Well, what we mean is the human versions of those things.” Sure, there might be some kind of machine love and machine laughter and machine sacred, but there’s some special way these things are packaged together at the moment in humans, and they’re really partial to that. And that’s a place where I tentatively feel less partial. I go, “I’m not so sure why I would care exactly that it be the particular current human-style version of these things.”
But another related example is just that human culture has changed enormously over history, and I feel less partial to our current human cultural features compared to the cultural features our ancestors had, and I presume that our descendants will also have quite different human cultural features. And many people feel very partial to our particular culture and its particular features, and they are afraid that those might go away.
OK. So at that point, you have some preferences and they have some preferences. Now, we have a standoff: you don’t care about these very human things; they care a lot about them. Where does the argument go from there? Neither of you has given any kind of reason as to why your preferences are good ones to have. You both just have a set of brute preferences. Neither of you seems persuasive to me as …
So the next step of my analysis might be to look at when efficiency correlates
with a divide like this. So …
Hold on. Before you go to that next step, we have to make a prediction about your interlocutor. Do they care about efficiency? Because otherwise, you’re not arguing with them; you’re just arguing into empty space, right? Now, my prediction is they don’t care. They care about these things, and they want these things even if they are going to be less efficient. So I’m just not sure it’s relevant whether we can get increases in efficiency, because that will be you arguing with yourself. You’re partial to efficiency, and they’re not.
I’m pretty sure that given the concept of efficiency I have in mind, everybody
cares about it. You might not realize you do but everybody cares a lot. So …
We can run a Twitter poll. “Do you care a lot about efficiency?” Let’s see what …
I mean, if I just used the word, that would not work, because I would have to explain what I meant to them.
So, let’s take our prototypical example of, say, discrimination – say, racial discrimination in the US South in the mid-20th century.
The description is usually of the form that people were being treated unfairly with respect to an efficiency standard. That is, people with the same qualities wouldn’t be hired for the same job; you couldn’t sell land to the same people for the same price. There was a coordination to prevent the usual efficiency processes from just letting people locally find the best thing for them – a coordination saying, you must keep blacks in their place and can’t let them fill various roles, even if, to you, that would seem to be the sensible thing to do. And so, a criticism of that sort of world is the idea that they were impoverishing themselves by that. A similar thing in gender discrimination: one of the biggest arguments against gender discrimination was that we were hurting ourselves arbitrarily by not letting, say, women fill roles that they were equally qualified to fill as men. By preventing that, we were worse off.
I don’t think those are the usual criticisms. I don’t think that’s what most people think is wrong with that situation. And you can see it by saying, “Let’s say that in some inefficient economic system – communism or something – they had some gender or race discrimination.” Is that fine? Because it’s inefficient anyway, and the discrimination isn’t adding to the inefficiency. And most people would be like, “No. Discrimination is wrong because you shouldn’t disadvantage people on the basis of these features, and that has nothing to do with efficiency. It has to do with the requirement that you’re supposed to be impartial and not discriminate.”
So for any organization like Google, say, if they are doing something inefficiently, then in essence there is less for everybody: lower wages, less office space, less customer service, less profit for investors. So if you want more of any of those things, you implicitly want more efficiency, because efficiency gives you more stuff. That’s the sense in which everybody wants efficiency: everybody wants more.
Maybe you don’t want so much more. I mean lots of people are like, “We should
all be making do with less. We have too much.”
No. But when they make their personal choices, they make choices to get more.
Sure. And when people make personal choices, they’re also partial in all sorts of ways. But you might think, “Yeah, but as a group, we should be scaling back.” People talk like this: “We shouldn’t be trying to get more all the time.” All I’m saying is, your efficiency – I don’t think it’s obviously some value that everybody has agreed to. You said you’re partial to it. That’s your partiality.
I will still argue that in fact given people’s actual choices, they are in
fact partial toward efficiency. That is, efficiency is just another name for
giving everybody more.
I just don’t think everybody wants to give everybody more. I think some people want the rich people to have less and the poor people to have more, but they want the poor people’s more to come from the rich people having less. That’s just redistribution.
But even for giving poor people more – efficiency gives poor people more.
But they don’t want to give people more. They only want to give the poor people more on the condition that it comes from the excess of the rich people. They want redistribution. They don’t want more. They don’t want the total to be bigger.
I think most people do want the total to be bigger, but I’m sure you can find people with any possible combination of views; I don’t doubt that. But still, I think, again, partiality makes sense with respect to some standard of an impartial choice. We can’t really make sense of it without reference to some such standard.
I agree. I just don’t think that impartiality has much to do with efficiency.
I don’t think you understand what I mean by the word efficiency, then, because efficiency is just the way you can get more of anything – whatever the things you want.
I think impartiality doesn’t have to do with getting more of anything.
But almost all valuing is valuing more of things – the various things you want more of. So if you want more philosophy, that’s valuing more. It’s just another way of saying that all the things you want, you want more of. That’s another way of saying you want them. You can’t really want them without wanting more of them.
Right. But I might just want the world to be more impartial, where that’s not going to translate into wanting more stuff. In fact, it might just mean we all get less stuff if the world is more impartial, because we have to …
You might be willing to make that trade-off. I’m just saying there’s something other than impartiality that you want, and whatever it is, we could talk about you wanting more of it, and that’s what I mean by efficiency: just getting more of the other stuff.
That’s fine. But now, you’ve just conceded that wanting impartiality is separate from wanting more, right? Those are different kinds of things you might want, because the reason we want less discrimination isn’t that we want efficiency; it’s that we think discrimination is wrong. Then we also want efficiency – yes, that’s an additional thing.
But if impartiality does increase efficiency, then we could also want it for that reason. We could have two separate reasons.
Right. I think impartiality sometimes increases efficiency and sometimes decreases it.
But I mean, there are many of these other kinds of partiality – like, say, Star Trek versus Star Wars.
People don’t have a big moral thing about making sure we treat those impartially, right? We aren’t worried if some people favor one side or the other in general. But if you were stocking a movie library or something, we might want you in that role to be impartial, so that we could trust you to serve our customers. That is, there could be some roles in which we would want you to be impartial with respect to that, even if in general we don’t think there’s a moral problem with people being partial to one of them.
Right. But the reason why you wanted that might not be efficiency. I could imagine somebody who owns a video store and doesn’t want his clerks pushing Star Wars over Star Trek to people. And they point out to him, “Look, when we do push them, we end up with more customers buying things.” And he says, “That’s not the kind of store I run.” He has a sense of self-respect such that he is willing to lower sales in order to have impartiality. That seems perfectly coherent to me. So they do seem independent. I want to ask you a question. So, you’re not very worried about the human versions of love and laughter, et cetera, being a bit transformed by AI, because you think that the fundamental things driving them are robust against the incursions of AI. And so, with this wave of forces that’s going to transform us and delete something, you’re like, “Yeah, but the things it’s deleting are not that valuable, or other forms of them will come into place.” Why don’t you accept the same form of argument about the thing where, because of partiality, we coordinate to squash something and get rid of it? Why can’t those people say, “Well, look, those things are pretty robust to these incursions of partiality, and some other form of them is going to survive – maybe not the exact one we had before”? You’re freaked out about one thing; they are freaked out about another thing. Why can’t they make you calm about the thing you’re freaking out about, the way you’re calm about the thing they’re freaking out about, by employing the same argument?
Well, this is where I would invoke the concept of efficiency.
So for example, in the United States, we have what many of us consider a pretty dysfunctional process for reviewing building and infrastructure projects. Compared to Europe, it just takes us a lot longer, and things get a lot more expensive, because we allow so much citizen participation in the review of such projects. You could think about this on one axis of citizen participation versus rich-developer power. And on that axis, you might think, “Yay, citizen participation.” But I might say, “These two sides of the axis aren’t at all equal in efficiency. One is just much less efficient than the other.” And so, if we allow this side to win, say through this coordination process of trying to squash the people who promote the other side, and it gets entrenched, then all the rest of our society gets taxed because of that, and all the other things we want to be partial to, we are less able to be partial to because of this tax. And that’s the issue I wanted to highlight here. That is, when there’s a correlation between efficiency and these sides, which there often is, then by choosing one side and letting them squash the other, we all just have less capacity to do things.
So, I mean, there was one version of your definition of efficiency where it’s just like more good stuff, and you might think that the various kinds of good stuff are incommensurable and don’t add up into just one big pile of good stuff. And one person is like, “Look, by good stuff, I mean citizen participation. And I am getting more of that good stuff with the current regulation process. I want more good stuff. That’s why I’m doing it. Efficiency is my ground, according to that definition of efficiency, right?” And you’re like, “Well, no, no. We have to account for all the other kinds of inefficiency, and then we have to come up with a sum total.” And they could agree that we want more good stuff but disagree that you can do that sort of math. And so you can’t import the legitimacy of doing that math just by using the word efficiency.
So this is a problem between us, where I want to inherit a bunch of standard concepts from economics and you resist. That is, we have a standard structure in economics by which we do make these comparisons and do add them all up to a sum total, and that’s why we can talk about efficiency. So …
There’s a standard resistance to that on the part of the outside world, where that structure is not acceptable to the people you want to persuade. And so, just presupposing it isn’t going to help you make your case.
Well, they are not in the room at the moment.
I’m voicing their concerns right now.
But it would take us a substantial detour to go into the usual economic analysis of efficiency and why these things are comparable and why you can add them up.
But I think this is what partiality is. I don’t think it’s a detour. I think partiality, in part, is the idea that there are some goods that can’t be summed up with other goods.
That doesn’t at all seem right to me. That is, the idea of partiality is just favoring one side. That doesn’t at all imply that it can’t be compared.
I think that …
Incommensurability is a whole different claim from saying that you can favor one side or the other.
Because I don’t think they’re so different. I think that if you favor some side or some group, then you’re going to be a bit resistant to cashing that out and saying, “Yeah, I favor this group, but if you pay me enough, I’ll favor the other side.” You’re going to want to say, “No, my partiality cannot be bought in that way. I’m not going to exchange it for monetary value.” I actually do think those are related. If people paid you enough money, would you support some other family over your own family? You would probably say to yourself, “There’s no amount of money …” – or maybe not you, but most people would say, “There’s no amount of money for which I would do that.” Which is to say that my partiality amounts to an incommensurability or …
But we could represent that as just one thing having a much higher priority than the others. I think we could maybe set this dispute aside by just saying that when we favor one side or another of a particular division or partiality, that can come at a cost to our ability to favor other kinds of partiality. We don’t necessarily have to agree on the weighting. But the key observation to notice is that there are trade-offs between these things. If we go far enough to promote citizen participation in building projects, then we will just not have very many building projects, and we will not be able to do solar energy or mass transit or lower housing costs. There will just be a bunch of other things we can’t do so much, because of favoring that one side.
Right. But I think there are actually two different ways to frame that. One of them pits one partiality against another – that argument, I think, is often rhetorically persuasive – versus pitting a partiality against just other good stuff that can be summed together. So, for instance, if there is some particular group that is going to be disadvantaged because of our high valuing of citizen participation – poor people, say: say we are doing so much citizen participation that because of it there is a group of people who are homeless because they can’t get homes. Now you’ve got yourself a persuasive argument. Why? Because you’re pitting partiality against partiality. Whereas if you say, “If we do the citizen participation, then we are just wasting money on it – money that we could use for whatever we wanted,” that’s going to be less persuasive to the friends of partiality.
But money lets you be partial. That is, if you wanted to help poor people,
money is a thing that lets you do that. If you don’t have this money, you
can’t do as much of that.
I’m just telling you that those are two different arguments, even though
you’re right that that’s a thing people could use the money for. It’s only
when it’s framed as designated for that purpose that the argument gets its
power.
But I don’t think we need to settle this to use this observation in the
further point I had in mind, which is just to say that I’m attentive to the
cases where being partial to one side or another will greatly reduce our
capacity to promote many other things. I want to be especially wary of that;
that’s the kind of partiality I’m especially concerned about.
And say, with respect to the AI risk issue, I would less say that I just don’t
care about the difference between human love and AI love than that I see an
enormous loss in forgoing AI: AI is this enormous potential wealth improver, a
capacity improver to just do many things. So giving that up in order to
promote this one partiality would greatly reduce our capacity to address a
great many others. That’s the parallel I was trying to set up.
Right. I guess what I’m saying is that I predict that just saying, “Well, we
can do many things,” is not that persuasive. Many people don’t care about the
ability to do many things; they care about the ability to do certain specific
things. If you say, “We’re all going to do those exact things,” then maybe
you’ll persuade them. But if those are just some things you could do, but you
could also do other things, they’re not going to be persuaded. And let me say
why I think they’re not crazy to think this way. The reason we care about
partiality, the reason we are attached, is that values don’t come for free.
That is, caring about anything isn’t something you just get for free. You
don’t get caring about things without work, without costs generally. And so
you could be in a position of having tons of resources, things that would
allow you to do many things, but be totally demotivated, not wanting to do
anything with them, and just lie down and atrophy until you die, because you
don’t care about anything. So there’s nothing you want to do. And I think
people feel that when you’re offering me this kind of future where we give up
on human love and human laughter, et cetera, and then we have this power to do
many things, I’m going to be in a state where I’m not motivated to do any of
that. I’m not going to do anything with all my giant amounts of resources.
Which is maybe also how they might feel if my neighborhood is taken over by
big, tall buildings and I get lots of money for it: there’s nothing I want to
do with that money. What I wanted was to live in a nice neighborhood with the
short buildings. So their value attachments are like fixed points, and those
are the forms of partiality. And if you take enough of those away, it doesn’t
matter how much you compensate with money or the generic power to do things.
That just sounds to me like another way of saying there are some things you
could value a lot such that the loss of them would be catastrophic because
you’re describing a catastrophic loss.
That’s right. I think for most people, human love and human laughter in the
forms that we have them are those things.
Is citizen participation in building projects one of those things? Because we
could go …
To a lesser degree.
Because we didn’t have that in 1950, and people seemed to live meaningful
lives then, with motivation. So …
Right. And there was a time when you didn’t have a family, a wife and kids. So
if we just got rid of them all now, you should be fine, because there was a
time when you didn’t have them.
Well, I might value them a lot, but I don’t believe I would lose all motivation.
But many people would. Suppose someone lost everyone they’ve met since they
were 12, or whenever; that would be devastating, at least for some people, and
some people would lose all motivation until they managed, miraculously, to
form new attachments, if they ever did. So I think it’s just not nothing to
take away someone’s attachments.
I guess this is related to what I think of as a big underlying issue in these
futurist debates related to AI risk, which is just that if people could really
see how much change might plausibly come if they allowed change to happen,
they might just want to veto change, because they are attached substantially
to the world the way it is. So if you say …
If they weren’t attached in this way, they wouldn’t do all the things that
lead to the change either. That is, those attachments are part of what
innovation relies on. So you don’t want to get rid of them.
But nevertheless, if we let them fully act on a clear view of where it will
all lead, they would prevent all the change.
Or they might just fall into a depression.
Right. So there is this question of when people put a high value on some
things, what kind of a high value? I think that’s kind of the issue, that is
when we have small values on things, we are often willing to trade them off
for some other small value thing, or a modest increase of something else that
we value highly. And in our world of ordinary consumer shopping, we are quite
capable of taking a limited monthly budget and choosing between all the
various things on offer, in order to get a mixture of what we want. And in
those cases, we perfectly well acknowledge that we want a little bit more of
each thing, and we are willing to compare those to each other and ask, “How
much of one do we like compared to a little more of another?” And then
when we get to these big things we value, we often switch modes, and we might
say, “I refuse to compromise on it at all. I need to have it, and if I don’t
have it, I will just be demotivated; I will lose all motivation.” We make sort
of big threats about any compromise on our big things.
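As an aside for readers following the transcript: the two modes being contrasted here, trading small values off against each other versus refusing any compromise on a big one, can be sketched in code. This is purely an illustrative toy (the option names, value names, thresholds, and numbers below are all made up, not from the conversation): a standard weighted-sum chooser next to a lexicographic chooser that vetoes any option falling below a threshold on one protected value.

```python
# Toy sketch of two decision modes, under made-up names and numbers.

def weighted_choice(options, weights):
    """Standard mode: pick the option with the highest weighted sum of its
    values. A small loss on one value can be offset by a gain on another."""
    return max(
        options,
        key=lambda opt: sum(weights[k] * v for k, v in opt["values"].items()),
    )

def lexicographic_choice(options, protected, threshold, weights):
    """'Big value' mode: refuse any option that compromises the protected
    value below the threshold; only compare the survivors by weighted sum."""
    acceptable = [o for o in options if o["values"][protected] >= threshold]
    if not acceptable:
        return None  # total demotivation: no option is acceptable at all
    return weighted_choice(acceptable, weights)

options = [
    {"name": "build", "values": {"participation": 2, "housing": 10}},
    {"name": "veto",  "values": {"participation": 8, "housing": 3}},
]
weights = {"participation": 1, "housing": 1}

# Summed utility favors "build" (12 vs 11); treating participation as a
# protected value with threshold 5 rules "build" out and picks "veto".
print(weighted_choice(options, weights)["name"])                       # build
print(lexicographic_choice(options, "participation", 5, weights)["name"])  # veto
```

The point of the contrast is that no weight on “housing” can rescue an option the lexicographic chooser has vetoed, which is one way to model the refusal to trade off.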
Right. That’s why some people think decision theory only works for medium-size
decisions, not for the really big ones, and also not for the really small ones
at the two extremes.
Presumably you mean traditional decision theory, because these are decisions,
so presumably some decision theory applies to them; it would just have to be a
different decision theory. I mean, decisions happen, so there must be a theory
of how they happen, right?
You might think that if the values are large enough, it’s best not to model it
as a decision, but to think of it as a moment inside a longer time period in
which the person is learning something. So it’s like a –
That’s just another way of describing them. But it’s a decision, so that’s a
theory of the decision, so it’s not …
I don’t think so, because I think that if you couldn’t become aware of it, if
there could in principle never be a choice point, then it seems to me that
that’s not correctly describing it.
We are talking about people making choices though, right?
No. So we are talking about a situation where someone ends up going one way
rather than another. But there is no choice point.
They didn’t make any choices that influenced it one way or the other? They had
no causal effect on the path?
Think about an analogous case. Suppose I want to understand some physics
formula, right? You explain it to me and I still don’t understand it. Then you
explain it again and I still don’t understand it. I might say, “I’ve decided
to try to understand it,” but I haven’t decided to understand it; I don’t
understand it, right? I’ve decided to try, and I’m trying, but I’m not getting
anywhere. But then you explain and explain and explain, and after a year, I
finally understand it. Now, did I causally influence my own coming to
understand it? Yeah, right? I clearly played a causal role; for instance, I
would ask you questions and you would explain stuff. But I also think there
was certainly no point at which I decided to understand it, in the sense of
deciding to become enlightened about it. That’s not a thing I could decide.
The process was essentially stretched out over time. There was an initial
choice point where I decided to try, but my point is, there was no choice
point where I decided to become enlightened. And I think this is how it is
with large value transformations. There is no choice point. We are learning
over the whole time, and the transformation is not the object of our will in
the sort of way that would be required for weighing pros and cons. I can’t
weigh the pros and cons of coming to understand it.
Our concrete examples here were people concerned about AI risk considering a
ban or pause on AI research in order to avoid the AI bad end, and people
wanting to change our development regulations in terms of citizen
participation. If people say, “I don’t want to change our development rules on
citizen participation because I find our existing cities to be so important to
me,” that’s a choice: the choice to not change the regulation. Similarly, the
AI regulations are choices. So those are decisions, and there must be some
decision theory describing them. If you say, “These are choices that are made
differently,” fine. But then there would have to be a different decision
theory to describe those choices.
Right. I mean, I guess I think, yeah, it’s sort of a choice, but there are
going to be some idiosyncratic things. For instance, they’re just going to
ignore a lot of information that you gave them, right? And so it’s going to be
hard to explain what it is to make such a choice well, if it’s written into
the making of the choice that you’re not willing to do trade-offs. What’s the
rational way to do it versus the irrational way, when we are presupposing no
trade-offs are allowed? I guess I just don’t know what it would be to make a
theory of that. And the thing I was offering was that in the case where you go
the other way, where you’re deciding not negatively but positively, it’s a lot
like the case where I decide that I want to understand the equation. So people
could decide that they want to come to see what it would be to love or laugh
in the AI mode, right? How could our concepts of love and laughter be
stretched such that we would still value this new kind? To my mind, that’s the
most promising avenue. Whereas the idea that it would be fine, that there’s
just some other thing that’s going to be a little bit different, feels
dismissive; it feels like giving up on the value.
I wasn’t trying to say it was just a little bit different.
Right. But the point is …
OK. I was trying to compare it to another large thing at stake. My argument
was to say, this may be substantial, but this other thing you would have to
give up is perhaps even bigger. So I’m comparing two big things. I’m not
trying to dismiss something as little; I’m trying to say, “We are in a space
of big things, and there are other, perhaps even bigger, things at stake
here.” The same sort of argument might be made for citizen participation. I’m
not going to say citizen participation doesn’t matter either; I’m not going to
trivialize it and say, “Who cares?” and stop caring about it. I’m going to
point to the larger consequences as I see them, that is, all the other things
we are giving up as a result. In the same way, with AI I would be pointing to
how much we would really be giving up if we went that direction.
Right. But suppose my ability to value our form of social organization is
contingent on having citizen participation, such that I don’t care that we
have more airports and trains, et cetera, and more housing, if we don’t have
citizen participation, because then we have a bad society; we just have a
bigger bad society.
I mean, I might say, Europe doesn’t have it and they seem to have a fine
society. Europe is not known as an authoritarian society; it’s pretty good.
That might make the argument that citizen participation isn’t that valuable.
That’s the argument you’re making now, because you’re saying, “Look, the
Europeans do without it.”
It might still be valuable; doing without it doesn’t mean it’s not valuable.
Saying you can do without it is not saying it’s trivial or small. It’s saying
it’s finite.
Right. But I guess I do think that people often have trouble adopting a value
perspective different from their own when a lot of the stuff in their own mind
is contingent on it. Suppose somebody was about to lose their only child, and
you said, “Look! There are people who don’t have any children, and those
people get on perfectly fine. It’s a little bit bad, but you can have a happy
life like those people without children.” And suppose in fact their child
isn’t just about to die from a sickness; you’re about to murder their child,
but you say, “It’s OK. It’s for the greater good, and you’re going to be OK.”
You can imagine they would be very resistant to that, and they wouldn’t be
very persuaded by your argument about other people who don’t have children.
So a theme coming out of this conversation, which we can pursue in a future
podcast if we want, is this idea that we tend to make big choices differently
than small ones; we think about them somewhat differently. And that might mean
a different descriptive decision theory is appropriate, in terms of how we go
about them and what choices we tend to make. I’m still going to at least
tentatively embrace the usual decision theory as a normative decision theory
and say that people have trouble with big decisions: they are doing them
badly, these deviations are a signature of that, and therefore things often go
pretty wrong because people have trouble making big decisions. I might point
to some of these examples as evidence of that, saying, “Look how wrong big
decisions can go.” I could give more examples. But part of what we are getting
at in this discussion is noticing that, say, regulating AI, or regulating or
allowing citizen participation, we are treating differently, in some sense
correlating with their just being big for us. They weigh a lot. And then, as
you say, we often find it hard to imagine the world without them, without the
particular choices we’ve gotten used to. That’s part of our difficulty in
making big decisions: if we are thinking of a big change from the status quo,
we can see the status quo, we see how our world and our lives are tied up with
it, and we might wonder whether we could be motivated at all in a changed
status quo. We can’t envision our motivations in that context, and we aren’t
so sure we would be motivated.
Yeah. I guess I’ll just say: I think it’s not right, and most people would
think it’s not right, that we would make these decisions better if we made
them the way we make medium-size decisions. So imagine a parent who needs to
decide whether or not to take out a loan for a life-saving medical procedure
that probably won’t work but is the last hope for their child. But they have
another child, and they really struggle to go through the reasoning of whether
or not to pay for this medical procedure in the way they would about whether
or not to buy a new vacuum cleaner. They are somehow not able to just say,
“Well, look, here’s the cost of this, here’s the cost of that,” and smoothly
compare. I think most people wouldn’t say, “Well, that person is defective,
and they are making that decision in a defective way, because it would be
better if they thought about it just like the vacuum cleaner.” I think it’s a
sign of something going right that they’re unable to think about it like the
vacuum cleaner. Namely, it’s a sign that, unlike a vacuum cleaner, this is
something they actually really care about. So when people really care about
something, the framework of standard decision theory doesn’t work, and
possibly we shouldn’t even think of these as decisions. It’s not that I think
we are doing all of these things badly. To just take the framework, slap it on
there, and criticize this entire swath of activity, I don’t think that makes
sense, because the theory isn’t supposed to drag us around that way. It’s
supposed to go the other way: the theory is supposed to shed light on things
we are doing. And I think the fact that, when we are making certain very
charged and very fundamental decisions, we systematically exclude information,
we don’t make certain comparisons, we don’t accept certain trade-offs: that’s
giving us information about what it takes for us to be motivated in life. We
shouldn’t just ignore that information and say, “What we’re really supposed to
do is just do it like the vacuum cleaner case.” What we should say is, “Look,
the fact that we are even motivated by a vacuum cleaner is coming from this
other case, where we don’t navigate those decisions that way.”
We are nearly out of time, but I think it might be fun in a future episode to
take some large decisions and walk through together how to think about them,
both in the usual decision theory way and in your alternative proposed way,
and sort of try to work through the details of some big decisions.
I’m just going to remind you that we did that once. We had an episode about
you and me making certain important decisions, me about my divorce and you
about your career change. And we noted that neither of us made them in the
traditional decision theory way. And at the time, we didn’t say that means we
made them badly, that we should have done it like the vacuum cleaner.
Well, we didn’t actually try to compare them to decision theory; we didn’t do
a detailed mapping of decision theory onto those choices. But that might be a
fine place to start with such an analysis.
But I guess for today, that will have to be the minds almost met.
All right. Bye.