Evidence order bias
Robin
Hello, Agnes.
Agnes:
Hi, Robin.
Robin
We're going to try to talk about what I'll call evidence order bias. Just to let all our readers know, we did a recording of this a week or so ago and decided to try it again. So that's why we're telling you. All right. So let me introduce it, I guess. The idea is within the scope of the literature on
rationality and ways to be more or less rational. So I know you have some
doubts about that and we should engage those, but I hope you have some way to
engage that literature. And literature is usually of the form, trying to
imagine different possible beliefs in different contexts and talking about
which ones seem more rational, i.e., which ones we would more recommend or endorse; which ones, if you have some way to possibly influence your belief, maybe not immediately at that moment, but through some setting up of early context like education or training or talking, you would want to choose those contexts in such a way as to induce the more rational beliefs. We have examples like: you shouldn't believe both A and not-A, and other sorts of examples
of rationality. And the particular one at issue here is the idea that your beliefs should depend on your evidence, and to the degree they do, they should depend on the set of your evidence, i.e., all the evidence you have; but it shouldn't matter in what order you acquired that evidence. So if you learn A and then B, you should expect to come to roughly the same beliefs as if you learn B and then A. And if you can predict otherwise, then that would be a mistake of some sort that you would try to overcome if you had any way whatsoever to influence your beliefs, if not in an immediate direct way, then in some indirect way. Are we on the same page?
Agnes:
Yes. So can I ask you a question?
Robin
Yes.
Agnes:
So when you talked about indirect ways of influencing your belief, one of the main ones you listed was education or training. So if we could somehow get rid of evidence order bias by instilling antipathy towards that bias in people at a very young age, by way of evidence order bias, you think we should do it? That is, we would use the bias in order to eliminate the bias.
Robin
I don't know that just antipathy is a very good way to get rid of biases in general. I might think practice would work better, you know, where you actually try.
Agnes:
So we'd make them practice when they're young, so that they overweight the significance of that relative to their later evidence.
Robin
So I mentioned this order effect, and now you're jumping to the observation
about a sort of lifetime scale of the order effect. So there are different
scales on which we could think about it. For example, confirmation bias is a
short time scale version of the bias where you initially have 1 hypothesis and
then some other evidence contradicts it and then you're tempted to go with the
first hypothesis that came to your mind. Whereas if you'd seen the evidence in the opposite order, you would have preferred the other one. That's an example of bias on a short time scale. And then part of the interest here is the idea that maybe the biggest reflection of this bias is the way in which we tend to much more assimilate what we are taught when we are young, and then what we learn when older carries less weight. And so I think that's what you're referring to
with this meta comment of could we fix the bias by appealing to the bias and
using the bias to overcome it.
But it's less about a belief and more about a pattern to watch out for, correct? And so the usual way to train that is to walk people through examples where they're tempted to exhibit the bias, show them the bias, and then repeat and have them try to overcome it, and then calibrate. And you don't want them to overshoot either. So the idea would be to see, with repeated application, that they are successfully overcoming it.
Agnes:
And like there must be, I'm not aware of any of this, but I'm sure you know,
there must be sort of experimental evidence as to the degree to which such
interventions are successful with all biases. Do they tend to be successful?
And do they tend to have lasting effects?
Robin
So, first of all, the literature on trying to overcome bias suggests that smarter people aren't less biased. They're often better able to justify their biases and trick themselves into, you know, being biased anyway. So just being smart doesn't protect you.
Agnes:
Yeah, I think I just came up with a really ingenious justification for that bias: that we need it in order to eliminate it.
Robin
Right, okay. So another observation is that merely knowing about biases
doesn't eliminate them and often it can do the opposite in that people often
believe that because they're aware of the bias and understand how it works,
that means they're not going to be susceptible to it anymore. But I think that
when you give people more concrete incentives, and then in that context show
them a bias, I think then we see the ability of the incentives together with
knowing about the bias to help them to learn how to reduce it in response to
the incentives. So, for example-
Agnes:
And the incentives by themselves aren't enough? Like if you just have people make bets where the money matters, they will still-
Robin
Well, So when people have incentives, they are of course aware that biases
might be causing them to not do as well as they could, and they will be
looking for their own biases, but they may just not see many of them. So in
that context, you could help them by identifying some of the biases for them,
and then that would speed them along a path they might have reached
eventually, but maybe not depending on how many possible biases they'd have to
look among to find and how much they'd have to practice. That's of course the
idea of school, right? Many of the things we teach people in school, they
might figure out in life anyway eventually, but maybe we can speed them along by
pointing some things out earlier.
Agnes:
Right. So, like, I get the idea that maybe it would help. Is there any empirical reason to think that it would? I mean, are there studies that suggest it? Because I guess I just don't know that I have that much confidence that doing this will have any lasting effect on people.
Robin
There are definitely studies that with respect to some kinds of biases, the
combination of incentives and some training helps. The question might be
whether it's true with respect to this particular bias we're talking about,
and I don't know if we have data on that in particular.
Agnes:
I see. Okay. Okay. And, I mean, it seems like there's a lot of different
things that we get evidence about. And so, like, if you wanted to work on just
confirmation bias or something, something very local, there I could sort of
see that educating people about the existence of the bias could make a
difference. I guess I'm more skeptical when it's, like, you learned as a child that your family is a great family or something, and, you know, we'd have to take that away from you because that was just evidence order bias.
Robin
I'm not proposing in this conversation that I have clear empirical evidence of
the effectiveness of the training on this particular bias, but I'm suggesting
that at least we can identify this as a bias and that it's a large one. So that
should make us interested in at least talking about it and considering what we
could do about it. But as compared to even the other sort of confirmation
bias, this is really huge. That is, people adopt, say, the stance that their nation is especially moral and good in international relations, that the norms of their society are good, proper moral norms. They adopt these attitudes to a really strong degree. And that should really stand out to us: my goodness, once you realize that, you should be wondering, how could I overcome this? Because it shouldn't just stay once it's really obvious.
Agnes:
Right. So this is the point where I say: it's not at all obvious that we want to overcome this. It's so big. So let me give a different example that I see as being somewhat analogous. I think, and you might disagree with this part, but I
think that trusting people means that you form beliefs about them that are not
fully grounded in the evidence that you have. That is, a third-party impartial
observer would be less confident than you are that they're going to keep their
promise if you trust them. So I think that when we trust people, our beliefs
outstrip the evidence. And I also think that's a good thing, that it's right to have, at least some of the time, beliefs that outstrip the evidence. Now, there are cases where your belief outstrips the evidence but you learn that it was produced through deception or something, and then you want to get rid of that belief. But we
don't feel that way about our trust beliefs. I don't feel like, oh, the fact
that I trust my husband more than the evidence suggests is something I need to
do something about or get rid of. It's actually just part of like what it is
to have like a healthy relationship. And I agree that there's a paradox or
difficulty there, right? So I want to grant that, but it's just not obvious to
me that the solution to this paradox is always: let's fix everybody so that nobody trusts anyone else and everyone just makes accurate predictions about everyone around them. On the face of it, that seems to me like it'd be a worse world than the world we live in. And I think similarly, the fact that people develop these early loyalties to whoever happens to be around them and weight the evidence that they get early very strongly, that might be the core of
why human life works and why human societies work and even why we can trust
each other. And so it's just not obvious to me that it's a thing that we want
to get rid of. It might equally well be a way of saying there's something
wrong with our theory of rationality if our theory of rationality says we're
making such a big mistake.
Robin
Okay. So we started this conversation out in the framing of rationality and the kinds of things that might help us be more rational. And then you're moving it to: yeah, but what's so great about rationality? In terms of challenges to rationality.
Agnes:
So... Or: have we correctly characterized it, right? Like, we have a formal system, and that system could be wrong.
Robin
So there is a literature on the kind of deviation from rationality you're
talking about, which is usually framed in terms of commitment. For example,
people talk about the emotional urge of retribution. And so they say, when
somebody hurts you, you might have this emotional urge to hurt them back. And
typically you aren't actually going to get any concrete benefit from that. So we
might say it's irrational for you to want to hurt them back. But we might say
that having been constructed in this manner to be inclined to respond with
retaliation is at that earlier point, rational in the sense that somebody
knowing that that's gonna be a response will be tempted to treat you better at
that earlier point, knowing that you're inclined to retaliate. And so it's
rational ex ante, i.e., all things considered, at an earlier point it might be
rational to choose to then later be irrational in particular ways, which can
be represented as a commitment and the advantage of having a commitment. That
is, it can be rational to commit yourself to something that it wouldn't be
rational to do when the time comes to fulfill your commitment. So that's one of
our major ways that we in the rationality literature world come to understand
the value of a certain kind of irrationality. Now the question might be, is
this trust you're talking about understandable in those terms? Can we see the
value of trust as a value of commitment? Two people commit to trust each other, say, and if they can see that they've each committed to it, they might expect better outcomes in the future relative to each of them always being exactly rational in their analysis and responding completely rationally to all their current options and evidence. Right. In which case, when we bring this back to the
order of evidence issue, we might say, well, at some early point in time, it
would be appropriate to make commitments, but you would want at that earlier
point in time to be looking at all the evidence in order to best judge which
commitments are actually in your interest to make. You don't want to be
tricked into making commitments that are against your interests. You want to
correctly judge whether a commitment is in your interest.
Agnes:
Sorry, sorry. Go ahead.
Robin
So, I mean, the final point here is we might say, you know, when you think about things that you chose early in life and gave a higher weight to early in life, can we understand that as a reasonable commitment, or were you just tricked? So for example, thinking about loyalty to your nation, we might ask: is it good overall for the world, or for you, that you are so biased toward your nation over other nations? Is that making nations more coherent, or
is it just making them fight wars more and having more unnecessary conflict?
Agnes:
Right. And I guess I think the local ties that people have, like, I know libertarians hate nations in particular, but there are a lot of local ties, not just nations. That is, people have local ties to their family, their extended family, their city, their neighborhood block, as well as their nation. That is, people prioritize the parts of the world that they come into contact with. And it seems to me that those local ties play a huge role in human life and have tons of positive value.
Why do people fight wars? I mean, there are econ theories of why people fight wars and how it can often be in the best interest of the parties to fight wars, for reasons you know better than I do.
And so, so it seems like, sure, we can tell a story about wars too, but I
guess I find it, it's almost like I could try to imagine a creature that
didn't have any of those sort of local ties. But if that creature wouldn't be
a human being, I have a hard time saying, would people be better off if they
were like that? Because they'd be so unlike people that I just-
Robin
The choice here isn't to get rid of all such commitments or none. We can be selective about which of these things to keep. So for example, if you
were trained in a graduate school in philosophy and a certain style of
thinking about philosophy, Should you really feel committed to that style of
philosophy for the rest of your life or at some point in your career, should
you feel authorized to finally reconsider all the different schools of
philosophy and pick the one that you now judge best without such a commitment? I
would think you would endorse that at this point in your career.
Agnes:
So I think that it just depends. It's actually quite similar with families. I think that some people pretty much stick with the way of life that they inherited from their parents in a lot of ways, you know, and they want to live like that, and maybe, given everything else about the world, that's the best life for them. Anyway, it's the one they
choose. And I think that's also going to be true of many philosophers, that
they will do their best work by sticking to the tradition in which they were
raised, and we will have a good philosophical climate because we have people from different traditions who talk to each other. But it's like, I do think,
well, for some people, maybe it is good to really, you know, kind of tear
yourself out by the roots or something. But it's not at all obvious to me that
somehow it's good for every individual, nor is it obvious to me that
philosophy as a whole would be better off if everyone deracinated themselves
in that way.
Robin
I guess I want to appeal to what I would hope is a common shared notion, but I
guess we'll find out now if it is, that as intellectuals we have made the
choice to examine more beliefs than most people do. That is, for most people
to live their lives, they don't need to think through as many things or
rethink them. And that part of our role in the world is more to reconsider and
re-examine things that most people don't. And once you've accepted this idea
that our role is to be less biased than people typically are in the way they
grow up in their world, then it's not that big a step to think we should
also do the same thing with respect to even our intellectual heritage. That is
our role as intellectuals is in part to be ready to question the assumptions
we were taught when young.
Agnes:
So I think it's true that as an intellectual you call more things into question. But there are just a whole heck of a lot of things you could possibly call into question. That space is very, very big. And so trying to overcome
this particular bias, that's like 1 very specific thing that you could try to
do. And there are tons of other ways that you could call stuff into question
that, and I guess as an intellectual, I don't think you're exactly trying to
maximize the number of those roads that you take. I think you're trying to do
all of that as productively as possible. And so wantonly trying to call everything into question-
Robin
But that's not the proposal here.
Agnes:
OK, right. So if your question is, does it follow from the fact that you're an intellectual that you're going to care about trying to eliminate your evidence order bias, my answer is no. Most intellectuals don't and shouldn't care about that. It's good if some do.
Robin
I might say, sort of the fundamental task of an intellectual really is to first
learn the inherited traditions and beliefs of their intellectual world, and
then second, to opportunistically seek out where they might be mistaken so
that we can improve and update them.
Agnes:
That's true too, but I don't think that necessarily involves overcoming the
bias. For example-
Robin
But the bias is a primary clue about where we're wrong.
Agnes:
Yeah, but I mean, maybe it's like,
Robin
Should we use all the clues we have about where we're wrong? Shouldn't the bigger ones especially call our attention more?
Agnes:
So I think that, maybe it's like, there are a lot of different ways in which a person can try to handle the fact that they were shaped, the fact that they had an education in the first place. And if the thought here is really to fully try to overcome that-
Robin
That's not my claim. So let me repeat the claim, because I want you to either say yay or nay to the claim at the new level of abstraction. The claim again is that our fundamental job as
intellectuals is to first learn what people have thought so far and then to do
our best to find the mistakes in that so that we can make a better version of
what we will think in the future and the future can iterate on that. Our
fundamental job is to identify the mistakes and fix them. Okay, do you accept that?
Agnes:
I... ah, no, but I think we can go with it. That is, I think our fundamental job is to answer questions.
Robin
We always have tentative answers to all questions.
Agnes:
I don't think so. I think often our problem is that we haven't articulated the question well, and that we could, you know, articulate it better, or even articulate it at all. But I think that this disagreement is not going to be relevant here. So just go ahead.
Robin
Overweighting your early training can also lead you to badly formed questions. I mean, the articulation and framing of questions is also produced through early training. And so an attempt to question early training will also be an attempt to question the early framings.
Agnes:
So I guess I think finding the mistakes in what people thought is very
tangentially related to overcoming evidence order bias. So for example, which
mistakes call out to you and how and whether you correct them is largely a
function of the training you've had. And so if people make mistakes about how
they interpret the literature of Emile Zola, you're not very inclined to
correct those mistakes. That's because of like your evidence order. That is
you learned about Emile Zola like pretty recently, but you learned about like
other stuff about like science and tech and whatever when you were young and
that fixed you on a certain path to care about a certain set of problems. And I don't think it would make any sense for you to try to erase that fundamental orientation, but it is an orientation to try to correct one set of problems rather than other problems.
Robin
Again, you seem to be trying to put me in a position of making very extreme
claims, and I'm trying to resist making such extreme claims.
Agnes:
Well, I thought you were making the claim that ideally we would want to get rid of this bias, and I'm claiming that that's not true.
Robin
But my claim would be that we are in the process of looking for biases. That
is, we're looking for mistakes. We're looking for ways in which our answers or
even our framings are mistaken. And we're looking for clues that might
indicate which of our framings or tentative answers are mistaken. And I'm
offering this as one of our biggest clues. So that doesn't mean you should
follow this clue to all of its implications in all possible contexts. I don't
think that's possible. That doesn't mean it's a clue you should ignore. I'm
saying it's a big important clue. And that's a valuable thing, right? We want
to collect big important clues about where we're mistaken.
Agnes:
So I think a bias and a mistake are very different. That is, I
think that the existence of a bias doesn't necessarily point to any mistake at
all. What we're looking for is mistakes. When you find a mistake, what you
will see is that something is wrong and there'll be a reason for it being
wrong. Whether or not the person made the mistake as a result of having a bias
or not in a way is irrelevant.
Robin
That seems to deny the possibility of mistakes in, say, probability judgments. If I say something has a 20% chance of being true, and a corrected estimate says, no, it should be 40, you'd say that's not a mistake because it's only a probability judgment: you weren't absolutely sure it was true, therefore it's not a mistake. But it seems like we can have mistakes in our probability judgments. We have reasons for probability judgments.
Agnes:
I didn't mean to be denying that we can make mistakes in probability
judgments. I also, I feel like very out of touch with the thought that I ever
make any probability judgments. I know you would probably say I do, but I'm
not very aware that I do. And so the thought, oh no, I've made some claim about what is true about probability judgments, doesn't have a strong bearing on me, because I'm like, I don't even know that I make those. But that wasn't the point. My thought was that-
Robin
Remember, we have this concrete example. You grew up in your country, and in school they teach you that your nation, in its wars or relations, is the moral one and has been aggrieved; the other countries were the morally wronging ones making the evil mistakes. Yeah. And then later on you realize that everybody in all the other countries was taught that too.
Agnes:
Right. That's no reason to think that yours is wrong. The reason to think that yours is wrong would be the argument. It would be like, here's why-
Robin
But it is a reason, by itself. It's a statistical, probabilistic reason that adjusts your probabilities, but there are such reasons.
Agnes:
Yeah. So, okay, fine. I agree that there are such reasons. But I think that that point of view on your beliefs is already a very alienated one.
Robin
Okay, but that doesn't make it wrong.
Agnes:
I'm not saying it's wrong, but I think the reason why we're susceptible to evidence order bias in the first place is that we have this non-alienated relation to our beliefs.
Robin
That doesn't follow for me. When you say alienated, do you just mean looking at yourself from the outside, from some sort of outside view? Is that what you mean by alienated? Just standing back away from yourself and looking at what you must look like from another perspective?
Agnes:
I guess I'm thinking of something more along these lines: a statistical reason to think that you're wrong about something isn't an explanation of how you went wrong. It doesn't show you the error. That's what feels alienated to me about it. So the thing still seems true to you, but now all of a sudden you have to not believe it.
Robin
Of course we quite often have evidence about things that only tells us part of the truth about them. If you demand that all evidence show you the full deep truth, then you're putting a very strong restriction on evidence. You could have evidence that somebody committed a crime; that doesn't mean you know when they did it, or how, or why, but that doesn't mean it isn't real evidence that somebody committed a crime, or that you should ignore it. It's not alienated. It's just evidence that doesn't show you everything.
Agnes:
No. So I think that you could have, for instance, a partial explanation of why
your country is not that morally great, an incomplete explanation, right? So
for instance, we started all these wars and that wasn't very good. And that's
not a total reckoning, right? It's just, and that's fine, that's not
alienated, even though it's only partial. So the point isn't that it's
partial. The point is that when I say my country is really great, and you say, well, it started all these wars, I now have an understanding of why maybe it isn't right to say that it's so great. Whereas if I say my country is really great and you say, well, everyone says that about their country, I don't have any understanding of why it's wrong to say what I said. That's what's alienated. You don't have-
Robin
You have some understanding, you just don't have a full understanding, I'd say...
Agnes:
I don't think so. I already made the distinction between a partial and a complete understanding. I don't have a full one, even with the wars. I think there's another kind of difference.
Robin
But you have a partial explanation here, I would tell you.
Agnes:
I don't think you do. I don't think you have any explanation at all as to why the claim that, you know, America is really great is false.
Robin
You have a reason why you should be suspicious of what you were told. That
seems suspicious.
Agnes:
I think that that's right. I think that you have a reason why you should be
suspicious that doesn't in any way amount to any understanding of why it's
false. And that's the kind of-
Robin
But it does. So, you know, if I tell you somebody is trying to sell you on a get-rich-quick scheme, and I say, you know, you don't know this person, they seem to be shady, they haven't shown you any track record, they haven't shown you any contract, the thing they're telling you seems to be too good to be true. And you say, yeah, but you haven't explained to me why, if I give them $10,000 to send to the Nigerian prince, it'll go wrong. I might say, of course I have. Once you understand that there are people in the world who make stuff up in order to trick you, then that's the causal mechanism for why this person here is telling you the story. I don't need to tell you the details of why their story is wrong to make you plausibly understand why this thing is happening to you and why you should make a different choice.
Agnes:
I think that that's right, but I guess I think the relevant analogy would be if you said to me, look, the five other times you've given money to somebody, it's been a con man, so this one is probably also a con man. If you're saying that to me, that might be a reasonable thing to say to get me not to give the person the money, but that's different from saying, well, look, his story doesn't hold together, or...
Robin
They are different, but they aren't less valid. They are both valid kinds of
evidence and appropriate things for you to update your beliefs on. I feel like
you are making a distinction between kinds of evidence and telling me that one kind is invalid, inappropriate, alienated, somehow lesser, and therefore in
some way should be set aside for some reason.
Agnes:
I mean, I think maybe we should focus less on the question, is one of them bad and one of them good, and instead on the question, is there really a difference here between these two kinds of evidence, and what is the difference? You could
disagree with me. You could say, I just don't hear any difference between any of these cases; they all seem the same.
Robin
There are differences. The question is how to properly generalize from the examples to give a general category of the difference, but there certainly is a difference. I'm happy to grant that.
Agnes:
So what is the difference?
Robin
I guess you could say that sometimes meta-information, information about how we came to our beliefs, what we believe, and what other people think, is relevant to what to believe. I'll call it meta, I guess; that is, it's not represented in terms of the reasons you might give to persuade somebody of the belief, which are usually internal. There's this literature on the inside and the outside view. So, for example, suppose you're forecasting some project to make a new curriculum and how long it'll take. An inside view says, well, the curriculum will have these five parts, and part one will take two months and part two will take three months, and part four will have five people each doing one-on-one teaching, and then you add them up and you have a number. And an outside view might say, well, you've done this five times before, and those took two to four years, therefore this will probably take two to four years. That's an outside view, and that's an analogous kind of relationship. In the curriculum planning, if you say it'll take four years, I say, tell me why it'll take four years, and you say, because other similar cases have taken that long. And I say, actually, no, I want to hear it in terms of the internals of this calculation, which parts will take how long, because in my internal calculation it only takes one year, because this part takes two months, this takes three months, etc. And those are just two kinds of evidence. They're both valid; you shouldn't reject either.
Agnes:
Right. So let's just for the moment accept that these are both valid kinds of
evidence and you shouldn't reject either, but that there is a question maybe
about the degree to which you attend to the 1 kind or the other kind and
whether, for instance, we should be doing more attending to and calling our
attention more to the meta-evidence, the meta-information as you're calling
it. And, like, I guess I think if that's supposed to be a general recommendation that we should in general be attending more to this meta-information, I'm not sure that that's right. We attend to it already to some degree, right?
Robin
Right. So my-
Agnes:
and I want to grant you if we attended to it maximally that might get rid of
evidence order bias. But there'll be massive costs to that. And I just noticed
that I very rarely use that kind of meta information when arguing with you.
Like right now, I'm not like, well Robin is usually right when we're talking
about rationality, so he's probably right about this. Like if I said that,
that just wouldn't be very satisfying. We wouldn't feel like we were getting
anywhere. Right? So, so it's, with respect to this conversation, it doesn't
seem true that we would be better off each taking this meta info instead of
the first order info. Maybe we would be more rational, but we wouldn't operate better.
Robin
The basic form of the argument is that the larger the average error
associated with some sort of bias, and the more important the topics
that are biased, the more attention we should give to the corrections that we
might want to induce from noticing and fixing that bias on that topic.
Obviously, we shouldn't be very interested in unimportant topics. We shouldn't
be very interested in biases that have a very small scope or are temporary, et
cetera. But the claim is that this is a very large, very wide scope bias that
then has enormous effects on very important topics. So.
Agnes:
Right. So I think that's correct, but I think there's a counterweight point,
which is this: whether we take the meta perspective or the first order
perspective, the outside view or the inside view, on some topics it won't
matter that much; it will be okay not to take the inside view. But the value
of sticking to the inside view is going to be higher when the topic is very
important to us. That is, the benefits that we get from the inside view are
correlated with the same sort of features that would make those be the places
where you're going to want to hunt for bias too. So the competition is going
to be fierce in that territory.
Robin
So I wrote 2 blog posts on this subject, so I am initiating this topic for us
to discuss. The second one was called "Evidence Order Bias," but the first one
was called "Choose Cultural or Bayesian Morality," because in that first post
I was focused on morality as an example. And I've got to say, inside arguments
about morality, for me, are few and far between and not very strong. So this
outside view argument about morality seems to be one of the strongest pieces
of evidence I have available to me on that subject. And it's not easily
outweighed by inside arguments.
Agnes:
I think that what many people would say is that from the outside view, it is
just not clear that there is any morality at all, that there is anything that
is of value. The outside view doesn't give us values, including the value of
existence, the value of human life, the value of pleasure. It gives you a way
of kind of thinking about the values that you get from the inside in a
balanced way. But if you go out that far, there is no value. Here we're just
talking about becoming aware of
Robin
the fact that other people and other cultures have been taught other
moralities. Right. Now, if you're going to say that merely attending to that
fact somehow undermines and destroys morality and therefore we should resist
attending to the fact that other people-
Agnes:
No, I mean, I think the practice of attending to other people being taught
different moralities is part of the inside view for a bunch of different
cultures. That it's a culturally specific practice that is part of how a
certain group of people finds meaning is by having this kind of cultural
universality. So no, it's a fine thing to do. Okay,
Robin
but in order to achieve it...
Agnes:
You're from the culture that does that, so it makes
Robin
sense that you engage... Most people in my culture who do this do not achieve
order independence, and that's the strong critique here. That is, they
talk as if they have assimilated the fact that other cultures have learned
other moralities, but in fact, their order matters enormously for the
conclusions they draw. So therefore, I conclude they have not in fact much
overcome the bias. There is a lot more information for them to assimilate in
this consideration.
Agnes:
Right. Well, I don't think that the reason why they consider many cultures is
in order to overcome that bias. I don't think anyone's
Robin
ever been. My criticism.
Agnes:
Right, but the point is, if you talk about this idea of considering other
cultures, I'm saying like, no one does that in order to overcome this bias.
Robin
I'm recommending that right now. I am saying, frankly, that given what we
know, one of the biggest identifiable mistakes you're making is being too
gullible about the morals your culture taught you.
Agnes:
Right, so, but I was giving you a reason why that doesn't make sense, which is
that, at least if taken to an extreme (I'm not sure I think this is right; I'm
just putting forward this argument), the view that you need to take in order
to correct for the evidence order bias is not in fact the view that the people
who do all this comparison are taking. Those people are just doing what their
cultural tradition requires, and they're comparing very, very few of the
actual options that are there. There's a few popular ones to compare. The
thing you would have to do would be to adopt a point of view from which there
is no value. And so you wouldn't be arriving at a neutral morality, you would
just be arriving at no morality at all. You would only still have value out
there to the extent that you stuck to one of your inside views.
Robin
Let me rephrase what you're saying in a way that you may not accept, but then
tell me why it's wrong. What you seem to be saying is that if you consider all
the evidence and you weigh it all equally in an order-independent way, then
there's no way to conclude that there is morality. That is, all the evidence
is against morality existing; the total evidence strongly argues that there is
no such thing as morality.
Agnes:
That's what
Robin
you seem to be saying.
Agnes:
No. So maybe, if we go back to the first order and the higher order kinds of
evidence, maybe the thing that I'm saying is we can't always consider both.
That is, there's a tension between considering...
Robin
In most other contexts in our lives, there isn't a conflict. For example, the
police often have access to what they call metadata about phones, like who
called who, and then sometimes they have access to the phone calls themselves.
And people are often claiming you have privacy rights when they don't have
your phone calls, but they do have the metadata. And usually that's enough to
figure out what they want to figure out about, you know, who's doing what with
whom. But they don't contradict each other. They are consistent with an actual
story of who called whom and what was said. And in most cases, considering
meta-evidence and concrete evidence is consistent. Like, I have this stuff on
grabby aliens, and I'm considering some pretty meta evidence in fitting my
model, but that should be consistent with any very concrete evidence we get
about any particular aliens out there. And I think this is just generally true
about meta and inside and outside evidence. Typically, when we want our best
model of the world, it includes all of it. And it would be quite surprising if
we thought there was an intrinsic conflict between more abstract general
evidence and more specific evidence. And you-
Agnes:
I've got to go at 5:30. I'm doing a podcast. It's okay. We'll be done at like
5:40. I'm sorry, hopefully we can cut that part out. I think that those cases
just start to look very different when you consider... Sorry, again, I'm going
to switch. I think that those cases, they look different when you consider an
interpersonal conversation. So suppose I'm like, as you're talking, I'm like,
well, Robin's an economist, and until now I've never heard anything
enlightening from economists on this topic. And he's usually wrong about
this. And to the extent that I am attending to all these meta considerations
about who you are and The fact that you're saying this and the kinds of arm
gestures you're doing and whether those are correlated with people speaking
the truth. And to the extent that I am attending to those things, I am not
attending to the actual content of your arguments, what you're actually
saying. And that's why in a conversation, it can often be very annoying. If
I've given my money 5 times to the wrong people and I'm explaining to you why
this person is really good, and you're like, look, 5 times you've done the
wrong thing, I would say, "But you're not listening to me," right? Those are
the words I would say: you're not listening to me, because there's a tension
between listening to my substantive claims, which you may be right to be
ignoring, but you're ignoring them and you're moving to the meta level. And so
you may be right that meta information and first order information are
compatible in many contexts, but there are other contexts in which they don't
seem so compatible.
Robin
The incompatibility you're talking about seems to be just you can't attend to
too many things at once. It's like if you were, you know, we had a video of an
hour-long talk where someone's trying to give an argument but then we watched,
you know, 12 five-minute versions of them in parallel on 12 different windows
on our screen, we wouldn't be able to understand the talk because we would be
getting them all at once. So yes, of course, you can't simultaneously attend
to the details of many things. That doesn't mean they're in conflict. You
could just attend to them one at a time.
Agnes:
I guess I think that the problem isn't that there are many things at the same
time. What we actually do is put up barriers for a reason. If a
student comes to my office and they're making an argument, a philosophical
argument, and if I, now if I am thinking about my laundry or something, I'm
not going to do a good job attending to their argument, that's being
distracted. But I think there was a different problem that arises if I'm
thinking about how they probably don't know very much because they're only a
first year student and so this is probably not going to be a very good
argument. That second sort of consideration, I'm supposed to like table it.
Like that is some evidence and I'm supposed to not consider it. Part of what
it is to engage... Well, part of what it is to engage respectfully with them
is to bracket it at least for the purposes of the conversation. And that
bracketing is just quite different from the way in which I'm going to bracket
something about my laundry.
Robin
Well, I mean, there's a sense in which you are bracketing the laundry during
your conversation with them, right? You're literally not thinking about the
laundry.
Agnes:
I agree. And that's what I was saying is that I don't think it's just a
question of distraction.
Robin
But so if there are multiple kinds of evidence, I think it would be fine for
you to say that when you're considering each kind of evidence, temporarily it
might be effective to set aside all the other kinds of evidence and focus on
that kind of
evidence. But that doesn't mean that you shouldn't later hear the other kinds
of evidence and then have a moment where you try to combine them all together.
Agnes:
Yeah. So I'm just not sure.
Robin
You had the conversation with the undergraduate in your office and they seem
persuasive. And now they walk out of your office and then you go home and you
think, well, how believable was this argument given the fact that they're an
undergraduate and I was kind of sleepy and maybe, you know, I'm especially
vulnerable on this topic or whatever. You would at some point include the
other meta information to try to make your overall judgment.
Agnes:
So I think that there's a question about what the end of the day is here. And
one way to think about it is that it's where you come to in conversation with
that student. You can later have opinions about where your opinions lie and
you can also very easily forget those opinions, right? So some thoughts can
flit through your head that evening when you say, ah, but this wasn't that.
But you might think, well, the reality is whatever conclusion you came to in
the conversation; you could then have enough of a realization that there's a
problem with it that you then want to talk it through with them again or
something. But it's not obvious to me that this thing that you do in your head
later is the real stage. And so if you have to bracket
the fact that they are young and inexperienced in the conversation, for me,
that's a very substantive form of bracketing. It means you're bracketing it
with respect to the conclusion that you draw.
Robin
Let's imagine you're a juror in a trial, and as usual, the prosecution
presents its evidence first, and then the defense presents its evidence next.
Say at the end of the prosecution's evidence, you seem to be leaning toward
the prosecution; they make sense to you. Then for the time you were listening
to the defense, you were bracketing the prosecution's evidence, and you now
find the defense to sound plausible. And then you are instructed to go to the
jury room and make a total judgment, no longer bracketing anything, trying to
integrate what you've heard. That's a common sort of responsibility in the
world. That is, a reporter will go interview a whole bunch of different
people, but then they will write an article that pulls together all the things
they've heard from people. Why wouldn't the best final judgment be the
integration of all the evidence you've heard? Why would you prioritize this
moment when you've been listening to one source and bracketing other sources and
seeing it there from their point of view? And at the end of that, you do see
it from their point of view at that moment, but why is that the answer?
Agnes:
So, I mean, I think that it's not obvious to me that you do need to bracket
the prosecution when you're listening to the defense. It very much makes sense
to me that, for instance, when you're listening to the defense, you're
listening for whether they respond to some of the claims that the prosecution
has made, right? You want to be listening for those things, and so you
wouldn't be better off bracketing it. It may be that you just have attentional
limits, but it's not the case that you're going to somehow fail to understand
the defense unless you bracket the prosecution. What I'm saying is that I
actually fail to understand the content of what you're saying if I keep
attending more and more to all these meta issues about what the correlations
are between-
Robin
What if the prosecution introduced those meta issues? You're claiming that by
listening to the defense, you must set aside
Agnes:
the limitations. That would really be a problem. If the prosecution
introduces, look, juries are notoriously fickle and this
Robin
particular defense attorney has repeatedly lied. The last 20 clients they have
brought up, you know, were all terribly guilty and they defended them.
Agnes:
Yeah. So, I mean, I'm quite curious. My guess would be that that is not
permissible in a court of law. That is, that's going to be bracketed by the
judge for a good reason, that this is not a good deliberative process.
Robin
But should that evidence never be brought up? So- You could bracket it while
you're listening to the defense. Yeah, so- the claim is that you should form
your final judgment at the end of the defense and not consider the other meta
evidence?
Agnes:
I think it's just that I actually really don't know how to consider that
evidence. And so the point isn't that if you take in all the evidence, then
there's no argument for morality. The point is that all of the evidence
includes some of the first order evidence, and that's the argument for
morality, but the higher order evidence undermines some of that. And if you
just looked at the higher order evidence, you wouldn't have any moral claims
at all.
Robin
But if you look at both, shouldn't you try to look at all of it and aggregate?
Agnes:
Yeah, I mean, so I think it's just maybe here, like the issue is there are
many contexts in which the invocation of the higher order evidence simply
confuses things. That is, that often is the case. It's typically the case in
conversation. But there may be other contexts where that's a good thing to do.
And then maybe in those contexts, there's some way to add it all together. I
just have like little enough experience of that, that I kind of just don't
know what you're talking about. It's not that I'm somehow saying a priori it's
impossible. Like I've never experienced it.
Robin
So let me just tell you a fact about rules of evidence. It's called naked
statistical evidence. Let's imagine there's a concert, and you know a thousand
tickets were sold, and there's a fence, and then a big crowd around the fence
broke down the fence. And now 4,000 people came onto the field, and there were
4,000 people in the field listening to the concert. Only a thousand of them
bought a ticket. Now, if you had a picture of one person in that crowd and you
brought it before a judge and you said, this person hasn't shown us a ticket,
and therefore there's a three-quarters chance they didn't buy a ticket, that
would be called naked statistical evidence, and that's not allowed. But if
there was a security guard who saw somebody sneak past, but they weren't very
sure, and they said, I think there's a 75% chance that that's the person I saw
sneak by, that would be accepted as evidence in the trial.
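The statistician's point Robin goes on to make can be sketched in a few lines: in this hypothetical, the base rate and the eyewitness report both amount to 75% evidence of gatecrashing, even though the law admits only the second. The numbers are the transcript's own concert case.

```python
# The concert case: 4,000 people on the field, only 1,000 tickets sold.
attendees = 4000
ticket_holders = 1000

# "Naked" statistical evidence: the base-rate chance that a randomly
# pictured attendee has no ticket.
p_gatecrasher_base_rate = (attendees - ticket_holders) / attendees

# Eyewitness evidence: the guard is 75% sure this is the person
# he saw sneak by.
p_gatecrasher_eyewitness = 0.75

# Both move your degree of belief to the same place.
print(p_gatecrasher_base_rate)   # 0.75
print(p_gatecrasher_eyewitness)  # 0.75
```

The legal asymmetry between the two is exactly what Agnes defends and Robin questions in the exchange that follows.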
Agnes:
So... that makes total sense to me. This is just the kind of system I would
construct. Yeah, the naked statistical evidence is just... it's like a view
from nowhere, whereas at least you've got the security guard, and he saw
something, and he's not sure of it. But that's very different from-
Robin
I think most statisticians do not accept this distinction. Most statisticians,
I'm pretty confident, say these are both completely legitimate forms of
evidence that should in fact change your degree of belief that this person was
guilty.
Agnes:
I think the first bias that you need to fight against, before you fight the
evidence order one, is the anti-statistical bias or something. You should
describe that one, where people like me and the people who constructed the
judicial system think there is some really significant difference between
these 2 cases, and you and the statisticians are like, no, there's no
difference at all, it's a total illusion that there's a difference. And then-
Robin
We're not saying there's no difference; we're saying they're both relevant,
they're both strong evidence for conclusions. We're not saying they are
identical in form, but they are both effective.
Agnes:
But you think it's a mistake for us to rule out the naked statistical
evidence, right? Yes. Okay. And so, I'm sure that even the people who want it
to be ruled out would not agree to the claim that we should never consider it.
For instance, I'm sure they're like, it's fine for the statisticians to think
about it. They're not asking for censorship of those facts.
Robin
It's not in a court of
Agnes:
law. It doesn't belong in the court of law. And so you need to come up with
the more concrete claim that I and the people who make courts of law hold,
which is that the difference is relevant to a certain kind of-
Robin
A related thing is that you're typically not allowed to introduce evidence of
the prior convictions of the accused.
Agnes:
Right.
Robin
Okay, which is also somewhat meta evidence.
Agnes:
Right.
Robin
And you know, my argument would be, well, you want to convict someone or not
based on all the evidence you have to get the most accurate estimates so you
have the fewest mistakes. You don't want to convict innocent people. You don't
want to let guilty people go. And so if that's your lodestar, if that's the
point, then you want to include all relevant evidence. And the reason for not
including some evidence would just be when the process of including it would
induce some sort of gaming or nefarious activity that would then pollute other
sorts of things.
Agnes:
Yeah. So, I think that we should... Maybe the thing to do would be to say that
rationality can govern 2 different kinds of activities, thinking and talking.
And it may be that it's going to govern those activities in different ways.
And I tend to be relatively uninterested in thinking. I think it plays a
relatively small part in human life.
Robin
Let's set that aside. I mean, the court is about doing.
Agnes:
We might need 3 then. But the point is that I think actually quite often in
conversations, it's poor conversational hygiene to point out, for instance,
that the person's been wrong in the past. If you said to me, hey, remember the
last time we tried this conversation and you failed horribly? It wouldn't just
be rude, it might be poor conversational hygiene. And so it may be that what
happens in a court of law is we're trying to have a rational conversation, and
that's not the same thing as trying to engage in a rational thought process.
The question of what should be our model for action, is that a third question?
And so the point is that, like, you know, you're going to have to convince me
that it can be appropriate in a conversation for me to introduce all these
facts about who is usually right or wrong, that that can be the right way to
talk to someone. Or at least that's what I think I'm fundamentally resisting
in resisting this sort of evidence.
Robin
So we're about out of time, and we're introducing sort of whole new topics
here to explore that we'll have to do in future episodes. I would just
summarize my position as a very common position, which is just that there's a
standard integration of doing, thinking, and talking: you have a set of
beliefs in your head at any one time, you have a process of updating them on
information, and you take action through decision theory based on your beliefs
and your preferences. And talking is a process of collecting information,
where you calculate which things you might say would induce the most
information, so that you can update your beliefs the most, to have more
accurate beliefs, to take better actions. But that's a standard point of view
that it seems that you disagree with. And we will in the future figure out
what you dislike about that standard point of view.
Agnes:
I will just say one final word, which is that there's a book called The
Elephant in the Brain where the authors disagree with that point of view on
talking. They claim that we're not actually trying to get information out of
each other, even when we seem to be; we're doing another thing. So.
Robin
Maybe we should come back to that sometime. All right. Nice talking.