Good evening, Agnes.
Hi, Robin. So I suggested that we continue our conversation about
disagreement. Here's how I summarized our positions, your position, my
position, and Yudkowsky's position on Twitter. I think you disagree with this
somewhere, but I'm going to say it. So, Yudkowsky's book has two parts. The
first part is the system is broken. And then the second part is, so you should
feel free to think for yourself, form your own assessments, reason from the
inside, rather than relying on the system. You think the system is broken, but
that doesn't mean you get to think for yourself, form your own assessments,
et cetera. I think the system is probably not very broken. But you
can still just think for yourself – a kind of free-for-all approach, OK? So
that's how I describe our three approaches to the phenomenon of disagreement.
Like, I'm a permissivist about disagreement, so I don't think we have a terrible
epistemic situation going on, generally. So that's how I'd put it – but
tell me what I'm getting wrong, if that's not a fair way to summarize our positions.
OK. So, one way I think Eliezer even said it, is that you should feel free
to have your own values, to decide what you prefer, but you shouldn't feel free
to believe anything you want. That is, the epistemic is more constrained than
values. That is the whole idea of epistemology, is the idea that not all
beliefs are equally valid or appropriate. And the whole point is to try to
figure out what are appropriate, better beliefs. And so, you would seem to be
rejecting all of epistemology, if you say, well, any beliefs are fine, you
don't care and you see no reason to be more restrictive of some versus others.
Surely, you don't mean that, literally. So, what do you mean?
Oh, I mean, I think you should try to believe what's true, right?
But the... I mean, I'm surprised that you're going to be such a permissivist
about values. It seems to me like if I have to imagine two people, and one of
them gets to have values that are totally out of whack, and the other one has
like beliefs that are totally out of whack, and like, which am I more worried
about? I think I'd be worried about the values one. I think I'd be more
inclined to police that guy than to police the beliefs guy. But, it might
depend on the details of the case. The guy with good values but, like, a very
skewed belief structure – that could lead him to do a lot of harm, right?
I mean, I think it's usually coming from, say, a decision theory context,
wherein we have a lot of ways to constrain beliefs, but we don't have very
many ways to constrain values. So, at least when we– we at least do what we
can with the part we can. So statistics, for example, and Bayesian decision
theory, in general, like, there's a lot of rich structure that people worked
on for a long time about ways to constrain beliefs. And people have also
attempted to find ways to constrain values; it's just that most people think we've
achieved a lot less on finding ways to have shared agreement on how to
constrain values, while we've found a lot of ways to agree on how to constrain beliefs.
But well, my thought wasn't that like, you should form your beliefs without
any regard for the truth. Because I don't think you can do that. I think even
if I permitted people to do it, it would just be impossible. Like, if I tell
you, "Go ahead, feel free to believe that my shirt is red or something" like
you can't do it, right? So... but the thought is, like, there's a certain
thing where you might or might not have the temerity to do it, right?
And that's the thing that, you know, Yudkowsky brings out at the end of his
book. There's this like modesty approach that you might have with respect to
certain kinds of received claims. Right? And you're supposed to be, with
respect to some claims, modest and not try to think it through for yourself
and just accept the received wisdom, in some sense of the phrase
received wisdom. And often, for him, this goes with taking the external point of view.
So like, if every single time you've said you were going to do the referee
report by the right date, but you've always gotten it in a week later, like,
you should believe you're going to get it in a week later, even though this
time you're like, "No, I'm going to do it." right? And so this idea of like,
"Look, you're not allowed to just try to figure it out yourself and to
deliberate about when you're going to turn in this referee report. You should
just accept what the pattern of your behavior dictates."
So, the question is like, is there ever a case where you're not allowed to
think it through for yourself, or where it would be irrational to do so?
So it sounds like we've accepted now tentatively, the general idea that there
are ways to constrain beliefs.
Oh, of course.
And so then it– the rationality of disagreement literature has been about
whether there are additional things to say about disagreement, other than the
things people usually say while ignoring disagreement. So, outside of the
context of disagreement, we have a sense of how empirical you should be. So
you're just talking about, you know, if you have a steady trend, how closely
your beliefs should track that, or how much they can deviate. Because of course, you could say,
"Well, sure, the last five times I missed it, but this time I have this whole
new plan." And, you know, the question is, how plausible that could be. And then, so, we have
statistics, which really is a large field of formal analysis of how
beliefs should be constrained by data and priors. And argumentation, of
course – in court, or in academic journals, or in conferences, we have a lot of
norms about what kinds of arguments are appropriate and acceptable, the
kinds of arguments that should sort of force you to conclusions if their
structure is met. So we have all this rich literature. And now, the question
is just, can we add additional elements related to disagreement or not? That's
the whole issue about rationality of disagreement.
Right. And let me reiterate something from our last conversation, which is
that when I use the word disagree, I mean, like, actually disagree in the
process of like, having a conversation where you say words to the person,
like, "I disagree with you." So like, when I say, "You can disagree anytime,"
that's what I mean. I mean, like, just as far as rationality goes, it may be
socially unacceptable, right? But just as far as rationality goes, as long as
you're willing to pursue the disagreement, right? Even like, if you want to
disagree that my shirt is red – you want to argue that my shirt actually is
red – I think actually you're free to disagree about that, if you're willing to
pursue the disagreement.
At the moment, I am going to disagree with that.
You think my shirt is red.
No, it's not red.
Oh, it’s not… right. OK. I meant you would disagree with the fact that my
shirt is not red. OK.
So... but maybe, could we look at this by thinking about something from your–
I just want to make one distinction before you get there, right? So, you just–
so, a lot of the rationality, almost all of the rationality of disagreement
literature that I've seen, is expressed in terms of beliefs. It's about what
beliefs it's appropriate to have in the context of differences of opinion. So
the literature really is literally about differences of opinion, and what that
implies about beliefs. So the word disagreement, of course, has more
connotations than that. And so we just need to be careful here, which topic
we're talking about.
Are we talking about just any sort of confrontation or argument? And are we
talking about sort of any sort of purpose of that? So, I would be happy to say
that it's often rhetorically useful to take something someone said and say,
“Let me try to disagree with that.” And I don't necessarily have to not
believe it. It can be just a good conversational strategy to sort of explore
an objection, and see how far it can go and whether it'll work. And we don't
necessarily need a difference of opinion to make that useful and acceptable.
And then we may not even notice that we have a difference of opinion
there. Again, we could produce this for purposes other than producing a final
opinion. We can produce it for practicing, and seeing how the argument
structure works and whether or not we can work together as a discussion team.
And there's lots of reasons we can have a discussion, other than sort of
particular beliefs we’re trying to achieve. And again, we can have other
prompts to our beliefs other than a difference of opinion. But the rationality
of disagreement literature is about a difference of opinion prompting beliefs.
It's about when it's OK to have different beliefs, once you've noticed a
difference of opinions.
So, I think that maybe some of that literature is unambiguously about that. I
think the most interesting feature of it is a way in which there's a
systematic– at least the most interesting feature, the way you talk about it
is that you systematically go back and forth between whether you're talking
about disagreement in my sense or difference in opinion. And that's why I want
to talk about your blog post because you do this in your blog post.
So, you start out with this example. So this is – this blog was just called On
Disagreement, Again, right? And you want to revisit the question of like, what
do you do in the face of a persistent disagreement with your epistemic peer,
right? But you start out by saying, imagine two people, right, and they have
time to exchange information about a topic and to argue back and forth about a
topic. And then– but even as they go back and forth like even after a long
time, they don't converge on the same view, right? And then, so at the end,
suppose you and I are doing this, at the end, you think P, and I think not P.
And you want to know, like, are you permitted to walk away with P, right,
given that I think not P and I'm your epistemic peer.
Permitted isn't quite the right word.
What we might mean, more, is: what conclusions can we draw about our
rationality? Shall we…
Or you want to call yourself rational? Is that it?
Or, can we call both of us rational? Or can both of us be mutually
aware of our rationality? So, the conclusion I try to draw is to say, well, at
least one of us is not being rational here. And that's an interesting
conclusion to be able to draw from a situation. It's not quite the same as
saying you're not permitted to do it. You might be permitted to be irrational,
for example, as we recently discussed on Twitter, actually, in some degree.
And it might be that they're irrational, not you, and you would be permitted
to respond to their irrationality in the way you're doing. But there's
certainly a problem there – it’s problematized. There's an issue to be addressed.
Right. And so, what I wanted to say is that, to me, there's something very
strange about asking the question about our relative rationality, and even
over like, for instance, whether we have, at the end of the day, reason to
prefer our conclusion to the other person's conclusion. That all of that is
like, it's like, what do I think about the disagreement when I'm not in it?
So, you took the example of saying, I'm going to make this deadline, and like,
several times in a row, you failed. So, in some sense, that's analogous,
but it's a simpler case to understand, right? So if you have two people, one
of which says, "I'll make the deadline." and then five times in a row, they
don't make the deadline, and they're a week late. And the other person says,
the sixth time, "No, I'm not going to believe that you're going to make the
deadline. Because look at this track record. This track record suggests that
you're broken somehow; your process is not right. It's not working. You need
to reconsider your process."
And that would be a point at which, if this person who has failed five times
in a row, still insists that they're quite confident they're going to make it the
next time, then the other person would be justified to some degree in thinking
that that first person is not being fully rational. So that's a more concrete
example of what you might conclude more generally from a disagreement. But I
presume you grant that, in that context, that could be a reasonable conclusion.
I think that we might, like, have reason to judge people as irrational. What
I'm wondering about is whether the right way to do that, in relation to a
disagreement is to do something like, that steps outside the disagreement and
diagnoses it. Instead of like, suppose I think you have a bias or you're being
irrational in this argument, right? Shouldn't I point that out to you and say
to you, "Wait, you're being irrational."
So, in the case of the schedule being, you know, the deadline missed five
times in a row.
You are abstracting from the details of that by saying, "The mere fact that
you missed five times in a row, it seems to me sufficient to conclude
something's wrong. I don't need to go into each of the reasons you had each of
the five times and each of the things that went wrong. It is a claim that you
can abstract from those details." So, the rationality of disagreement, sort of
analysis is again a claim about abstracting from details. And then of course,
it may be wrong, but it's nothing intrinsically impossible about being able to
draw conclusions about a situation, abstracting from some of its details, or
even most of its details.
I mean, suppose that you couldn't conclude that the other person was
irrational. That is, you gave a set of...
Or, so, you conclude that one of the two of you is, for example.
So, it might be you, not them.
Right. And suppose you can't tell which of the two of you it is, then what?
That goes into all of your future conversations. So for example, if you have
conversations with 10 more people, and in each case one of the two of you is
irrational. And they each have conversations with 10 people, and for them,
it's never the case that there's a problem between them and anybody else, then of
course, it could be the problem is you. So with just a single pair, maybe you
can't pin it down, but with enough people... so this is a standard sort of
legal area, right?
If A complains about B, well, we don't know who's right. But what
if we collect a lot of complaints, and we've got 10 people complaining
about B? Right? And almost nobody's complaining about the people who are
complaining about B. We're going to sort of say, "Well, looks like it's B.
B is the problem here. Lots of people are complaining about B, not many
people complaining about many other people." Every time there's a complaint,
we can never be sure who's right, the person complaining or the person being
complained about. But we can look at the statistics about who complains about
whom, and often it's quite suggestive.
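Hanson's point about lopsided complaint statistics can be sketched in a few lines of code; the names and tallies below are invented purely for illustration:

```python
# Illustrative sketch of the complaint-statistics idea: from any single
# complaint you can't tell who's at fault, but lopsided counts across many
# pairs are suggestive. The complainer/target pairs here are made up.
from collections import Counter

complaints = [  # (complainer, target)
    ("A", "B"), ("C", "B"), ("D", "B"), ("E", "B"), ("F", "B"),
    ("B", "A"), ("C", "D"),
]

# Tally how often each person is the target of a complaint.
complained_about = Counter(target for _, target in complaints)
print(complained_about.most_common(1))  # B draws far more complaints than anyone else
```

With these invented numbers, B attracts five complaints while everyone else attracts at most one, which is the kind of asymmetry the conversation says is "quite suggestive" without proving any single complaint right.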
I mean, I think sometimes some people just bring out the irrationality in
others and also bring out...
Like some people bring out nasty behavior in others.
Oh, sure, right. And so, we might decide that B isn't the problem, because of
the things about B that bring it out in others. So, for example,
we might say, "Lots of people are insulting B, right? And it could be that B
is a dwarf, and somehow, being a dwarf induces other people to insult them,
right? And nobody insults anybody else – it's just B who's only ever
being insulted – then we might offer that theory.
But nevertheless, it's still informative, right? The fact that there's the
structure that everybody is insulting B and not other people. That's key data
we're going to take into account. So, in the rationality of disagreement case,
again, the fact that one of the two of us – you or I – is irrational is still,
you know, data to think about.
So I guess the thing that's confusing me about this is like, as I understood
it from your original thing about the rationality of disagreement, it's about
the difference of opinion. And it's about whether or not you're allowed to
walk out of that conversation with the opinion that you walk out of it with.
And so the question is not like, do I call myself rational? Do I call myself
irrational? Neither of which has obvious immediate bearing on the opinion,
right? But, like, do I now have to extra downgrade this opinion
a little bit? Right? Given that we ended up here, do I now need to
make a further adjustment, due to the fact that the conversation ended at that
point? That's how I took the question.
So, under standard expected utility theory, there's two possibilities here,
right? One is that you're completely innocent, and it's the other person fully
at fault. And another possibility is that you are substantially in part at
fault. It could be they're also at fault but you could be part of it, right?
Now we can, you know, the standard thing to do is under each of these
hypotheses, ask what would be the appropriate response then, and then say,
"OK, now I'm in this position of uncertainty, I don't know which of these
situations I'm in." And then you do a decision theory analysis to figure out
the appropriate response, given that you're uncertain, about two
possibilities. Just like, it might rain when I go outside or it might not
rain, what should I do? Well, if I take an umbrella, and the chance of rain is
high enough, that will help. So you estimate the chance of rain, and then you
decide whether to take an umbrella outside, right? That's the usual thing.
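The umbrella reasoning Hanson describes is the standard expected-utility calculation; here is a minimal sketch, where the payoff numbers are illustrative assumptions, not anything stated in the conversation:

```python
# Minimal expected-utility sketch of the umbrella decision described above.
# Payoff values are invented for illustration.

def expected_utility(action, p_rain, payoffs):
    """Average the action's payoff over the rain / no-rain hypotheses."""
    return p_rain * payoffs[(action, "rain")] + (1 - p_rain) * payoffs[(action, "dry")]

payoffs = {
    ("umbrella", "rain"): 5,       # stayed dry, minor hassle
    ("umbrella", "dry"): -1,       # carried it for nothing
    ("no umbrella", "rain"): -10,  # got soaked
    ("no umbrella", "dry"): 0,
}

p_rain = 0.3  # your estimated chance of rain
best = max(["umbrella", "no umbrella"],
           key=lambda a: expected_utility(a, p_rain, payoffs))
print(best)  # with these numbers, taking the umbrella wins
```

The same template – enumerate hypotheses, estimate their probabilities, pick the action with the highest probability-weighted payoff – is what the next step applies to the "is it me or is it them who's irrational?" question.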
So in this case, we'd say, well, if you're really confident that it's them,
not you, well then, you know, stick with what you've got, you're fine. If
you think there's a substantial chance that it's you, in addition to perhaps
them, now, you have a reason to question the opinion you just formed. It was
polluted or tainted by your irrationality here. And in particular, your
irrationality is your being unwilling to listen to them enough. That is, it's
not just a generic kind of irrationality, it's an irrationality with a sign that
says, "Look, you are not listening enough to them. They know things you don't,
and you should have been taking their opinions as evidence of things they know
that you don't, and changing your opinions in response to the things you heard
from them. That's what rational people do with each other." And therefore your
failure to do that would be making you not listen enough, so listen some more.
So, you do a weighted average of, you know, not listening at all – not
changing your mind at all, and maybe taking more into account that it might be
you, not them.
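The "weighted average" move can be sketched as follows; the linear blending rule and the numbers are illustrative assumptions, not a formula Hanson states here:

```python
# Hedged sketch: adjust your post-conversation opinion by the chance that
# you, rather than the other person, are the one failing to listen.
# The linear blend is one simple way to cash out "weighted average".

def adjusted_opinion(my_opinion, their_opinion, p_im_irrational):
    """If it's probably them, keep your opinion; if it might be you,
    move proportionally toward theirs."""
    return (1 - p_im_irrational) * my_opinion + p_im_irrational * their_opinion

# e.g. I think P is 80% likely, they think 30%, and I give a 25% chance
# that I'm the one not listening enough:
print(round(adjusted_opinion(0.8, 0.3, 0.25), 3))  # 0.675
```

At `p_im_irrational = 0` this reduces to not changing your mind at all, and at `1` to full deference, matching the two endpoints of the averaging Hanson describes.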
To me, the oddness of all of this reasoning is that it happens as a mop-up
operation after the conversation. Like, why aren't you thinking, as you're having
the conversation, "I should listen more," and listening more?
Of course. Right now... Yes, of course, that is the recommendation. But again,
we're trying to simplify the discussion. So for the hypothetical, in order to
make the argument clearest, we're going to present the most striking
hypothetical. But of course, the more practical version is: yes, as you go
along, you ask yourself, am I listening enough? Are we foreseeing
disagreement? So, as I–
I don't even think you ask yourself. Like, why does... so there's this
solipsistic process that seems to be running on a track while you're having a
conversation. Right? As though really, the real conversation is happening in
your head, as you process what's happening. You're like, what should I think?
How should I update given this conversation that's going on? And like the way
I see it is, no, the actual disagreement is the actual conversation. That's
where you're figuring out how to adjust your views. And there is no real
second order process that's going on that has any real like bearing for where
you end up.
Let's just talk text and subtext. I mean, so we can have a conversation where
there's the literal text, and we say one thing and say another. And then as we
know, there's typically in a conversation, a great deal of subtext. And what
we're talking about now is the subtext that you might be reflecting on, but
maybe not make explicit. So a part of the subtext is, should I be talking to
this person at all? What should we be talking about? Which sort of things
could I trust them more or find them informative about? Are they listening to
me? Am I listening to them? Am I being gracious? Am I insulting them? Am I
making them mad by violating kind of norms? These are just things that we're
always considering in any conversation in the background. So this is just
intended to be adding or sort of identifying some of those considerations that
were always in the background of any conversation.
OK, but I guess I just tend to think that facts about the rationality of
disagreement are fundamentally not located in the subtext. They're
fundamentally located at the level of text. And in the example that you give
of these two, you know, ideal disagreers, like, I wasn't imagining them
having all this complex stuff happening at the level of subtext, where, like,
"Well, maybe he didn't even mean the thing he said. I have to interpret it
as meaning this other thing." And like, if the idea of ra– you know, if the
rationality process is like doing all this, like correction for the secret
meanings, or something, that's a very different picture than the one that I
had, which is simple. Like, we disagree, and we say, we think and we're
trading information, and...
I'm going to praise you here and say that you and perhaps I also are good in
conversation at listening and engaging other people. But you and I often, or
at least sometimes, encounter people where we have a conversation with
them, and that we relatively quickly, in the process of the conversation come
to realize they're not listening to us very much. They're not even, we make a
rebuttal and they just repeat what they said, or they find a way to rephrase
it. Or they're being very defensive. And so, not really responding to an
objection. That's a thing that happens in conversations. Sure, it happens with
you and with others. So, we often need to judge in a conversation whether the
other person is even there in the sense of listening and engaging us.
Or just, you know. So obviously, we can see in many political debates, the two
sides are not listening, or certainly in many interviews, you see, the
reporter asked a question, and the other person just answers a different
question. They didn't want to answer that question and so they don't. And the
reporters usually let them get away with that.
Sure, but none of that needs subtext. I often think you're not listening
to me. And what I do when I think you're not listening is I repeat myself.
Because often it turns out that what I think of as you're not listening is that
I haven't really made the point clear, right? So, like, this is a holdover from
our last conversation. I think that there's a sleight of hand happening
here, where people say they're talking about differences of opinion, but
they're actually talking about disagreement. But the way they squeeze those
things into one is they imagine a disagreement, and then they cut it off. And
then they're like, now let's talk about the difference of opinion. And I think
something fishy is going on there. Right? And I keep trying to say that, and
you keep saying like, "Well, we're talking about disagreement. We're talking
about difference of opinion." So there I'm like, I'm saying, "Hey, listen to
me, you're not listening to the thing I'm saying, right?" But those "listen
to me" things can also just be part of the text.
So it's probably true that with nearly rational people who might sometimes go
into an irrational mode, that when each one of them saw a cue that the other
person might be falling into that mode, they might raise that issue explicitly
in the conversation and warn them often, and help them avoid that mode. And
that would be a healthy conversation of somewhat rational people.
Obviously, if they are both fully confident they were always going to be fully
rational, there wouldn't be a need for that. But if the point is that there
are these irrational modes that people can get in and issues that come up, and
that somehow that makes it harder for them to have their discussion, then yes,
it would be helpful if they could raise it explicitly and deal with it. But
nevertheless, that won't happen unless you're aware that that's a possibility
and thinking about it in your head before you raise it explicitly. So, just
like with all of these other kinds of subtext, yes, the healthiest way to deal
with many subtext issues is to make them explicit at the proper time.
But that won't happen unless you're thinking about them before you make them explicit.
That doesn't seem true at all to me. Like, those things I said a minute ago –
I didn't think them in my head at all beforehand. I was just talking and thinking
at the same time. I mean, maybe you think I thought it really fast right
before, but that wasn't my experience of it, because I was listening to you
right up until the moment when I started talking.
So, I mean, again, stand back and look at the dramatic prediction that's made
here. So, you know, part of what's going on here is to see that we have
this experience having conversations and debates with people all the time, and
we know roughly how that goes. And then we have this abstract analysis of what
rational debate should be like. And then we will see that it's quite different
in a strikingly dramatic way. And what's driving this discussion is the need
to come to terms with this difference.
So there's, of course, several possible resolutions. One resolution is that
this abstract theory is just wrong. It's missing something important. And once
you include the important thing, then this difference will go away. And that
was certainly, you know, people spent many decades trying to do that. And I
would say they failed. So I would say the result of that literature is, in the
end this difference is robust and real, which means you need to pay more
attention to it.
And so, as I said before, one way to express this theoretical result is
to say, if we're alternating expressing opinions on a topic, say, the chance
of rain tomorrow or something, then at each moment, we shouldn't be able to
anticipate how the next expression of opinion by either person differs from
the one we're saying right now. That is, it's going to be different, but we
can't figure out in which direction on average, or the expected value of that
future opinion must be equal to the current opinion.
So that's– we've long known that that's a feature of each person's individual
beliefs. That is, for any one person, you're not supposed to be able to
predict how your future opinion will differ from your current opinion. And
say, the stock market is a standard example: the current stock price
predicts tomorrow's stock price, and you can't do better than today's stock
price to predict tomorrow's. And then the point is to say, in an
alternating conversation where people are going back and forth, that's still
supposed to be true: each person can't predict in which direction the other
person's opinion will differ.
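The martingale property Hanson describes – that a rational agent's current opinion equals the expected value of its next opinion – can be checked numerically for a simple Bayesian updater; the coin-flip setup is an invented illustration, not an example from the conversation:

```python
# Hedged sketch of the martingale property described above: for a Bayesian,
# today's opinion is the expected value of tomorrow's opinion, so you can't
# predict the direction of your own (or a rational partner's) next update.

def posterior(prior, saw_heads, p_heads_if_biased=0.8, p_heads_if_fair=0.5):
    """Update P(coin is biased toward heads) after one flip, by Bayes' rule."""
    like_b = p_heads_if_biased if saw_heads else 1 - p_heads_if_biased
    like_f = p_heads_if_fair if saw_heads else 1 - p_heads_if_fair
    return prior * like_b / (prior * like_b + (1 - prior) * like_f)

prior = 0.5  # current opinion that the coin is biased

# Average the next-step opinion over the flip outcome, weighted by the
# probability the current opinion assigns to each outcome:
p_heads = prior * 0.8 + (1 - prior) * 0.5
expected_next = p_heads * posterior(prior, True) + (1 - p_heads) * posterior(prior, False)
print(round(expected_next, 10))  # equals the current opinion, 0.5
```

The opinion will certainly move after the flip – up on heads, down on tails – but its probability-weighted average stays exactly at the current value, which is the sense in which you "can't figure out in which direction on average" it will go.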
So as we know, that's quite different from typical discussions, because often
you will take an initial stance, I will take an initial stance, and then we
will go back and forth several times, each of us apparently retaining our
initial stance. And therefore seeming to consistently be able to predict the
direction of the other person's opinion and the next thing they say. And so,
that's driving the self-examination to say, look, we know what discussion
looks like. It feels healthy. We do it all the time. Here's the theory about
what rational debate should look like, and it looks very different. What gives?
We have to decide: are conversations broken, or is the theory wrong?
So, one thing you said, one way that you described rational conversation, is
that we're alternating expressing opinions on a topic such as the probability
of rain. That sounds like a Martian conversation or something like that. That
doesn't at all describe this conversation.
No, no, no, what I meant was that conversations are alternating statements and–
We do alternate speaking. That one is true.
And so we have– each time somebody speaks, they either make a noise, or they
make a claim or ask a question, or something, right?
And then a great part of conversation is we each do a lot of subconscious
work, some of it conscious to fill in the blanks, right? We don't make
everyone be explicit about every little conversational move they make. They
have to make a move that's close enough to what we guessed so that we can fill
in the blanks and guess what they had in mind. And so, at every moment in
time, there's the thing that explicitly said, and then there's all the things
we each think is reasonable to infer from what they said about their current
opinion on the topic. And so that's what I'm referring to: that bundle of
inferred beliefs, which usually contains some rough estimate of what they think.
So for example, if I say, "George is the best candidate. He's young and he's
lively and he's full of energy." And you say, "George is terrible. Smith is
the best candidate." And then you say something good about Smith. And then I
go back and say, "Yeah, but Smith... Yeah, but you know, Smith is old, and
Smith has this corruption accusation, and George doesn't have those things."
So even though I don't say it, you probably infer that I'm still pro George,
right? So then you can infer my position on this, you know, parameter we're discussing.
Right. Right. So let me say why that makes tons of sense to me. That's exactly
how I would think that disagreements would go. Suppose you and I run into each
other, right? And we start talking, we start saying stuff and like, we're just
spouting random statements, and our conversation is super boring. But then we
hit on something where we disagree, right? You say something and like, I'm
like, "Hey, I think the opposite of that thing." OK.
First of all, now, it's like, it's getting interesting, right? Up until that
point, it wasn't that interesting. So we gravitate towards these things. Now,
why? Right? Well, it's like, suppose I think something, like, I think George
is the best candidate, right? Then like, it's like, deep down, I also know
that that's a blind spot. Every belief you have is a blind spot, right?
Because you think it's true. And you think you are going to be more inclined
towards evidence of its being true than evidence of its being false, and all
of that, right?
So like, here's an occasion for me to dig into this thing, right? But I'm not
going to let you– I'm going to fight you, right? I'm not going to let you just
persuade me that he's a bad candidate. Because if I did, in a way, I wouldn't
be making the best use of you. The best use I can make of you is to really
force you to give me your very best case against George being the best candidate.
And the way that I can do that is produce my best case. So you're going to say
he's the worst candidate, and I think he's the best candidate.
I'm going to produce my best case. I'm going to bring all my evidence forward.
I'm going to try to hold on, like, as hard as I can to my belief, while you're
questioning it, right, so that I extract from you the maximum of information
on this point. And so you see me holding on. That's what I should be doing,
it's what you'd expect me to do. That's how I could potentially learn from you.
So the point is, to say, these characters in the stories you described, they
look like familiar characters – they may even be characters I would empathize
with – they're just not the characters in the rationality story. And so, the thing to
come to terms with is, to what extent do we embrace or not the rationality
story? And if we reject it, what are we going to put in its place? So, as you
may know, we have these models of belief, which represent, say, having a
probability distribution over all states, which can be equivalent to a set of
claims with confidences. And that's a standard model of belief. And that, in
that space of that model, we have models of how they should update their
beliefs, on getting new information, including on hearing what somebody else
says. And those models just don't have this story you have just told about
them. So,
you're imagining that there are these creatures who have a belief and they see
that as like a bias, that is they accept that they are biased about that
belief. And they don't immediately correct that bias. They anticipate that a
certain kind of heated discussion will be the most likely to relieve them of
their bias, if it's possible to do that. And that's a feature of this kind of
creature with that kind of bias, and that kind of process by which they get
relieved of it. And these rationality models just do not have those creatures.
And that's not what those stories are about.
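The standard belief model Robin describes can be sketched as a toy Bayesian agent: beliefs are a probability distribution over states, updated by Bayes' rule on new information, including on hearing what somebody else says. This is a minimal illustration only; the states, prior, and likelihood numbers below are invented for the example, not taken from the conversation.

```python
# Minimal sketch of the "standard model of belief": a probability
# distribution over states, updated by Bayes' rule on new evidence.

def bayes_update(prior, likelihood):
    """Return the posterior P(state | evidence), given a prior P(state)
    and a likelihood P(evidence | state) for each state."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Prior belief about the candidate (illustrative numbers).
prior = {"George is best": 0.5, "George is not best": 0.5}

# Evidence: someone asserts George is not the best candidate.
# Likelihood of hearing that assertion under each state (assumed).
likelihood = {"George is best": 0.2, "George is not best": 0.8}

posterior = bayes_update(prior, likelihood)
print(posterior)  # confidence in "George is best" drops from 0.5 to 0.2
```

The point of the model is exactly what Robin says: an agent like this simply updates on the other person's statement; it has no notion of digging in and fighting to extract a best case.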
And I didn't want to say that they see it– I mean, you could say they see it
as a bias. In that sense, they could see every belief of theirs as a bias,
right?
But they are anticipating being irrationally tied to them, unwilling–
irrationally, you know, unwilling to listen to criticism– so they are
countering that with a context.
And that's just a way in which, yes, in general, if you have some sort of
bias, one way to fix it is internally, to sort of just fix it. And another way
is to put it in an outside context that might correct for it. So, let's
imagine a CEO with a company, and there's a project the CEO wants to do. But
inside the company, people just poo-poo it, and then they aren't very excited
about it.
And so the CEO has two different strategies. He can try to fix his corporate
culture, and get people to sort of be more honest and open about it. But
another standard thing a CEO does is he hires a management consulting firm to
come in and do a study. And they do a study and they favor this proposal he
likes and now that pressures the people in the company to listen to it more
carefully, because they might reject each other on it, but they won't reject
the management consulting firm, because it's McKinsey or somebody respectable.
So that's an example of how often when we have some sort of internal bias we
might correct for it with something external rather than fix it internally.
Right. And I think if you ask like, what is the use of another person, right?
There are a lot of different ways I can get information in the world, right?
But another human being has a distinctive – it's a distinctive kind of source
of information. Different from like, say slips of paper I might find or books
I might read, right? So... and like, you might think we would learn to use
people for the thing that they're best for, right? Whereas we learn to use
other sources for the other stuff, right? We go to people for what we can
only get from people.
And it seems to me that... so I think in some sense, every belief is a bias.
But it's also clear to all of us that some of our beliefs are more biases than
others. That is, we're very attached to some of the things that we think.
Right? And so, if you think about which are people's stickiest beliefs, right?
They're going to be ethical beliefs, they're going to be political beliefs,
right? Those are sticky, because there's an emotional attachment.
And so you'd predict, my theory, that people would get into fights over those
beliefs, right? Because those are the cases where this is the kind of help you
need. And it's precisely like, how else would you quest for objectivity, in a
case where in effect– I mean, correcting your own bias, like good luck, right?
You can't see it. It's a whole problem. It might get corrected, but how do you
systematically go about correcting when you don't see it as a bias? It seems
to me like that this is the kind of process that exactly the kind of process
you predict people would engage in.
So, let me just start right out by saying that I think this literature
persuades me that, in fact, people are not typically rational. And therefore,
we need to think in terms of what happens when people who are not rational
engage in various kinds of activities. So that's what the literature does for
me; that is, otherwise I might have given people much more benefit of the
doubt for being rational. And this is sort of the nail in the coffin that
says, "Nope,
they're just not." And therefore, we need to consider these other mechanisms
and approaches that could help us deal with that. But you're sort of giving
people the benefit of the doubt, as if I'm biased and I'm going to try to use
these mechanisms to correct for that, whereas they could, in fact, do the
opposite. They could be using mechanisms to reinforce them and to make the
biases all the stronger.
So, for example, say, we're in a community of gossip, and I don't like you,
and I want to accuse you of some terrible thing. I could shut down anybody who
starts to defend you as being presumptively evil, and therefore, we shouldn't
listen to them, we shouldn't even let them talk. And I could use my power
over people's biases to prevent critical debate and refutation in order to get
my way, to get everybody to seem to agree that you're terrible and should be
gotten rid of because look at this terrible thing I say you did that nobody's
really allowed to say otherwise. Right? So there's a lot of ways conversation
can go quite wrong as well as right. And we shouldn't just presume that debate
or, disagreement will produce this correction to the bias.
Yeah. So OK, let me– first this thing about rationality, which is like, I
don't think people just get the word rationality for free, just because they
came up with some system. And now that system doesn't model how we behave,
right? But we stick with the system or like, that's what's rational. That's
not persuasive to me.
And in particular, a theory of rationality that has the result that when I
have a persistent disagreement with someone, I should walk away and try to
figure out which of us is crazy and maybe they're crazy. Like, that actually
to me doesn't like, it doesn't sound maximally rational, like as I would have
pre-theoretically used the word rational. That is, like, for me, it would be
more like, you should walk away and think that there was something you didn't
settle with someone, right? And there could be all sorts of reasons. And maybe
if you opened up the conversation with them again, you could find those
reasons. But... so anyway, I'm like, if we're– if the best that theory can do
is to prescribe that we should, you know...
I'm going to save the theory. I think I can somewhat save the theory. I'm not
committed to doing so. But I mean, I think I can say that, let's imagine that
we do decide that it's often productive to have these conflicting arguments.
So, this is a theory, say, of courts. We say the court process goes better if
we assign a lawyer to each side, and their job is to each argue for their side.
And then we, the audience, will come to a more informed conclusion by hearing
this kind of fight.
Now, for this to work, it doesn't have to be true that any of these people
are irrational. That is, we assign each lawyer to defending their side, and
they could be completely rational about it, you know, maybe believing the
other side is probably right, but still, it's their job to defend their side
and they do
the best they can. And, we could see a debate where they each, the positions
they took were each quite predictable from what the other person said before.
But we aren't going to accept those stated positions as their actual beliefs.
We're going to say, "No, that's just the role they're taking." So, once we
disconnect the sort of statements and positions they're making in the process
of this dispute forum from their actual beliefs, then the theory about
rational beliefs no longer has a conflict with what they're saying.
So, I mean, I think one interesting presupposition in this sort of theory of
rationality that say, people in the ancient world would have disagreed with,
is– people like Plato and Aristotle, is the idea that there's such a thing as
being fully rational, but like not having a lot of knowledge. So there's this
sort of rational way of proceeding, given whatever informational state you're in.
Yeah, it's possible to be rational and ignorant.
Exactly. Right. And I think that would just not have been– there would
have been no word rational to which that corresponds, right? And it's, it's
not obvious that there is such a thing. That there's such a state as being
rational in the face of being very, very ignorant.
Well, there is.
And you might want to hypothesize or something, but like, it's a possible
view, right? That total rationality and total knowledge will be achieved
together.
I would embrace the point of view where you say, "Here I am in a world
that's complicated and I've got questions to answer. Please, someone tell me
what I should do next." And rationality is advice about what to do in whatever
situation you're in. So from that standard, rationality must answer every
possible situation you might be in and say, "What should I do here?" It's much
less useful if it only tells you what to do in a small fraction of the
situations you'll actually encounter. So practical rationality should be
exactly about the typical situations you'll find yourself in and what the
appropriate things to do there are.
Right, but if you're thinking about, like, if you're imagining somebody as
learning math, say, they're in the process of learning math, right? They don't
yet know the mathematical thing that they're trying to learn. But they're in
the process of learning. And now we ask, what is it for them to learn, and to
go through the process of learning it in a perfectly, completely rational way?
Right? In the face of the fact that they're ignorant of a bunch of it. And I
would say, "That's a weird question, because, of course, they have to make
mistakes; as they're learning, they're going to do it wrong somehow. And
that's part of the learning process. Is that rational?" Like...
So this is the great enterprise that formal social science, including
business and economics and statistics, and much of computer science, are
engaged in in the modern world: exactly to take situations where you might
not know very
much and try to tell you what to do in those situations. So yes, statistics,
is, even if I have a small dataset, what should I do with it? How can I...
what conclusions can I draw?
I'm not doubting that you could do that. I'm doubting it can be perfected,
right? So if... look, if I have a student who doesn't know much math, and I
want to help them learn math, like there is stuff I can say, and I can
improve it, and I can get them to a much better cognitive condition.
But it's also what I say to them, is going to depend a lot, right, on the
case. And so like, how you get more information given the information that you
have depends a lot on like, which problem you're trying to solve, right? Even
the kind – the way you're going to use statistics, right? It's like, it's
going to depend on, like, are you solving a problem over the next five
minutes? Are you solving like, a problem where you're going to take a decade?
Are you like...
Sure, but that's just true for all questions, including rationality. For every
question you can answer, the first question you might ask yourself is how
context dependent do I expect the answer to be? And sometimes you expect the
answer to be quite context dependent. And other times you might hope for
relatively general answers. And then it just depends on what you can find. So
it's certainly possible that there are relatively general things to say to
someone who's very ignorant. And it's also, you know, many contexts where the
thing you most want to say is pretty context dependent. But why is that
especially different from anything else?
I guess I just think, like, the... it's not that there isn't plenty of stuff
you could say. And maybe there's even stuff generically that you would say,
but there's something strange to me about, like breaking off the learning
process, which is what's happening when you break off the disagreements. You
were trying to learn something together and it failed, basically. And then
saying, OK, now, how do I learn from that? Right?
And I'm not even saying you could– there are probably a number of different
ways you could learn from that solo, right? The thought that like, the main
thing you should be doing is figuring out which of you is crazy seems wrong to
me. And it seems bad in a bunch of different ways, like bad for your future
interactions with that person. Because maybe they're not crazy, right? But
like, that's... that the main like... like if you were teaching someone math
and they only got to a certain point and they didn't go any further, the main
thing would be like, "Well, the next step would have been the next step inside
the teaching of the math, not like, what do we do given that they broke it off
at this point?"
Let's imagine we have a set of people who go into business together
periodically as in pairs, OK? There's a community of people, and they pair off
periodically. And for a while, maybe a few months or something, let's say they
are together in business as a pair. And then they have a choice about how long
to continue and sometimes they break it off. And sometimes they say some
things about what happened toward the end, and why they left.
And sometimes they might even say, "Well, you know, we had a joint banking
account and I came in on Monday and it was empty, and I think the other guy
took it." So you might accuse people of actual substantial norm violations in
this thing. And of course, we won't know who's right. From the outside, we see
this pair, each accuse the other one of stealing some money. But we might,
again, look at who turns up in these stories more often. And we would take
that into account substantially.
But of course, we also just have them breaking it off for reasons where they
aren't entirely sure who was wrong: it was a bad match, or they had just made
a wrong choice about the business project, or whatever. So, this is a
situation
where, still at the end of the project, when they break it off, they each
might ask themselves, "Well, how did that go? And did I do... was it my fault?
Was it their fault? What am I going to say to other people about why this went
wrong?" And this, of course, happens in romantic relationships as well. And
you might say, well, this also happens in conversations, really.
This is really helpful, because we've encountered the place where I believe in
modesty. That is, I want to say, "Who are you to judge?" Like, Oh, who's...
let me... now let me discuss... now let me assess who's at fault, because now
all of a sudden, I placed myself in the role of judge. And it's like, it
doesn't matter how rational you are. You can never know, I think that you're
rational enough that you get to– because if you were really that rational, if
you really understood this that well, whatever it is, you should have said it
to them in the conversation, right? And been like, "Hey, I think you have this
bias or whatever."
But maybe you did, and it still broke off as well. So again, imagine you have this
business relation, you argued about the business, and you argued about what to
do, but still, you came in on the Monday morning and the bank account was
empty, and they claim you stole it, and you said, "No, I didn't steal it."
So that's, you know, it's a real thing you would have to take into account,
right? Things can go that wrong. You wouldn't just say, "I don't know, maybe I
did steal it over the weekend."
I think that it's certainly true that like you– like once I was having an
argument with someone, and they were behaving really weird in the argument.
And then– and it was kind of upsetting. And then later, I learned that they
were drunk. And that made it all make a lot more sense.
And so, I think later you might learn, you might get like special information
or something, that– so, absolutely. But I think it's never right to go from
the disagreement to that. There's something very unsafe about that reasoning,
which is like, "Well, they kept disagreeing with me so maybe they're crazy.
Let me
judge. Let me be the objective assessor, which of the two of us is crazy."
Like, you should just think that you're probably not in a good position to
make that call. Like, if you weren't able to adjudicate the dispute between
the two of you, you probably can't all by yourself adjudicate this meta
dispute; you
shouldn't trust yourself to do that.
So I mean, that is perhaps the usual intuition. But the point is, you have to
use theory to try to figure out what can be inferred from situations. So...
It does fit my theory, because my theory says, the reason you got into this is
that you're worried that you have a kind of blind spot. We all know we have
them, by the way. And like, it's not actually clear how you get rid of them
just by taking in information.
OK. But again, let's take the business example. You've got two people in
business together, and there's an accounting data series; the accounting
shows you what revenue they got every day, and what expenses they had every
day, and how many hours each person came in every day. Right? And from that
detailed accounting of who did what, you might be able to attribute more
blame, depending on the details. But certainly, especially in the example
where, you know, everything makes sense until the last day when all of a
sudden the bank account gets emptied. You might say, that's pretty damning
evidence that somebody did something especially wrong. It wasn't just a bunch
of people who were trying their best to look– that looks like someone stole
the money. Right?
And so, the difference between that and other cases is a matter of degree,
depending on what our theory says about what we should expect to
happen. So for example, there's a standard way in which you could look at a
table of numbers and see if they've been made up, because the distribution
of the digits has to follow a certain standard distribution, and often when
people make up a bunch of numbers, they aren't clever enough to make them up
to follow that distribution. You can say, "Ah, you were making up all these
numbers, right?" So we look at this accounting record, and we find, "Oh,
these numbers are made up, because they deviate quite substantially from this
distribution. Now, we've got some more clear evidence that there's a big
problem here, right?"
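The "standard distribution" of digits Robin alludes to is presumably Benford's law: in many naturally occurring datasets the leading digit d appears with frequency log10(1 + 1/d), and fabricated figures often deviate from that. A rough illustrative check, where the function names, the deviation measure, and the example datasets are all my own assumptions:

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's law: P(leading digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

def leading_digit(x):
    # First nonzero digit of a positive number.
    return int(str(abs(x)).lstrip("0.")[0])

def benford_deviation(numbers):
    """Mean absolute gap between observed and expected leading-digit
    frequencies; larger values suggest the numbers may be made up."""
    counts = Counter(leading_digit(x) for x in numbers)
    n = len(numbers)
    return sum(abs(counts.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10)) / 9

# Amounts that grow multiplicatively tend to obey Benford's law...
organic = [1.05 ** k for k in range(1, 200)]
# ...while uniformly "invented" figures tend not to.
invented = list(range(100, 1000, 7))

print(benford_deviation(organic))   # small gap
print(benford_deviation(invented))  # noticeably larger gap
```

In practice a forensic accountant would use a proper statistical test rather than this crude gap measure, but the shape of the argument is the same: a regularity you didn't know about lets you extract a verdict from data that looked uninformative.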
Sure. I think it can certainly be the case that, in effect, you can come
across evidence that someone was cheating, extra-conversational evidence,
right? But like, I mean, I suppose you could think of them as like a
rhetorical trickster or something.
Right. So I mean, the point is, if you hadn't been told about this
distribution of the digits thing, you might look at these numbers and say,
"It's impossible to tell if these are true numbers or not." And then somebody
points it out, and you go, "Oh, I didn't realize that. OK, yes, I guess you
can tell if the numbers are made up, right?" Or at least if they hadn't tried
this trick, at least you can identify many cases where it's made up. And so,
the rationality of disagreement literature is saying, "Yeah, you didn't think
you could tell who was rational; how could you? It's so complicated. But look,
this theory is telling you something you didn't realize you could tell. Isn't
that great? We get some new information from a kind of analysis you didn't
even think was possible."
I mean, it... I'm not sure it fits because anyway, we've already said that our
disagreements don't fit the pattern that you would expect, right? We're not
seeing other people fundamentally as sources of information. We're actually
seeing them as occasions for disputes.
Right. So... go ahead.
There's something I want to say about something you said earlier, you know,
about how we have a lot of these kinds of arguments for... we might well have
set up these conflicts for bad reasons. Like to ruin someone's reputation or
whatever. And I think that that's right. That is, I think that the way that
I'm putting the question might seem to you sort of like I'm naive or
optimistic, but really what I'm doing is I'm just
saying, take the thing that we do. Imagine that it is an attempt at or a
version of something, what is the ideal version of that thing? Right? Like,
imagine we're doing something but we could do it better. Right?
Now, your disagreement literature tells me what it would be to do it better in
one way, right? We're like exchanging information back and forth and updating
all that. And I think, no, that's not the picture of how to improve this.
That's not what it would look like. If this were going really well, it
wouldn't look like that. It would look different. It would look like the thing
I've described, that is what I – the thing we do seems to me an incipient
version of a thing I'm describing. And that does happen sometimes, in some
cases.
It doesn't seem to me to be any version of this other process, where that
other process looks more to me like the way we generally exchange information
with our environment, but not like the way that we have interpersonal
conversations, which I think are optimized for what it is to interact with
another mind, in ways that make sense as what you would predict: that we
don't just treat other people as sources of information, but we treat them as
sources of argumentation.
So I think, what I'm about to say you will agree with, and I think I'm
summarizing an agreement, which is to say that we have sort of a simple
abstract theory of what it is to be a rational agent. And those rational
agents, the theory says, would have disagreements of a certain form, i.e. they
can't foresee their immediate disagreements, and that humans do foresee their
disagreements, at least in terms of the statements they make. And so, we draw
the conclusion that humans are substantially different from these rational
agents.
And then the question is, OK, but different, how, and for what purpose? And
you might think, oh, you know, this theory is neglecting some key
considerations such that the optimal way to handle a disagreement, even if you
had no other constraints would be to do it this other very different way. I
more plausibly think, no, we are actually broken in the sense that we humans
are just– we have other priorities than being accurate in our beliefs. We are
proud and we're social and we're loyal, et cetera.
But nevertheless, for these actual creatures that we are, that raises the
question: well, what kinds of dispute forums or contexts are, in fact, the
healthiest, that bring the best out of us? And that's a good design question.
Well, how should we – how should the best debates or disagreements go? But
there's also the question of, well, if we're not so inclined to do the best
and maybe we had malicious motives, then what would good institutions be that
would constrain those malicious motives and still produce decent disputes and
debates relative to the worst case that could happen if we just allow free
rein with all of our irrationality and malicious intentions.
And so it just opens the question. So your ideal refutation forum or style
might be the best in principle. We have yet to prove that, but we certainly
don't have a clear argument in favor of it either. And so we, you know, want
to continue the investigation.
So one, like, worry that I have, especially when I got to the end of your
blog post, and this thing where you're comparing conversations to one of a
pair of business partners, like, stealing money, it confirms the thing I was
worried about, which is that it's a very offensive way to approach
conversation: that, at the end, you're going to decide if the person's crazy.
And the problem with that kind of, like, offensiveness, I think, is that a
huge part of conversation functioning as what I see as the ideal thing that
it could be, right, not the ideal thing that creatures other than us could
have but the ideal thing that we could have, is that it requires a
fair amount of trust. And I think that, like when I talk to people about why
they don't engage in argument or refutation or whatever, everyone's always
telling me about how, like, people don't listen. And they won't listen to
you if you argue with them.
And everyone thinks this about everyone else, right? So we're in this bad
equilibrium, where everyone predicts that their attempts to argue with another
person are going to be received poorly. And in fact, because of those
predictions, when people do actually argue, like it is a reliable signal of
hostility and aggression, because of all these expectations, that like nobody
would listen to anybody else.
And it just seems to me like there's a good thing we could be doing with our
minds. It's our best hope for getting at the truth, given our
biases, and given our blind spots. And the other rational thing is not a hope
for us. It's just not on the table for us. But there is a thing we could be
doing, but we don't do it. And the reason we don't do it is that we slip very
easily into uncharitable and untrusting attitudes towards one another.
And so, if your thought is like, oh, the rational thing to do is just to
decide that most people are crazy, and they were stealing the money in the
conversation, being inclined toward that mode of thinking is going to push us
further away from having good conversations.
So, the argument you've just given is a classic argument in a much wider
context, about just trust in society in general, not just in conversations,
right?
So, many people have noted that the most productive, say rich, societies
often have high levels of trust. And then people have said, well, see, the
solution
to making society go better is just to have more trust. And the usual counter
argument is, well, what you want is trust to track trustworthiness. What you
really need is for people to be more trustworthy, and then for them to adapt
their trust to the trustworthiness, merely encouraging trust without
trustworthiness is a recipe for disaster, right?
So if you think about a high crime area, you could say, "Well, the problem in
the high crime area is there's lack of trust, people are afraid of being
mugged or being stolen from or cheated. If you just make sure everybody walks
through town, believing everything everybody tells them, well, that's not
going to make crime in that neighborhood go down, and that's going to make a
bunch of people get exploited, right?" What you need to do is find a way
simultaneously for trust and trustworthiness to increase. So, the trust has
to be honest, and so you have to honestly evaluate whether
people are actually trustworthy and adapt accordingly. So what you really want
people to do is to actually be better, good faith arguers. And then other
people can trust them more to be that and then we can all together, relax and
be more trusting once we are more trustworthy.
So I think that it's an interesting fact about conversation that you can't
really get hurt in the way you're describing, the way the person who's gotten
the money stolen gets hurt, right? So suppose you and I are having an
argument. And suppose
that at the end of the argument, like you're not persuaded [0:59:33]
[Indiscernible] my great arguments. And now, OK, I can walk away and I can be
like, "Was Robin crazy?" And I can engage in that thought, and then maybe I
conclude you are crazy, right?
Or, here's another thing I can do, not think about that, right? And now, maybe
you were crazy. Maybe you were malicious, maybe whatever, right? You either
persuaded me or you didn't, you either gave me good arguments or not. Right?
If you didn't give me good arguments and you didn't listen to my arguments,
you haven't like done anything bad to me. Right?
Let's take an easier case. So, in the context of conversations, people will
give you some evidence or offer an argument. And we might wonder how selective
they were with that. So, as we've discussed before, if you're talking with a
really smart, knowledgeable person, they might really be able to persuade you
against things that are not true, because they are very able to select the
most persuasive evidence out of a very large set that happens to lean in a
certain direction. And so, one of the things you're trying to judge is how
select– are they being honestly selective? Or are they being maliciously
selective for their conclusion? And so, that's a kind of distrust. It's not
about just overall disagreement, but it's a kind of distrust that you do have
to evaluate in the process of a debate to decide how much to be persuaded.
Absolutely. But I think it's really a different case from the case where
you've landed on a disagreement. So you have your view, right? And it's not
the other guy's view. This is what the rationality of
disagreement is about. And then the question is, OK, what are we supposed to
think about this? And my suggestion is, don't think anything about it. You're
done. You finished the argument. This is where it landed up. You don't have
the resource– unless you happen to have external resources, like the guy was
drunk, you don't have further resources for evaluating, don't set yourself up
as a judge. And there's nothing that they can do to you at that point.
Yes, I think you're right, that when somebody is giving you empirical
evidence, in some sense, that's a mixed case between your kind of Martian
conversation, where we're just exchanging information, right? And my kind of
argumentative conversation. There's a sense in which bringing in empirical
evidence is a little bit like treating yourself like you're just the world
throwing stuff up at you, right? And that creates the possibility for
manipulation. But that's, from my point of view, a non-ideal kind of
conversation. Ideal conversations would have little empirical content, right?
Because then you...
I think we'll have to disagree about that.
You're not making maximum use of them; empirical stuff, like an empirical
study, you can just get from the world, right? What you can get only from
another person is, like, them just disagreeing with you.
Most of the way you get empirical stuff is from other people. We don't mostly
get empirical stuff ourselves, we get it indirectly from other sources. Even–
But you don't need to have that in an argument, you could just have that in
like a lecture or a book or whatever.
No, no, no. I think empirical– and empirical stuff and arguments are of a
similar form, like somebody could offer a clever argument, but they didn't
necessarily get it themselves. They got it from somebody else too.
But that doesn't matter. That's no problem. You can examine the argument.
No, it's true. It's like...
You don't care where it came from.
No, but often, you can't fully examine the clever argument; it's too much
work. So you have to decide whether an apparently clever argument is really
clever, is fully trustworthy. And so, yes, that's often...
But that's once again, I think, where you're imagining that the decision is
like this other thing, like, now I step back inside the forum of my mind and
let me make a decision. It's like, you're deciding that in the conversation,
and you're like, "Well, that sounds like a good argument. But what about
this?" Right? That is, the process of deciding what to think just happens in
the conversation.
No, I think people often bring arguments in that people in the discussion
just don't have sufficient time and attention to fully evaluate.
And they have to decide, you know, somebody says, Well, this was in this
journal, for example, or this is a consensus for this field.
Sure. That's similar to the empirical evidence thing, right? It's like,
there's something outside the conversation that's being leaned on. And if you...
Yeah, but these are like the majority of most conversations I ever hear. So
you can't exclude these things. Yeah, some...
I'm not excluding them. What I'm saying is that the rationality of
disagreement problem was not a problem of: someone has said something, and
you don't know where that's coming from, whether or not you trust them, or
whatever. That wasn't part of the problem. The problem was that you've taken
in the information, you've gone back and forth, you've landed on a view,
right? And now, they have a different view. And that's the situation. I agree
with you that there's a different, a whole giant problem about testimony. And
that's what you're raising, right? When another person is treated as having a
kind of testimonial authority, when do you trust them or not? I don't think
that's fundamental to questions about the rationality of disagreement. So I
just think those are separate.
No, but the point is, there are these doubts you can have that are brought up
by concrete specifics. And the rationality of disagreement is just another
kind of doubt brought up by concrete specifics. It's a matter of how strong
it is and how often it's relevant. But it seems to be the same sort of
general...
I think it's just a different doubt in that, like, there's a solution. I've
given you my solution to the problem of the rationality of disagreement; it
doesn't work for testimony. So we need a different solution for testimony. I
don't know what the solution is there. But it works for the disagreement
case. Namely: don't do anything. The whole disagreement literature is about
what you do next, after you've gotten into this intractable division with
someone else. And the answer is nothing. There's nothing to be done. Because,
I mean, continue the conversation if you can, but it's ex hypothesi that you
can't. That's it.
So, let's imagine that you thought of yourself as fair minded on a subject and
equally willing to bring up positive or negative arguments. And then somebody
points out that, in fact, you're actually pretty consistently defending one
view, and not, like, looking at the counterarguments. So once somebody points
that out to you, that revises your opinion of yourself. And then you might
realize that you are leaning in this direction and take that into account when
trying to make a judgment.
That's an example of showing you data that suggests that you are a bit
biased. And it's often informative just to see some data on bias. But the
whole point is to say: disagreement is such an indication. The fact of a
disagreement is an indication of a certain kind of bias. And just like many
other indications of bias, if you decide, "Let's just ignore it, pretend it
doesn't exist," you know, you're not taking into account all the information
you...
And I think disagreement is not an indication of a bias. The whole reason why
the disagreement came into being, and now I'm using "disagreement" my way,
namely the argument, was because, of course, the parties know that they have
a bias. And that's why we want to have a disagreement. So yes, it's an
indication that the bias issue was not resolved in the conversation.
I think in the way most people use the word bias, most people entering into a
conversation don't know that they are biased in that way.
Well, yeah, they don't know in what particular way, right. But... why would
you talk anyway if you didn't think you were biased?
They don't even believe it in general, right? The fact that someone starts a
conversation with somebody else who seems to disagree with them does not say,
to most people, "Oh, you're biased about this." That's just not how most
people frame it.
I think that's exactly what it says. I think if we absolutely knew, if we
were totally certain about it, we wouldn't bother talking to other people
about it. We're worried; that's why we talk to other people.
No, no... there's a difference between bias and error, you know, obviously.
Right. No. But I mean, bias. I mean, we know we have blind spots. And here's a
blind spot, and we can kind of feel it, but we can't quite tell what it is. We
should stop because we're over time.
So I would reserve the word bias... I would reserve the word bias for the
place where you can predict the direction of the error, not just being aware
that there are things you don't know, but being able to say, and I'm probably
wrong in this direction.
I think that's right. So, like anything you think, that's a bias: you can
predict that you're going to try to keep going in that direction, for any of
your beliefs, pretty much. But then you can predict it more for your more
passionately held beliefs.
Well, I mean, say I say the chance of rain is 50% tomorrow, like, which way am I biased?
I mean, I think you're inclined to hold on to its being 50%. So if you have
evidence that pushes you in the direction of its being 51%, you're a little
bit sticky towards the 50%, a little bit, a tiny bit. I mean, that's not...
as I said, it's a spectrum, right?
And so, your moral and political views are going to be at one end of the
spectrum, and the views about whether your mother lies or something, right?
Those are at that end of the spectrum. And, like, your prediction about the
rain is going to be pretty close to the other end, pretty slippery. It's
going to be among your slipperiest beliefs. But, you know, I mean, who knows,
maybe it's totally slippery, right? But those aren't the ones we argue about.
We argue about the ones where we know there's some stickiness to it.
So you're right, we should end. Until we meet again.