William James vs William Clifford
Robin:
Good evening, Agnes.
Agnes:
Hi, Robin.
Robin:
You have chosen some readings for us this evening.
Agnes:
Yeah, so I chose a dispute between William James and William Clifford.
Clifford wrote this paper called The Ethics of Belief, in which he argues for
the claim that you should never ever, ever, ever, ever believe anything on
insufficient evidence. And James then responds; at one point, he calls Clifford "that delicious enfant terrible," which I loved. I was like, academics can't talk about each other like that anymore. He's clearly both thrilled and appalled by Clifford's claims.
Robin:
Yeah.
Agnes:
And James wants to defend the leap of faith, basically.
Robin:
And so I've read these two. I think James's essay really stands on its own; it doesn't really need Clifford's essay to respond to. He responds to many other people who make similar sorts of claims. And his claim is pretty understandable, I think. So, I would just set aside the debate, take James on his own, and ask: what are his good points? I think he has an excellent point in there.
And then he sort of mixes that up with some other stuff that I'm not as
thrilled with.
Agnes:
OK. Tell me what you think is the excellent point and then...
Robin:
So, the key point: he gives examples of social belief formation, or social belief influences, where if you have more confidence in your associates and in your association, other people take that confidence, become reassured, and become more socially bonded with you, and then your confidence is more justified. But probably not fully justified. That is, being overconfident in your associates plausibly produces more confidence and better social outcomes for you and them.
But even so, the method that produced that involved some degree of inaccuracy: at some level, you know you're not being entirely honest or entirely accurate, and you're allowing these other considerations to come in. And that, as far as I can see, is his trump card, his clear demonstration that you don't just want to be methodically accurate and evidence-based, et cetera. So that's, I think, his strongest point, and I think it's well worth pondering and grappling with.
But along the way, I think he goes too much with a sort of binary concept of belief or knowledge, where he's talking about whether you believe something, or don't believe it, or withhold judgment. And I don't find that sort of framing very useful; I think he often goes wrong exactly because of it. I'm much more comfortable with a degree-of-belief framing, of just talking about what your appropriate degree of belief in each claim is. But nevertheless, even with that framing, the key point I raised remains; I think it's still a difficult issue.
Agnes:
I think that's not at all his key point. I know exactly what you're talking
about. Because when I read this part of the paper, I said, "Robin is going to
like this part." Can I just read it out loud and you tell me whether I picked
out the correct part of the paper?
Robin:
OK.
Agnes:
So this is... oh, I don't know if the thing I sent you even has page numbers. So it's like page five of the PDF, maybe?
Here in this room, we all of us believe in molecules and the conservation of energy, in democracy and necessary progress, in Protestant Christianity, and the duty of fighting for 'the doctrine of the immortal Monroe,' all for no reasons worthy of the name. We see into these matters with no more inner clearness, and probably with much less, than any disbeliever in them might possess. His unconventionality would probably have some grounds to show for its conclusions; but for us, not insight, but the prestige of the opinions, is what makes the spark shoot from them and light up our sleeping magazines of faith. Our reason is quite satisfied in nine hundred and ninety-nine cases out of every thousand of us, if it can find a few arguments that will do to recite, in case our credulity is criticized by someone else. Our faith is faith in someone else's faith, and in the greatest matters this is most the case.
I take it that's sort of what you're referring to.
Robin:
Well, no, that isn't what I'm referring to.
Agnes:
OK.
Robin:
I think that's a separate argument we could engage. I have more in mind a passage later in the essay, where he says... it's near the end; the paragraph before section X, on page 13 according to this.
A social organism of any sort whatever, large or small, is what it is because each member proceeds to his own duty with a trust that the other members will simultaneously do theirs. Wherever a desired result is achieved by the cooperation of many independent persons, its existence as a fact is a pure consequence of the precursive faith in one another of those immediately concerned.
And he gives the example: a few highwaymen, simply because they can trust each other, can outdo all the passengers on the train, who are not coordinating in this way. And he gives another example, of how many women's hearts are vanquished by the mere insistence of some man that they must love him. So...
Agnes:
OK, OK. OK.
Robin:
The choice to believe without strong evidence is causing good outcomes.
Agnes:
Right. OK. So here is a line of his that I think is crucial for that: "There are cases when a fact cannot come at all unless a preliminary faith exists in its coming." That's the crucial statement of it, right? Like, unless you believe your fellow passengers on the train are going to stand up with you, you won't stand up to resist; unless you believe you love this man, the love won't actually come into existence.
Robin:
Right. So, I think this is a key thing worth engaging. But if you think he has
other points that you'd like to discuss first, then I'm happy to do that.
Agnes:
No, I agree with you about this one. I thought you were going to tackle the other one. But let's maybe actually... I agree with you: this is sort of his most important point, that belief has practical powers and a practical force. He's a pragmatist, right? And he thinks, look, if belief is going to get you to the good outcome, and in fact, if it's necessary for the good outcome, even necessary, in some sense, for the good outcome to come true, right? Then how can we criticize that belief?
But let's start with your worry about degrees of belief, because I think his framing is pretty important. That is, the fact that he's not framing it in terms of degrees of belief is pretty important. So maybe let's start with this. I've noticed that sometimes in podcasts, people ask you a question. Maybe they ask you to predict something, or they ask you what you think about something, and sometimes you say you don't know, right?
Robin:
Probably, yeah. I don't... right.
Agnes:
Right. OK. And sometimes a person will even push you and be like, "Come on," you know, and you'll just be like, "Look, I just don't know." Right? And the way I interpret that situation, which has happened to me too, is, you know, they might be like, "Well, just make a guess, or just tell me where you think it is." And you or I might be like, "Look, I think other people, like experts, should speak to that question. I'm not an expert in that; I haven't thought enough about it to hazard a guess." But when I say those things, I'm not just holding out on the person, at least not always.
I mean, when I say I don't know, that's also the state in my head. I also, in my head, think I don't know. I'm not just unwilling to say; I actually don't know. And what's going on there, when someone asks me about something and I say I don't know, what I've noticed is that I've suspended judgment on that question. You know, I'm not going to hazard a determination one way or the other. That seems to me a thing you can do. Do you disagree?
Robin:
I think the existence of suspension of judgment is an interesting question. But I don't think it's that relevant to this essay, in the sense that he sets up his opponent as someone like the quote on this page, this is page 13 again, at the top, starting on the previous page:
"But if I stand aloof, and refuse to budge an inch until I have objective evidence."
Then, you know, the good consequences of belief don't happen. So, on the religion issue in particular, which is his focus in the essay, he's trying to contrast this sort of involved person, who has a stance, and whose stance is emotional and in some sense influenced by what they want to believe, with this idealized scientist who refuses to have a judgment.
And I think he gets too much mileage out of that comparison, because the opposing scientist, or the person who disagrees about religion, doesn't have to be withholding judgment; James is disapproving of this withholding-judgment stance relative to his own stance. That's the sense in which I think he's getting too much mileage out of it: the other person he's criticizing could well have a judgment, a degree of belief. And then his argument would need to be contrary to that, not to the withholding of judgment. So again, I'm willing for a moment to say...
Agnes:
I think you're right about that. I think it's a problem in the paper. And in a way, this is two papers. One of these two papers is about faith, and the power of faith to bring about the good, and whether that's OK, whether it's sort of epistemically OK to have faith. And the other part of the paper is a discussion between James and Clifford about whether the scientist should suspend judgment until they've reached certainty or something like that. And Clifford is very worried about that; he thinks you should suspend judgment kind of at the drop of a hat and be like, "We still don't know, let's keep inquiring." And James wants to say, "Look, sometimes you've just got to go for some belief, and maybe you're wrong, but you still have the belief."
And I think you're right that they don't line up that well. Because the atheist or something, or... in that passage you just read, he's imagining two people starting to be friends, right? Where the one guy is not willing to have faith that the other person likes him, right? And he's like, "Well, look, this is what my evidence shows. You may like me or not." And because he doesn't have that faith (though he could have some belief about whether the other person likes him), he's not going to show any more than what he has. And then it will never grow into anything, right? That's the second point. So I guess we can just take them separately. And you think... yeah.
Robin:
So, I've only read Clifford once before our discussion, and I didn't actually see him as arguing for withholding of judgment. So I'm not sure Clifford would agree with that characterization of his essay. He is saying, you know, people are believing too strongly when they don't have sufficient evidence. But my reading of him was that I could account for him in a degree-of-belief framing, as opposed to a withholding-judgment framing. But again, I only read it once. And that's not terribly important to the central issue in James, I think. But, you know, if James is mainly arguing against Clifford on withholding judgment, then I'm not sure he's even right that Clifford is making that claim.
Agnes:
I mean, so the phrasing that Clifford uses is that we're supposed to question
all that we believe. We're supposed to not let ourselves believe anything on
insufficient evidence. We should guard the purity of our belief with fanatical
care, right? And the examples that he gives...
Robin:
They aren't examples of withholding judgment. His first example is of a shipowner who lets his ship go out and sink at sea, when he was willing to believe it was probably safe, when he should have questioned more whether it was safe. That wasn't withholding judgment; that person had a belief. And the second case is about people accusing someone of a crime, at least in that society, without having looked more carefully at whether there was evidence to support the accusation. And again, that wasn't withholding judgment; that was having a mistaken, unjustified belief.
Agnes:
Right. But what Clifford thinks is that it was a mistake for him to have the belief, and that he didn't hold it honestly, right? He had no right to believe on the evidence that he had. And so he shouldn't have had the belief; he had it only because he stifled doubt. And had he not stifled the doubts, he would have been in a state of suspended judgment.
Robin:
So, I agree, you could even try to project that onto him, but he doesn't actually say that. And I could see him being read as saying: you should have had a higher degree of uncertainty about your boat. You should not have been so confident it would be safe; you should have estimated a higher probability of the ship going down at sea.
Agnes:
So here's an example which I think supports James's interpretation of
Clifford. This is on the top of page three.
No man holding a strong belief on one side of a question, or even wishing to hold a belief on one side, can investigate it with such fairness and completeness as if he were really in doubt and unbiased; so that the existence of a belief not founded on fair inquiry unfits a man for the performance of this necessary duty.
Robin:
Are we reading James here or Clifford? Sorry.
Agnes:
Sorry, that's Clifford.
Robin:
OK.
Agnes:
That's Clifford. So that's Clifford saying, if you have a strong belief on one
side of a question, you can't investigate it fairly. That suggests that in
order to fairly inquire, you have to do this thing where you suspend judgment.
Because it's like, if you had 80%...
Robin:
I mean, again, you could talk about an intermediate state, really, which isn't suspending judgment but which is a degree of uncertainty. So we do want to distinguish uncertainty from suspension of judgment here, right?
Agnes:
Right. But the point is, take the shipowner, right? So suppose that he had, you know... is your thought that we could translate Clifford's objection as: he had too high a credence in the safety of the ship?
Robin:
Yes.
Agnes:
If he had had a lower credence, he would have been able to inquire into it.
Robin:
Or, in more precise terminology, a wider distribution over the quality of the ship. If his belief about the underlying quality of the ship had been more uncertain, had a wider variance, a wider range, then he would have worried about the ship going down and investigated further. But that's not a suspension of judgment. It's uncertainty.
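A minimal sketch of this degree-of-belief framing, with entirely invented numbers: represent the shipowner's credence as a distribution over the ship's underlying seaworthiness, so that Clifford's complaint becomes a claim about the spread (and center) the evidence warranted, rather than about a separate act of suspending judgment.

```python
# Toy illustration (hypothetical numbers): beliefs as Beta distributions over
# q = P(ship survives a voyage), rather than a believe/suspend binary.

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# The owner's sincere but unearned confidence: sharply peaked near "safe".
m1, v1 = beta_mean_var(50, 1)   # mean ~0.98, variance ~0.0004

# What the boat's age and repair history arguably warranted: lower mean and
# a much wider spread, i.e. genuine uncertainty without "no opinion".
m2, v2 = beta_mean_var(4, 2)    # mean ~0.67, variance ~0.03

print(f"overconfident: mean={m1:.2f}, var={v1:.4f}")
print(f"warranted:     mean={m2:.2f}, var={v2:.4f}")
```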
Agnes:
OK. And why do you think... I mean, suppose there's a state of maximum uncertainty about something, right? There must be such a state, right?
Robin:
Well, that's disputed, actually. But there could be...
Agnes:
Oh, OK. Let's say a lot of uncertainty
Robin:
Right. There could be a state of minimal knowledge, say, where you know the least possible about it. And in that case, that would usually be a high level of uncertainty, but not maximal.
Agnes:
I mean, I guess one question here is: suppose that the shipowner had insufficient uncertainty about the state of the ship. On James's interpretation of Clifford, we can blame him for not performing an act, namely, the act of suspending judgment, right? We can say he should have suspended judgment, and then, from that state of suspended judgment, he could have inquired. Now, on your view, I don't think there's anything like that. It's like: he had the wrong credence, and, you know, he was stuck at that point, right?
Robin:
Well, the claim is that the evidence he had was appropriately matched to a high degree of uncertainty. That is, he should have known, given the history of the boat and his experience with it, that he should be highly uncertain about its quality and reliability. And...
Agnes:
What should he have done?
Robin:
Well, I think he should have inquired further, he should have...
Agnes:
You mean, after he already had the wrong...
Robin:
So, I mean, Clifford talks about how he should have paid some money; he talks about how expensive it would be to go look into the ship and fix it up, basically. So, if he had fixed it, the uncertainty would have been reduced. So that's...
Agnes:
No, but that's not what I mean. What I mean is: he had evidence, right? And the evidence supported more uncertainty than he felt. But he felt the wrong amount of uncertainty. Now, given the amount of uncertainty he felt, which was very little, it seems like you have to say he did the rational thing, namely, letting the ship go.
Robin:
And Clifford does say that, you know, he sincerely thought the ship was safe. And given his sincerity, that would be a fine conclusion. He just thinks he didn't earn the confidence honestly, right?
Agnes:
Right. But so...
Robin:
He had sincerity, but it wasn't an honest sincerity.
Agnes:
Right. So it's really important to Clifford's view that he can moralize about our cognitive states. He wants to be able to blame people for the way that they cognitively conduct themselves. What I want to know is whether your framework lets you do that.
The way Clifford's framework lets him do that is he says there's a mental, cognitive act that you can perform of stifling doubts, right? And there's a corresponding mental act that you can perform, this is now on James's reading of Clifford, of suspending judgment and actually looking into the question. And the guy could have performed those acts given the state of his mind. And he didn't, and so we should blame him.
Now, on your view, I don't think there's room for any of that. Or is there?
Robin:
I certainly think so. An analogy in our world, say: when the Catholic Church heard reports of malfeasance by priests, they should have looked into it more.
Agnes:
Right.
Robin:
Rather than giving their priests the benefit of the doubt. So, you could ask, did the Catholic Church suspend judgment? Well, no, they maintained a substantial confidence in their priests. And that was unjustified given the reports they had heard, which should have been concerning and should then have been investigated. And I would see Clifford as saying exactly the same thing. Look, he's got reasons to suspect this boat is problematic, and he's not looking into them. He is choosing a favorable belief that allows him to go on without spending extra money and without worrying. And he gets paid insurance under the scenario, so he doesn't personally suffer.
Agnes:
Wait, wait. What I'm asking is whether we can stick strictly to your framework, which doesn't make use of suspension of judgment, and which, as far as I can tell, doesn't have room for cognitive acts. I don't know that it does; maybe it does, and I wanted you to give me the cognitive acts. But you just used Clifford's framework again instead of yours. So, if the idea is that he has the wrong distribution of credences or something, but he genuinely has that wrong distribution of credences, then that's how the world looks to him. And so whatever he does should be rational from the point of view of having those credences. Can we blame him for doing something that's consistent with his credences?
Robin:
The key point is that there's a progression of credences across time, and he is mistakenly not following the process and the path that he should in adjusting his credences.
Agnes:
OK. But suppose he thinks he is, right? He's being sincere, right? So, to him,
it looks like he is following the right path.
Robin:
Yeah, at two different times. That is, there's the moment when he kind of knows what he should do, and then pushes it out of his mind. That's the moment of fault, when he should have known better. But once he has pushed it out of his mind, it's no longer in his mind, and he's perhaps no longer aware that he did the pushing. And at that point, you might say, "Well, what is he to do at that point?" Unless he could somehow remember, or see and guess that he might be biased here, that he might not be looking carefully enough. But...
Agnes:
So, I mean, if we take the moment of pushing it out of his mind, right? That act. Can you explain to me how, without bringing in anything like suspension of judgment, a person, a Bayesian, is supposed to correct for that sort of thing?
Robin:
So, an issue that James raises early in his essay is whether it's even
possible to choose your beliefs.
Agnes:
Right.
Robin:
OK. So, under a typical rationality framework for beliefs, one is presuming that one can choose beliefs; otherwise, it makes no sense to recommend ways to form or update or choose beliefs. So I would read Clifford as implicitly assuming that it is possible to choose your different processes of reflection and belief formation. And one of those choices is which things you will bring to mind, and which things you will pursue as thoughts, as opposed to repressing those thoughts. And he's recommending a procedure which says: if there are doubts, then you should pursue them, you should think about them, and see where they lead, and integrate those into your beliefs. And if instead you avoid that, in a systematic way that makes unpleasant things go away, you're at fault.
Agnes:
So there's the question of what that means. There's choosing beliefs, and then you sort of gloss that as choosing reflection, choosing which things you will bring to mind. And, you know, the question is how we flesh that out into something that actually makes sense. On the one hand, you can't just choose to believe something straight out. Like, I can't just choose to believe that this is a pink pen, right? Even if you paid me a lot of money, I couldn't believe that.
Robin:
OK.
Agnes:
So, now, to choose reflection, to choose the process of reflection, right? That I do think is closer to what they're talking about. And you said, "Well, you can choose what you bring to mind." But that seems a bit paradoxical, right? Because if I'm like, "OK, I'm going to choose to bring this thing to mind, and I'm not going to bring that thing to mind," I've just brought that thing to mind in choosing not to bring it to mind, right? It's not as if you can think about it beforehand in some special place; it's all in the same head. And so, for James and Clifford, the fundamental act here is the suspension of judgment, right? That's what they mean by a reflective activity that is subject to your will: you say, I'm not going to have a belief one way or the other.
Robin:
Well, I mean, I can make the same argument you just made. I could say, well, if you have a judgment, how could you suspend it? There it is; you can't make it go away. You could make that same, again, tautological claim. Or look, I could say to you: you're clearly capable of writing a book, right? So write me a book right now. No, you didn't do it; I guess you can't write a book. Well, you might say, book writing is a process that takes place over a long time. You need some input and some resources, and then, given a lot of structure and other supporting resources and context, you can write a book. But you can't just write a book without any of the rest of that.
So similarly, you know, we each are a big, complicated mind, with big, complicated processes that our minds are going through. These processes involve what thoughts come to mind, what beliefs we have, and what actions we take. And we can influence these things, but not just in the arbitrary way of picking one thing and changing it without changing anything else. We can change the larger context in which these things happen, which then indirectly changes the results that come out of it.
So you could say, indirectly, you can choose to believe by choosing the larger context, which will eventually produce the beliefs you have. But you can't just pick a particular belief and change it.
Agnes:
I think that... I mean, I think I can't choose to believe this is pink even if you give me a really long time. Though I could do some stuff, right, that would bring it about that, at the end of that process, I believe this is pink. Like, suppose there's some special brainwashing thing that I know I could do, or something that gives me some kind of colorblindness, right?
Robin:
Right.
Agnes:
But that's not what we generally mean by choosing to believe, namely, injuring yourself in such a way that you eventually come to believe.
Robin:
Right. But the point is...
Agnes:
So I don't think the question is time. That is, I could write a book if you gave me time, but I can't choose to believe this is pink even if you give me a lot of time.
Robin:
Right, right. So I could say, "Can you go to New York City?" And you might say, "Yes, I can." But does that mean you can go anywhere, anytime? Can you go to the moon? Can you go to the sun right now? Well, you can't go just anywhere, even though you can go some places. So to say you have some control over your beliefs doesn't mean you could set every possible belief to any value in any context. Again, there's this big, complicated machine with all this stuff going on, but you can have some influence over it. And that's what we're talking about: the influence you do have, and how you use it.
Agnes:
As for the influence that I have: I don't think I can choose to believe anything. That is, there's nothing that I don't believe now such that I can choose to believe it, no matter how much time you give me. But I think that I could choose to inquire into something, to inquire into some question, right? And I could even choose to inquire into some question where I already have a belief.
But the way that I do that is by suspending judgment, in both cases. When I inquire into a question, I suspend judgment. When I have a belief but I want to know whether it's really true, I suspend judgment. And so that's why I'm saying: if you believe in this cognitive-activity thing, this reflection, this choice, the place where that seems to me to be situated is the suspension of judgment. But you don't believe in suspension of judgment. So I don't know what you mean by cognitive activity.
Robin:
I mean, we have a rich literature in cognitive psychology, and then a much larger area of human sciences, that describes humans as complicated creatures with lots of complicated processes by which they choose actions, form beliefs, discuss, inquire, et cetera. And the idea of suspension of belief is just one tiny drop in an ocean of many different things that can be going on in the mind. I don't see why you would focus on that as the one way you could possibly influence this complicated process. There are vast numbers of levers and channels of influence.
Agnes:
Well, I'm eager for you to tell me some of them. But first, just remember that whatever this is, it's going to have to ground, for Clifford, a moral claim of blame, right?
Robin:
Sure.
Agnes:
So it can't, for instance, be a lever that you're unaware you have, or that you can't push well, or whatever, because then we're not going to be able to make that moral claim. And then secondly, you made an objection, and I thought it was a good objection: you said, well, can you choose whether or not you suspend judgment, if you can't choose what you believe? And I think that's a fair question. And I actually think people are much too ready to think they can simply suspend judgment. But here's an illustration.
You know, I think in a conversation, I could say: suppose we didn't know what year it was. Say we do know what year it is, right? But suppose we didn't. And then I put something before you, right? And I want you to consider how we would think about our situation if we didn't know that. And I think you're able to do it, right? You're not like, "No, but I know what year it is," and when I say, "Yeah, but just suppose you didn't, right?" you're not like, "But I can't, I can't stop." I think you're actually able to do it.
And so, I think it's not the case that all people in all situations are able to suspend judgment. And I think we do it much better in social contexts than by ourselves. But it's a real thing we can do; you can see people doing it in conversation, and you can see them failing at it in conversations, right?
Oh, a good example is philosophical thought experiments, right? Like trolley problems. They require you to suspend judgment on so many questions. It's like, "But what if the trolley could be made to break down or something?" And it's like, no, no, no, don't think about that, right? Suspend judgment on all these questions that you would be asking if you were in the actual trolley situation. Only ask yourself, "Should I push the one guy over the bridge to save the other five guys?" That's the only question I want you to ask yourself. I want you to suspend judgment on all other questions.
OK. When you do this, some people fail at the suspension of judgment, right? But some people succeed. So it seems like you can do it; it's a real thing you can do. So there's my defense of suspension of judgment.
Robin:
So again, I'm not claiming suspension of judgment doesn't exist.
Agnes:
Well, you just made an objection. You said: if you can't choose your beliefs, how can you suspend judgment?
Robin:
You're right. But I was trying to respond to your claim. The form of your argument was: I can't change my beliefs, because there it is, my belief; how can I change it? Right? And I was saying, well, there is your judgment; how could you change it? I was just trying to map the two arguments, to show the structural similarity.
But in terms of my claim about the situation, I would just say, people clearly can make some choices about themselves. I mean, we can choose some things about ourselves, and we can choose some things about how we think. We can choose whom to talk to, what we read, which issues we engage. And those things causally influence what we believe. So we can indirectly influence what we believe by influencing the various things that causally influence what we believe. So that's the QED of: of course we can influence what we believe. Then the issue is to get into the details of which particular levers have which influences over which beliefs.
Agnes:
I agree we can influence what we believe. It's just that we can't choose to believe some particular proposition, but we can choose to suspend judgment on some particular proposition. And I think the asymmetry there comes from the fact that suspending judgment is a form of attention, right? To say "suspend judgment about P" is to say: don't attend to P; take your attention somewhere else. And I think attention is under the direct control of your will, at least to some extent. But I think that...
Robin:
There are other ways we control attention that aren't about suspending judgment. Again, I agree that suspension of judgment is a thing that can exist, but there are lots and lots of other things that exist. And Clifford, as far as I can tell, isn't really talking about this suspension of judgment that you're talking about. He's talking about many other things.
Agnes:
I mean, the word he uses is doubting; what he thinks we need to be able to do is doubt. And I'm fine with that. I'm fine with using the word doubting instead, right? That's just what I think is captured, for James, by suspension of judgment: doubting, or being skeptical, or not accepting, refusing to accept a proposition. That's what he's talking about. He's worried we'll let in a belief that's bad, and it will stink up the whole operation, right? That's the language he uses.
Robin:
I mean, it seems like, if we can get past this terminological issue, we agree that people do have a lot of influence over their cognitive processes, and therefore over where those processes lead. And I think we can agree that people sometimes choose their influences in a way that makes unpleasant beliefs not appear as likely; people look away from unpleasant things in some ways. And that's in a sense what Clifford is criticizing here: a choice that would make you look away from something that was unpleasant, when sometimes it's your responsibility to look. And so, if we can say you should have looked into that, and you chose not to, then we hold you responsible for that. And Clifford is correctly pointing that out as a common structure.
But again, the interesting thing is that overall, Clifford would seem to
presume that you should just always choose the context of your beliefs so as
to make them as accurate as possible. And James then has this other
interesting argument about why that's not always true. And then we have to
wonder, what is the scope of that counter argument? How widely does that
infect our overall rationality project?
Agnes:
Right. So I'm fine with the idea that... it seems important to the debate between James and Clifford that there is something like a level of accuracy of your beliefs that you can almost choose. Like, you can say: I want them to be as accurate as possible, or only middling accurate, right? And James thinks Clifford wants to set it at 100%, and he thinks that's absurd. We don't always need that much accuracy; we can have lower amounts of accuracy. That is to say, it's OK to just risk being wrong sometimes. On your view, that would be like having too high a credence in something when you should have a lower credence, or whatever, right? But the way James puts it is: just accept a false belief.
Robin:
And that's where the binary thing, you know, gets in the way. He is talking about it like you have to pick A or not A, and then, either way, you might be wrong. But you might say, well, what you want is a probability of A. So you pick an intermediate thing, and you don't have to choose between A and not A; you can decide what your intermediate credence is. But you could still say: you could put more work in and get a better estimate of the probability of A. Is that worth the bother?
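As an aside, here is a toy version of that "worth the bother" question, with entirely invented numbers: compare the expected payoff of acting on your current credence in A against the expected payoff of paying for further (here idealized, perfectly revealing) inquiry first.

```python
# Toy value-of-inquiry calculation (all numbers invented).

p_A = 0.6             # current credence that A is true
gain_if_right = 100   # payoff from acting on A when A is true
loss_if_wrong = -80   # payoff from acting on A when A is false
cost_of_inquiry = 10  # cost of investigating until A's truth is known

# Act now on the current credence:
ev_act_now = p_A * gain_if_right + (1 - p_A) * loss_if_wrong        # 28.0

# Inquire first, then act only if A turns out true:
ev_inquire = p_A * gain_if_right + (1 - p_A) * 0 - cost_of_inquiry  # 50.0

print(ev_act_now, ev_inquire)  # here the extra work is worth the bother
```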
So, I mean, James certainly points out that we have a lot of influences on our beliefs that we wouldn't fully endorse. He goes on, over lots of sentences of flowery language, about how people have passions and biases and other inclinations. He even makes some extreme claims, like that biology would never predict any sort of accuracy in beliefs, which seems to go too far. But he does basically argue that there are all these other influences on our beliefs, and we're well aware of that. And he might say, "Well, don't fight it too hard; it's not worth the bother." But he goes beyond that in his discussion of religion, and in his examples of social belief, of confidence producing the thing you have confidence in. And that, to me, is, again, the most problematic example.
Agnes:
So, I mean, I think that, in terms of choosing your beliefs, what both Clifford and James are imagining is a kind of cognitive labor that you can engage in, sort of by yourself. Where it's like: I have a given amount of information and evidence, there's a certain state of evidence, right? And I can do a slightly sloppy job, or I can do a more Cliffordian, very thorough job, where the result of the Cliffordian job is going to be a more accurate estimate, and probably a lower confidence, I think.
And so, as to the question of controlling your beliefs: of course, there are all sorts of ways of controlling your beliefs by interacting with the world, right? And of course, if you do this Cliffordian thing, then you'll be motivated to do those other things. But the question is just, in foro interno, right, inside your head: is there something you could be doing to have a more accurate estimate? Right?
And they think there is. They think there's a cognitive act you could perform, of doubting, right? Given the same evidence that you have, nothing changes, you don't interact with anyone else, but there's something you do in your head: you doubt. It's what Descartes does in the Meditations, right? He exercises doubt: let me pretend that all my beliefs are false; now let me try to reconstruct from the beginning. And so do you have room for that idea of just the cognitive activity of doubting the things that you already think?
Robin:
I don't think either of these authors is committed to a sort of a-social process for these things. I think, when Clifford talks about the example of the person with the ship, he would have collaborated with other people to investigate the ship; he wouldn't have studied the whole thing himself.
Agnes:
If he hadn't stifled his doubts. So there's two stages, right?
Robin:
Sure.
Agnes:
There's the doubt stifling, and then there's the collaborating.
Robin:
Right. But...
Agnes:
And I'm just focusing on that first part.
Robin:
But no doubt, you know, groups of people can't do things together unless each one of them does something in the process. So, of course, there will have to be individual things that are done in the process of social things being done together. And certainly, I don't know if you want to describe it purely as this one binary act of doubting; I think it's more complicated than that. But it's fine to have that as an example.
Agnes:
So among other mental acts, you do think there is this thing doubting. And you
just don't think it involves a suspension of judgment.
Robin:
So, when you talk about suspension of judgment, some of the examples you give seem to be more like considering hypotheticals. I think that's a little different. On a particular claim, saying "I don't have an opinion" is a somewhat different act from saying "OK, let's assume A, or let's assume not A, and see where that goes." That's conditional reasoning, hypothetical reasoning. And I think we are capable of that too, as well as of saying, "Let me not have an opinion on this." But one isn't required for the other; you can do them separately.
Agnes:
I agree. But I think that conditional reasoning, and straight, direct inquiry into the truth of something when you already have a belief about it, both require suspension of judgment. So, if I'm...
Robin:
I disagree there.
Agnes:
You don't think that using a hypothetical requires suspension of judgment on
the conditional?
Robin:
Correct. I don't. So say we're talking about the chance of rain tomorrow, and I think there's a 10% chance of rain. You could still say, "Assume there's rain tomorrow; let's walk through that." And I'm happy to walk through that scenario, even though I still believe there's a 10% chance of rain tomorrow. It's quite possible to elaborate various scenarios, even though you have beliefs about which scenarios are more likely.
Agnes:
But if you're assuming, suppose you're in the act of assuming that it's going
to rain, right?
Robin:
Right.
Agnes:
You're imagining you're in the world where it will rain, right? And in that world, suppose it is, in fact, raining.
Robin:
Yeah.
Agnes:
Like 100% chance now, right?
Robin:
Yeah.
Agnes:
In that world, it's going to rain, because it is raining. So it's not 10%, it's 100%.
Robin:
Again, you can reason about a possibility without necessarily doing this mental act of putting yourself in the world where it's true. There are lots of ways to reason about conditionals and hypotheticals that don't involve a full mental immersion. But again, I think, you know, the most interesting issue is one that I'd like to return to...
Agnes:
OK, OK. Yeah, yeah, let's go back to the interesting issue. I think this one is interesting too, so do think about it: it seems to me that suspension of judgment is required for reasoning about hypotheticals, and, when you're simulating another possibility, for detaching from the way you see the world. But...
Robin:
If you have a broad enough definition of the term, then that could easily be true, but under a narrow definition, I disagree. And I just don't see that much at stake there. The more fundamental thing here, I think, is that we have this big, complicated machine in our head, and it becomes a more complicated machine when we share it with others and we do things together, think together. But with this big machine, we are aware of lots of influences over it, and lots of ways it can go wrong. And we have some norms about what good reasoning consists of, or at least what bad reasoning consists of.
And we have some norms about how you're supposed to avoid certain kinds of bad reasoning. These norms are usually justified in terms of how the bad reasoning will produce less accuracy. And academic norms are rich with these sorts of examples: concepts of rationality, concepts of statistics, concepts of logic, wherein we find a certain pattern and say, that would be bad reasoning; you shouldn't draw this conclusion when you hold these other beliefs together with it. And if you do, you need to watch out for that and avoid it.
And Clifford, I think, is appealing to some of those sorts of examples in our minds of when you would have bad reasoning. And there are several problems with that. One is that it's this huge, complicated machine. So even if you knew that a particular thing would be a defect in a perfect machine, that doesn't necessarily mean it's a defect in the actual complicated, messy machine you have.
But in addition, you have this even stronger example, where you can see that in certain situations having inaccurate beliefs is functional and useful, exactly when other people can see your beliefs and those beliefs influence how they react to you. This is a standard thing in game theory and signaling. And it's, you know, a big part of my book, The Elephant in the Brain: that humans evolved to have inaccurate beliefs in many ways because of this fact that other people could see them. They evolved to have inaccurate beliefs in order to influence how other people treated them. And that's a big source of the blind spots and inaccurate beliefs in our heads.
And then we have to face the question: what should we do in the face of that? Because we have all these norms that are designed around making beliefs more accurate, in the context of a machine that was roughly right but sometimes made mistakes. And here we are, a machine that makes a lot more mistakes than we realized, and sometimes for good reasons. How can we adapt our intellectual norms to accommodate those sorts of cases? Do we? Should we?
Agnes:
So, I think one thing that's interesting here is whether James's point, his argument that sometimes a fact cannot come at all unless a preliminary faith exists in its coming, is essentially social. So he gives... I think his most powerful example is the train, right? Which is social, right? A whole train of passengers can be looted by just a few people, because the few people can count on each other, whereas the passengers can't count on each other; they can't believe that other people will back them up.
And so that's a social case; that's a case where it would be socially useful to everyone for each of us to have inaccurate beliefs. Although I'm not even sure it's right to call the belief inaccurate. I mean, I suppose it's inaccurate from the point of view of your evidence, right?
Robin:
Right.
Agnes:
But it turns out to be accurate, right? In that it ends up predicting what actually happens.
Robin:
So, these examples are actually part of a larger literature that I'm already aware of, more recent work, you know, since James. And so I would say, you could save that example through that argument, but there are other examples you couldn't save, ones that James doesn't bring up. So I'm conceding a larger point that James isn't even making: that in many cases, you have to be somewhat irrational.
So a closer example would be the man persuading a woman to love him by not accepting anything else, right? That actually doesn't usually work. So the scenario we're imagining is: this man has 80% confidence that she will love him, in a case where it only works one in ten times. But his 80% confidence at least convinces her to love him one in ten times, and he has to be inaccurate there to make that work. If he had the accurate 10% credence that she will love him, the story goes, that wouldn't work on her. It would only work if he has the 80% confidence that she will love him. So that's more of a case where the belief does need to be inaccurate to work.
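A minimal sketch of that structure, with made-up numbers: when the belief itself is visible and causally feeds into the outcome, the payoff-relevant credence can come apart from the evidence-matched one.

```python
# Toy model (invented numbers) of the suitor case: the visibly held
# confidence itself changes the chance of the outcome.

def p_love(held_confidence):
    """Hypothetical response curve: only displayed high confidence
    'works' on her, and even then only one time in ten."""
    return 0.10 if held_confidence >= 0.80 else 0.0

for c in (0.10, 0.80):
    print(f"held confidence {c:.0%} -> chance of love {p_love(c):.0%}")

# held confidence 10% -> chance of love 0%
# held confidence 80% -> chance of love 10%
# The 80% credence is inaccurate (the true chance is at most 10%), yet
# holding it is what makes the 10% chance available at all.
```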
Agnes:
Right.
Robin:
And the examples we all know of are mostly social. I can't really think of a non-social example where you get this benefit from inaccurate beliefs, unless you want to think about it internally: like, you have to split off parts of yourself and motivate yourself by lying to yourself, by fooling yourself in a sense. There are examples like that.
Agnes:
So here's why I think it matters whether it's social or not: the entire point of this essay is about belief in God, right? And, I mean, of course, belief in God is social in a way, because there's God, right? But God can see through us, so a lot of the Elephant in the Brain style effects are not going to be that relevant; you're not going to trick God. And so, you know, like when he talks about how the people who gain promotions and honors and stuff are the people who think they can get them, not the people who discount them, right?
Robin:
Right, and perhaps inaccurately think they can get them, overconfidently think it. So, he is actually, I think, playing both sides of the street on the religious things. He takes Pascal's wager, and he thinks that's a terrible argument, because he thinks God would see through it, right, see that you were only believing in God in order to win Pascal's wager. And he explicitly says God would see through that and should punish you, and he would be fine with that. That would be just a terrible thing.
But then his essay is basically arguing the same thing. It's saying: you need to just believe in God, because that'll work out well. He's not saying, "Here's the evidence for God." He's just saying: you should be in this mental mode where you just believe in God and have this attachment to it, and that's perfectly reasonable, and God will appreciate it. So, you know, the sort of motivated reasoning for believing in God that he's suggesting, he doesn't think God will punish in the way that God would punish the Pascal's-wager motivated reasoning. But it's not clear why his version is any better.
Agnes:
Yeah, I mean, I guess I didn't think that his argument against Pascal was that God would punish it. I thought his argument against Pascal was that, you know, you're pretty far away from religious faith if this is even occurring to you. Which is to say, James thinks you can only will to believe something when the thing is a live hypothesis for you, when in some sense you're really compelled by the thought that maybe God exists. And he thinks someone who is using Pascal's wager...
Robin:
But it's not clear why.
Agnes:
He says it's a last desperate snatch at a weapon against the hardness of the unbelieving heart, right? He's like: come on, you're too far gone if this argument would appeal to you.
Robin:
I don't think that's true. I mean, I think you could be inclined to believe in God, and Pascal's wager could push you over the edge. I don't think you have to be the extreme disbeliever in order to consider it. And that's what his thing is doing: he is actually trying to push you over the edge too. He's saying, "If you are inclined to believe in God, well, come with me, and I'll give you a reason why you should go with that inclination. And the reason is that it might cause God to help you and be nice to you, if he really exists." And that is Pascal's wager. He is actually giving the Pascal's wager argument even though he makes fun of it.
Agnes:
I think what he's saying is closer to that famous statement, "I believe in order that I may understand," credo ut intelligam: if you believe in God, then the proof will come to you, and you'll understand it, you'll have a clearer vision. But the clear vision, the fact, is the product of the belief, right? The Pascalian picture is one where you take the masses and the holy water, and eventually you just get into this mindless habit of doing all the religious stuff, and in the end you just forget that you didn't use to be religious, right? That's how James is interpreting Pascal, I think pretty reasonably.
And James is like, "No, no, we want the actual facts to come. We want you to
have this religious revelation. But you're not going to have the religious
revelation, if you don't start with the belief." Just like the people aren't
going to stand up on the train, unless they believe in advance that it's going
to work. So the fact has to be has to be ushered in by the belief.
Robin:
So, I mean, he doesn't really lay out the religious version of the argument in as much detail as he does the social version, right? He gives more concrete examples of the social one. But for the religious one: somehow, if you aren't already kind of believing in God, you can't see the evidence that God exists, and he doesn't really explain why that would be true. He just posits it; he suggests it as a possibility that you should consider.
Agnes:
Right.
Robin:
He doesn't give a particular scenario in which that would be true. Is it God who chooses not to show the evidence unless you believe in him? Because, I mean, that's a complaint people have made about God, like, why does God punish every...
Agnes:
I agree. It's the flaw of his paper. I totally agree. But let's imagine ten different Robinson Crusoes, OK? They get stranded on ten different islands, right? And they have different degrees of belief as to how successful they're going to be, right? As to whether they're going to be able to survive. And it's at least plausible that their degree of actual success in surviving is going to track that. Suppose we found that the ones that were slightly overconfident were more likely to be successful, the optimistic ones, right?
And the pessimistic ones were less likely; they were likely to die earlier. And this is not social, right? They're just by themselves on their islands. I think that's what James is getting at with the example of the people who believe they're going to get honors getting the honors. That one is social. But I think it's not essentially social.
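A minimal simulation of that thought experiment, with all dynamics and numbers invented: if confidence feeds into effort, mildly overconfident castaways can survive more often even with no one else around to signal to.

```python
# Toy non-social overconfidence model (all parameters invented).
import random

random.seed(0)

def survival_rate(confidence, skill=0.4, trials=10_000):
    """Fraction of castaways who survive: effort scales with confidence,
    and survival requires effort * skill to beat a random hazard."""
    survived = 0
    for _ in range(trials):
        effort = confidence               # more hopeful -> tries harder
        hazard = random.random() * 0.4    # island difficulty, made up
        if effort * skill > hazard:
            survived += 1
    return survived / trials

print("calibrated (40% confident):   ", survival_rate(0.4))  # ~0.40
print("overconfident (60% confident):", survival_rate(0.6))  # ~0.60
```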
Robin:
So in economics, we have this issue we call the theory of the second best. When we can figure out the first-best solution, then it's just the right answer. But when we figure out a second-best solution, it's much harder to justify that choice, because it depends on so many other second-best issues. So, if our minds were almost completely rational, and we asked whether one little part should be broken and made irrational, then we would say, "No, don't break it; keep the whole machine going."
But if you've got a machine that's irrational in lots of different parts, it may well be that making one part more rational makes the overall system worse. Different irrationalities may compensate for each other. It can just be a big, complicated, messy system. So it's just not true that a big, messy, broken system is made better by fixing any one thing that looks broken. It's just true of broken systems that they often happen to be broken in ways where two different broken parts sort of support each other. And you would want to be careful.
So that's just a basic question about the human mind. Most of the rationality analysis we have imagines somewhat of a perfect reasoner, who is going through the ideal reasoning process, and then it identifies how that could go wrong if you ignored some evidence or, you know, engaged in wishful thinking about some conclusion, et cetera. But if you say, no, I'm nothing close to a perfect reasoner, I am this big messed-up machine, then all bets are off. It's not clear, necessarily, which things are improvements and which are not.
And that can also be true for, say, a company. Say you have a company that's not doing ideally well. If you think it's an otherwise perfect company, well, then you need, say, better compensation schemes or better incentives. But in a broken company, those might make it worse. You just don't know until you look at the details. And so that's a fundamental problem with rationality analysis: once we grant that humans are broken in many ways, and have all these ways in which we are not ideal, then it becomes harder to have as much confidence in any given recommendation for how to think better, how to act better, how to interact better.
Agnes:
So suppose that, you know, perfect rationality is not in the cards for us, right? And suppose that there's a change we could make. In some sense, we can still call that perfect rationality "rationality," but in a way, we've kind of given over our word rationality to a not very useful context, right?
Robin:
Right.
Agnes:
And what we originally meant by rationality was: what cognitive changes should we make, which ones do we praise, which do we not, right? And now we've allowed the word to be held hostage to, you know, these other creatures, right? So let's say we have rationality-subscript-one, OK, which we're now just going to call rationality. Which asks: OK, given that I'm broken and stuff, what's rational for me to do? So then it could well be that the answer is: believe in God, right? If the belief in God has the corresponding effect to the overconfidence of the Robinson Crusoes.
Robin:
So, here is the key meta-problem, right? When you say, let's try to reason about whether believing in God would be a good thing...
Agnes:
Yeah.
Robin:
What we're usually going to do is invoke our usual kinds of standards for arguments, the usual rational standards that we would apply to someone we thought was not broken. So, once we realize that our reasoning process is broken, it's hard to know how to monitor and manage our reasoning in order to be effective. You could make a bad argument for why we should believe in God, and I wouldn't necessarily be able to criticize it as a bad argument, because my criticisms of such arguments are based on the idea that you're close to making good arguments, and that broken parts are therefore identifiable. Once we get really far away from assuming that our arguments are anywhere near good, how do we know which arguments are good or bad? How do we criticize them? How do we improve them? We have a big problem there.
Agnes:
I mean, the way that I think about arguing and reasoning, and this is related to our last conversation, doesn't situate itself relative to these ideal reasoners. And if yours does, that seems like a giant handicap for you.
Robin:
So, let's take your favorite, you know, Socrates. Often what he does is show somebody a contradiction, right?
Agnes:
Yeah.
Robin:
But once you know you're broken, there's no particular reason you shouldn't accept contradictions. Contradictions aren't per se evidence that you should change your mind; they're only evidence for someone who is consistent. Consistent people should not have inconsistencies, because a set of consistent beliefs together can produce more consistent beliefs that are useful. But with a set of inconsistent beliefs, it's not clear why you should take away any one inconsistency.
Agnes:
I mean, this is a point of great controversy in the history of philosophy,
contradictions. But Aristotle has a view that I think is correct, which is
that you can't believe a contradiction. Like, you literally can't do it. You
can assert something and, a little while later, assert something that
contradicts it.
Robin:
OK.
Agnes:
But that's not because you believe the contradiction; it's because your
beliefs are shifting, and there isn't a fact of the matter about what you
believe. And arguably, that is what Socrates is showing people. He's showing
them: you're all messed up, there's no consistent thing that you think. And
you thought there was, right? It's a shock to people. They thought there was
something that they thought, but there isn't anything that's a thing they
think. And what he's trying to do with them is to stabilize their thoughts, so
that there comes to be something that they think.
Robin:
So I think, in fact, a great many ordinary people, when you point out an
inconsistency on some difficult topic, are actually quite capable of simply
turning away, ignoring it and forgetting it, and going on with their previous
mental state. That is what ordinary people are quite capable of doing. So I
think it's an idealized, benefit-of-the-doubt-giving claim that people would,
in fact, correct an inconsistency and make it go away once they see it. I
think that's an idealization of ourselves as something that's trying to become
more consistent.
Agnes:
I don't even think Socrates thinks that they succeed in correcting it. That
is, the inconsistency – remember, on this interpretation, there isn't some
inconsistent proposition that you believe. That's not possible, right? It's
just that there isn't anything you believe; you just spout different things at
different times. And you have an image of yourself as consistent over time.
You're like, "Yes, I'm a thinker. Here are my positions." Right? But what you
even mean by those positions changes over time.
And I think you're absolutely right, and Socrates would agree that you're
right, that people kind of prefer to be in denial about that. As long as
they're not talking to Socrates, they can just keep fluctuating, saying a
different thing every time. It's only while he's talking to them that he holds
them to the claims that they've made or endorsed.
Robin:
But they do have beliefs – I think they have inconsistent beliefs. It isn't
that they don't believe anything; it's just that these different beliefs are
incoherent.
Agnes:
Well, I don't think they have beliefs. That is, I don't think they have
beliefs that extend over time.
Robin:
So, you know, I would think, basically: if you have, say, two different areas,
and you have behavior in those two different areas, you might have some
principle you think would make those behaviors cohere in a certain way. But it
turns out they aren't coherent; you just have different behavior in the
different areas. Still, you have those behaviors. And so each belief is a
description of your behavior in that area, or it shows roughly your tendencies
in those areas.
But again, the key point is, I think we each aspire to be more consistent and
coherent. And so when we identify an inconsistency, we do want to make it go
away, if that's easy. That's a general heuristic of trying to eliminate
inconsistencies locally when you find them. And a lot of our other rationality
heuristics are of that form: they basically take a local structure, notice how
it deviates from some ideal structure, and try to push it in that direction.
So for example, probabilities. If I say the probability of A is 30% and the
probability of not-A is 80%, we go: no, no, these are supposed to add up to
one. And once I realize they're supposed to add up to one, I move them toward
each other until they're 25 and 75. That's the idea that moving local beliefs
toward the constraints an ideal consistent system would satisfy is, in fact, a
good move.
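A minimal sketch of that repair, using the numbers from Robin's example. His
"move them in between" step can be read as subtracting the excess probability
equally, which yields his 25 and 75; proportional renormalization is another
plausible rule. Neither method is specified in the conversation, so both are
illustrative assumptions:

```python
# Repairing the incoherent probabilities from Robin's example:
# P(A) = 0.30 and P(not-A) = 0.80 sum to 1.10 instead of 1.
# Neither repair rule below is specified in the conversation;
# both are illustrative assumptions.

p_a, p_not_a = 0.30, 0.80
excess = (p_a + p_not_a) - 1.0  # 0.10 of extra probability mass

# Rule 1: subtract the excess equally from each belief,
# which matches the "25 and 75" Robin mentions.
equal_split = (p_a - excess / 2, p_not_a - excess / 2)  # (0.25, 0.75)

# Rule 2: renormalize proportionally, preserving the beliefs' ratio.
total = p_a + p_not_a
proportional = (p_a / total, p_not_a / total)  # (~0.273, ~0.727)

print("equal split:", equal_split)
print("proportional:", proportional)
```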
Agnes:
So, if we go back to my multiple Robinson Crusoes: the idea is that it's a
sign of their brokenness that overconfidence is going to be correlated with
success. Where, I mean, I don't actually know that this is true, right? This
is an empirical example I just made up, but...
Robin:
It is possible.
Agnes:
OK. So, you might think the ideal rational agent would just have the right
amount of confidence, sort of by definition, right? With that ideal rational
agent, what you're doing, I think, is helping yourself to preferences, and to
the stability of those preferences. And some of what I think happens when you
lose faith, or you lose hope, is that you just stop caring – you stop having
preferences, right?
So some of the work that certain kinds of belief do is that they generate
preferences. They generate desires. Take the example James uses of a
friendship that comes to be because each person sort of overdoes it at the
beginning, right? It seems to me the only way you get that ideal reasoner is
by that reasoner not facing a problem that actual human beings have to face.
It's not that they're less broken than we are; it's just that they're solving
a much easier problem. They are allowed to take preferences for granted,
whereas actual agents have to maintain, manage, increase, et cetera, their
preferences.
Robin:
So, I mean, the key problem here is that, once you deviate from these sorts of
rationality models, you can make up all sorts of plausible hypotheses and find
many concrete examples that would seem to support each one. But it gets harder
to reason systematically about how often they are actually useful, or better,
because we've left the world where we could make such broad generalizations.
So, imagine we have a company, and we want the company to maximize profits.
And we find that in the shipping division, they would make more money if they
shipped everything on Friday instead of Thursday, or something. So we say,
"OK, they should now ship on Friday." And then someone objects, "Oh, you're
assuming that the rest of the company is all profit-maximizing. What if
shipping on Thursday, even though it costs more in shipping, somehow helps
marketing do something else differently?"
Agnes:
Right.
Robin:
And they've got some way in which they're messed up in marketing. But given
the way they're messed up in marketing, shipping on Thursday is a better thing
to do than shipping on Friday, even though, considering shipping alone, Friday
looks better. And that's the general point: what you mostly want is modular
analysis. You want to be able to look at some small subset of your reasons, or
your beliefs, or your methods, reason about those, and just improve those. But
the more you have to take into account lots of other contexts in order to
decide whether any one thing is an improvement, the harder it gets to figure
out what's an improvement at all.
Agnes:
So, I mean, I see that it can seem ad hoc to just pick this one thing, but I
don't think I picked it at random. That is, we've had many discussions about
how there's a kind of Achilles' heel of the economic way of thinking, which is
trying to understand motivation, right? Where does motivation come from, and
how is it sustained?
Robin:
Sure.
Agnes:
And also, how is it social? It's in a lot of ways social, right?
Robin:
I wasn't accusing you of making up a random example. But to me, the most
interesting question here is about our larger intellectual norms and how we
critique each other. So, Clifford is coming from a context wherein he wants to
be able to support certain criticisms of people. And we do similar things in
legal reasoning and in academia. We often look at some structure and say, "Ah,
that's incoherent. That's inconsistent. You did p-hacking, you did data
fishing, et cetera." And then we say, "You need to stop that. We need to fix
that." And that's usually a very local, modular analysis, based on some
relatively idealized assumptions about the rest of the system going well, and
saying: we should fix this, given that.
If we can't make that assumption, how do we criticize a false legal ruling, or
false evidence presented at a trial, or somebody lying about something that
happened in their lives? Right? So, for example, we usually punish lying,
discourage lying, and want to disapprove of it. But we do know cases where
lying is a good thing. So can we just go along with our usual habit of
criticizing lying? Or do we have a new system that tells us what all the
exceptions are? And without a system that tells us all the proper exceptions
to lying, how do we justify criticizing lies?
Of course, this is true for most moral rules and other things: murder is bad,
except it's not always bad. But we don't have an exhaustive list of exactly
when murder isn't bad. And so, for most of these rationality rules: are they
usually right, with just a small range of exceptions where they aren't, such
that we'll have to suffer the fact that sometimes we're wrong? Or are they
just wrong in general, right?
Agnes:
I mean, I think that even in putting the question this way, in your mind James
has won this battle, right? Because Clifford's thought is: no, no, no, there
aren't a bunch of different contexts; there isn't science versus real life or
whatever. There's just: here's a rule, always use it in every context. Never
believe on insufficient evidence; if anyone ever does, blame them. Blame all
of humanity, at every time, under every situation, for doing that. So you're a
Jamesian, in that you think, no, sometimes it's OK to believe on insufficient
evidence.
Robin:
So take, say, lying, right?
Agnes:
Yeah.
Robin:
Say we had a law against lying, so you could prosecute someone for lying. And
now you've proved to me there are cases where lies are a good thing.
Agnes:
Yeah.
Robin:
So now I say: my two options are either we punish lying, or we don't. I may
say, let's just punish lying, and in the cases where lies are a good idea, we
just lose – we suffer there. But unless we can find an intermediate process
where we only approve of the good lies and not the bad ones, maybe we have to
accept that. You know, so I might go along with Clifford. I'd say, "Fine, if
you've got the ship owner, and he didn't look carefully enough at the ship,
let's criticize him." And yes, it's quite possible that, given the
complexities of the world, that was actually a good thing for him to do, but I
don't know how to figure that out. And so we're just going to go with the
simple rule.
Agnes:
Well, here are two things. First of all, I don't think Clifford thinks that.
That is, Clifford doesn't think: yeah, most of the cases are going to be ones
where you should criticize him, and it would be too inconvenient to come up
with a more sophisticated rule, therefore we should blame all the cases. He
thinks all the cases are wrong. It's always, always a mistake, in every case.
Robin:
Well, no, seriously, at that point we realize that he could be lying, right?
We've been reading him as if he were a coherent person telling the truth. Once
we realize Clifford might be lying – that Clifford might realize exactly what
we're talking about, and decide this is the best presentation to give the
world because it has the best effects in the world – we can't even take an
author at their word anymore, because we realize, well, no, that's the naive
assumption.
Agnes:
Sure, if we're allowing ourselves that, if we're allowing ourselves a
Straussian reading of Clifford, then it could be. We could make them totally
consistent. They could actually be saying the same thing.
Robin:
Exactly.
Agnes:
But I also think you wouldn't have satisfied James with that interpretation. I
mean, obviously you wouldn't have satisfied him because you're going to stick
with the Cliffordian rule, but also because James wants to say more than just
that there are cases in which blah, blah, blah. He thinks these are all of the
most important cases. What he wants to say is: "Yeah, OK, maybe there's this
little corner of science where you could care about evidence and stuff." But
for real life – oh, he has such a great line about the clouds; I'm going to
find it somewhere. You know, in real life, we have to...
Robin:
But I can go further. I mean, he only gives a couple of examples. I have a
whole book, The Elephant in the Brain, showing at a much wider scope and in
much more detail that, in fact, these sorts of usefully mistaken beliefs are
very widespread. So that's the context: if I'm going to be pushing the honesty
norm, I have to accept that scope of how often it goes wrong. But I still
might be willing to do it.
Agnes:
And yet you still think – you still might want to have the honesty norms.
Robin:
Because I don't see the other alternative as feasible, right? If we just
excuse everybody from everything, and never hold them to any sort of
intellectual norms of argument, coherence, evidence, consistency, et cetera,
how are we to cooperate? How can we inquire? How can we refute? It seems like,
in some sense, refutation is threatened if we can't presume that someone's
trying to be consistent.
Agnes:
It seems like, on that view, when you're blaming someone, you couldn't do it
sincerely. I mean, if you had the full point in mind when you...
Robin:
Well, I could do it sincerely if I were incoherent and inconsistent, right?
Again, that's just showing more examples of how, once we give up on these
useful norms, so many of our usual arguments no longer have their strength.
Agnes:
It seems to me that you're already in that situation, though. Like, with the
version where you get it back by saying: we don't see any other alternative,
so we'll have the principle of honesty. That's already having given up on all
the usual norms, right? It's like, well, I don't actually think they're being
honest, but I'm going to...
Robin:
So I think we're over our usual time here.
Agnes:
That's true. OK.
Robin:
And so, we've done enough for tonight. But until we meet again.
Agnes:
OK.