Why is Truth Good? with William Eliot
Hi, Robin. Hi, Will.
Hi, Agnes. Hi, Robin.
We are joined again by Will.
And he is going to tell us what we are going to talk about.
We don’t know.
Well, a surprise, sort of. I gave you a little hint a couple of days ago. It's a topic I'm curious about, and I think I'm pretty confused about it. But it's something there has been a lot of discourse on, for many millennia. And it's basically: why is truth good? Why is truth good? Or in parallel, although it's not the actual opposite of this question, why is fiction bad? And these kinds of explorations. So the thing about truth is, I'm worried that a lot of defenses of why truth is good rely upon the definition of truth: that truth represents reality, that truth is linked to what is actually the case rather than hypothetically, and so on and so forth. But I'm reluctant to use those as reasons why truth is good. It just seems like maybe a tautology, some kind of argument that's looping back on itself: truth has been defined a certain way, and then someone is saying it's good because that's how truth is defined.
So can I give you one standard philosophical reply? I'm not saying most philosophers would give it, but some would, and I find it practically satisfying. That's the place to start. Suppose we are playing Monopoly and you ask, "Well, why are you trying to get so many houses and hotels and things? Why are you doing that?" And you're just like, "Well, that's what Monopoly is." Right? Part of what it is to play Monopoly is to aim at certain things. But suppose you're not playing Monopoly? Or you're playing baseball: why are you trying to hit a home run or whatever, right? There are certain goals that are internal to certain activities. Given that you are engaged in those activities, you are going for those goals. So you might think believing is like that. It's like Monopoly or baseball in that it has a kind of constitutive aim: part of what it is to be in the believing game is to be aiming at the truth. And we all are in the believing game. Not all the time: we do all kinds of stuff besides believing. We have desires. We form hypotheses. We entertain conjectures. We do all sorts of things that are not believing. But insofar as we do play the believing game, it's just written into that game that what you are aiming at is the truth.
I would invoke, at least for reference, as Agnes did, the usual decision theory rationale. So standard decision theory says you have to make a bunch of choices and you have preferences over these choices. And if your choices meet certain axioms of consistency, which most people think are reasonable axioms to meet, at least ideally, then you can be described by a utility function and a set of beliefs which are consistent with those axioms. And these beliefs are of the sort of beliefs about truth, or at least they have that same formal structure. And within that formal structure, if you were to get more information that narrowed your belief distribution, then you would make better decisions according to standard decision theory. So standard decision theory says there's a set of possible worlds, and you don't know which one you're in. You want to make the decisions that match the world you're in and the outcomes you want. The way you do that is you have a set of preferences and you have a set of beliefs. And the beliefs are literally just about which possible world is the actual world. And so the claim there is: knowing the truth, or having more accurate beliefs, literally is the same as knowing better which of the many possible worlds is the actual world. And if you didn't care which of the many possible worlds was the actual world, then the truth wouldn't be very useful.
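This standard picture can be sketched in a few lines of Python. The worlds, utilities, and probabilities below are invented for illustration, not anything from the discussion; the point is just that a belief distribution narrowed toward the actual world can change which action maximizes expected utility.

```python
# A toy of the standard decision-theory picture: a set of possible worlds,
# beliefs (a probability distribution over worlds), and preferences
# (utilities for each action in each world). All numbers are illustrative.

worlds = ["rain", "sun"]

utility = {  # utility[action][world]
    "take_umbrella": {"rain": 5, "sun": 2},
    "go_without":    {"rain": -4, "sun": 6},
}

def best_action(belief):
    """Return the action with the highest expected utility under `belief`."""
    def expected_utility(action):
        return sum(belief[w] * utility[action][w] for w in worlds)
    return max(utility, key=expected_utility)

prior = {"rain": 0.2, "sun": 0.8}      # broad, poorly informed belief
posterior = {"rain": 0.9, "sun": 0.1}  # narrowed after new information

print(best_action(prior))      # go_without
print(best_action(posterior))  # take_umbrella
```

If the actual world turns out to be the rainy one, the narrowed belief leads to the action that pays 5 rather than -4, which is the sense in which more accurate beliefs "literally" make for better decisions.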
Yeah. OK. So there are kind of two explanations there. One is Robin's and one is Agnes'. I'll reply to the latter first. So it seems that the word we are circling around here is reliability. So truth is reliable. But I'm not fully convinced that that isn't also just the definition of truth, that truth is reliable. And so to say truth is good because it is reliable – for some reason, I detect a circular argument there, and I'm kind of allergic to it, anything that seems like it's looping back on itself.
Do you reject or accept the concept of many possible worlds and one actual world? Because if you accept that, then I've almost got you, according to the standard story, by saying, "Well, truth is just which world is the actual world, and you could be wrong by thinking a different world is the actual world compared to the one that actually is."
OK. Yeah. So if we look at this world-level view where you've got, as you say, the actual world and then many theoretical worlds, which you might also describe as imaginary worlds – and I subscribe to that, I think there is one, leaving aside my Everettian-interpretation-of-quantum-mechanics thoughts. Now, the question is, why should we live in the actual world as opposed to an imaginary one? There are several different gradients we could go down. One is that we could find an explanation for the way things are which isn't "true" but is true in the sense that we have a definition for it and the definition seems to correspond. So I'll give you an example, right? Before we knew about certain cosmological things such as heliocentrism, or before we had lost our collective faith in a higher being, we had narratives and stories which we thought were the actual world, and we survived and lived quite happily in those imaginary worlds. And that's not to say that we have somehow found the actual world now, because we are still very unsure about those sorts of things. But it seems that there's this loyalty to pursuing what is the actual world against what is an imaginary world, such as a world based off of a religious text. We as a society have just chosen to go in that direction.
So it seems to me that in Robin's picture, he afforded the possibility of separating the question of what the actual world looks like – which world are we in – from the question of what would guide our decisions, right? And so when you say, "I'm skeptical that we've just defined two things," well, I'm actually not sure which of the two you are worried about conflating truth with. But certainly there's space here for the thought that you could learn – you could get more information about which world you live in – and that could have no impact on your decisions, right? So there could just be facts like, I don't know, what counterfactual historians study or something, right? They are interested in certain weird sorts of truths. I mean, it's a little hard to specify those truths in terms of possible worlds.
More challenging facts.
Yeah, certain facts. But the point is, it seems like the concept of truth can still apply there. So I guess what Robin is saying is that a subset of truths can be useful to us. And so he is not conflating truth and usefulness. In fact, he would be saying: not all truths can have any impact on what decisions you make. Only some of them do, and we typically invest energy in finding those ones. But – so is your worry that even the more minimal claim of truth – where truth is knowing which world you live in, regardless of whether that's going to be useful to you or not – that even that may not be capturing the concept of truth? Or is it that you want to separate out the idea of truth and what's useful in making decisions?

I think it's more of the second one. I'm curious. I should also say, I'm someone who is quite scientifically-minded, so obviously I care what is true, because I use it in doing analysis of the world and I find it useful in that sense. But the way this started was at this point of circularity, or at least where I can sniff some circularity, though maybe that's an illusion in the moment.
Right, because I guess if you pick the second one – that's what I thought you were picking – then I really think there isn't circularity. That is, there are two different concepts that he is bringing in: one is which world is the actual world, and the other is what bearing our beliefs have on our decisions, right? And the first is the question of truth, and the second is the question of how we live well, or something. And Robin is saying that decision theory says that this thing over here, which is defined independently of whether or not we make any use of it – not circularly – is going to help us: typically, sometimes, these things will help us make decisions.
So I think it might help to try to paint the picture of the non-truth-oriented people in as vivid a way as possible, because I think if you look at that picture in detail, you will see a version of truth there too. So let's imagine a religious community, and we are setting aside all the practical ways they farm and build houses and things like that; we're just looking at the topics where they talk about their religion. And there's a set of priests and other people in this religion, and they are making claims, or discussing various claims, about the Holy Scriptures and the various events that happened and what their gods might approve. And in that world, they are socially coordinating so that, for example, when the priest makes a statement about a certain topic, they just make sure they never disagree with that in the future. So it's a socially constructed process by which they chose that, but then they're just going to stick with it. And so over time they accumulate these things that all the others just stick to, and you, as a member of the religion, want to do that too. You want to go along with that. But along the way, it's somewhat useful to guess what the next version of that will be, because you have to pick sides in these various political battles, and you want to pick the side that's going to be aligned with the one that's going to be officially declared instead of the one that won't be. And we could say that this is the opposite of a truth-oriented world, because it's all focused on the social community and what they arbitrarily pick. But nevertheless, you as a member of that community have this decision theory task, which is: what will they actually pick? And for you, that is the actual world compared to the possible worlds, and you are trying to guess where the community will go in terms of making one choice or another. And so you use decision theory in the same way you might anywhere else. You're saying, "I'm trying to predict what this will be. I'm using my various clues to guess it. The truth is what they will pick, and that's the truth that's relevant here." So this is my attempt at a construction: try to imagine the world that's the least truth-oriented, and there's a kind of truth that matters there too.
Yeah. Even within a world of fictions, you might still be wanting to operate
in a way which is reliable and predictable.
Which is a kind of by definition …
True to the story.
Then you might ask, well, which kinds of constructed truths should we embrace versus non-constructed truths? So there's this whole field of philosophy of science and sociology of science, and a lot of that field has for a long time been focused on the idea of the social construction of science. And the key claim that many people disputed, or part of it, was the idea that the truths science produces were socially constructed: they were the result of some negotiation, a political process, rather than of objective nature. But for our purposes here, it's still just as much a truth.
Yeah. OK. And that extends even to how we live our lives today right now
because the things which we take for granted as being true might not actually
be that way. But we can still act in a way which seems to conform with
decision theory because we are just using what we know will continue. Is that
the process there? Is it what we know will continue or …?
A socially constructed truth is still a truth would be the key point.
That people believe.
No. It's a truth. That is, in the United States, we all drive on the right side of the road, right? That's socially constructed. In other parts of the world, it could have been the left. The physics of the universe didn't declare that; we socially chose it. But it's still the truth that in the United States you drive on the right side of the road.
There are different things you might mean by socially constructed, but yeah. One direction you might lean is that it's a kind of fiction; but another is that it's a reality that's the product of decisions.
And the question is, is there a difference between those two concepts? What is
an example of a socially constructed truth that isn’t actually a truth with
respect to some social world?
Yes. And so – you've been asking about this a lot, because I was like, maybe you are asking about the isosceles triangle: is that a social construct?
Yeah. I mean, I guess I'm inclined to think that yes – that the really pure examples of things that are socially constructed are things where, so to speak, there's no instrument that could apprehend them other than the mind that believes in them. And so the purest example for me, in all of my polls – and I was very surprised, because though it was deemed socially constructed, the margin was not as big as in some other cases, like money or whatever – is winning and losing. I think winning and losing is purely socially constructed. There's no reality that it corresponds to. So, there are other realities that sometimes track it: sometimes the winner gets a gold medal, right? And so your instrument could pick up that someone is holding a gold medal. But your instrument doesn't pick up that that's the winner. So there's a symbolic assignation of something as winning, or it could be that death is losing.
But in a social world where people are trying to win and not lose, and trying to calculate which move will help them win the game, we can use expected utility theory to describe that situation, and the true state is the state in which the person who actually wins, wins – and that is a truth. You might say it's disconnected from other truths, or an arbitrarily constructed truth, but it's a truth who won the game.
Can I give you a thought experiment that I just came up with? So suppose we have Robin Hanson's utility function and someone who knows it. So they know all of the things that you are trying to bring about in your life – and obviously, your own learning of truths, in your view, serves that utility. And suppose they were to say to you, and you believe them, OK: here's a bunch of falsehoods which, if you were to delude yourself into coming to believe them, would overall get you more utility, given your utility function. Here's a pill. If you take this pill, you will believe a bunch of falsehoods, and that will overall satisfy your desires better. Do you take the pill?
If I’m completely convinced that it will achieve my purposes then I have to
want to do it. [Laughs]
It's interesting that you frame it that way, right?
Well, there's no escape there, right? But of course, I have to wonder whether – by assumption, you said it will give me all the things I want, right? If that's true, what else could I want to do, right? I might think it's not possible that this thing will give me everything I want. For example, I might want to be in control of what I do and understand it. That may be part of the things I want. But if you tell me, "Yes, that's a negative, but it's outweighed by the positives. You don't know exactly how, but just trust us. It really is." Then I kind of have to believe that, right?
And what about you, Will? Do you take the pill, where you're going to believe a lot of falsehoods – it's a lot – but overall your desires are better satisfied?
It's just tricky, because I care a lot about things that are true, but I have to question why. And it may just be because I have, like Robin, a utility function that just wants to operate rationally – to continue life in a way which is going to be in accordance with what will give me the best life. But if I wanted the best life and I needed to accept falsehoods to get it, then I would do that. I'm sure this is like a Kafka story or something.
This is a point, just to summarize, that should be the obvious extension to the initial theory I described. So I described the standard expected utility framework, right? And in that framework, the truth is this hidden instrumental thing that you're privately using to calculate what to do. But actually, in our social world, we have various truths that we agree on, or fight over, or assent to, and those are part of the objects of our world in terms of what we want. They are not just a hidden set of tools we use to achieve other things. They are central objects of the many games we fight over. So, truths playing that other role could give you more reason to want to believe non-truths, according to that story, because it could help you win these games where the truth is part of the game.
I think the thing is that in life, there are actually a lot of these things. There are a lot of these non-truths that will help you get ahead in the world and achieve other things you might want. And I see you, Robin, as somebody who has systematically avoided them.
Yeah. This is what I was about to say, which is, the reason Robin was maybe kind of cringing at that question – but just as a general thought about his way of thinking – is that it is constrained by what is true. Which is why I guess I'm kind of surprised that you're placing quite a lot of priority on this social film over the top of truth.
So a simple way to think about living your life is that you are a creature that's part of a species, and you aren't that different from all other members of your species, and you evolved over this long time. And evolution puts you in this game you are playing, and it gave you various strategies and habits, and you are aware that some of those strategies and habits have you not being fully honest, or not fully aware of various truths around you. I think that's one of the things you can figure out about how you are and how you are made in the world. So the question then is: when you start to see a way in which your inherited programming is leading you to believe false things, or to embrace false things, how resistant should you be to that?
And you might say, well, evolution produced these habits that you have for the purpose of winning the usual games, which is its best guess of what you want. And so you should go along with whatever habits evolution is suggesting: believe the truth when it suggests you do that, and hide from it when it suggests otherwise. That would be the average strategy, exactly. So the question then is: how good a guide to what you want right now, in the world you are actually in, are the evolved habits that your ancestors bequeathed to you – habits that were pretty good in a different, ancient world for some sort of average person in that world? And so one thing I could say is: if I have specialized in trying to figure out complicated, detailed truths in particular areas of the world where people are quite inclined to mislead themselves, then I have assigned myself a task that includes resisting these pressures. That's the job I've picked. That's not necessarily a guide for everybody else. They didn't pick this job. They aren't necessarily in this situation.
It has to be a guide for everybody that reads your book.
It would also be an issue for anybody who takes as input the outputs of what I produce, yes. So other people will have to ask to what extent they want to specialize in areas of thinking and ideas wherein there will be a lot of evolved, inherited temptations to believe something other than what's true. And how comfortable are they with embracing that as their goal and agenda, realizing that there are substantial chances that it will come at costs – costs that, at least from evolution's point of view, evolution guessed weren't a good idea. And you're going contrary to its guess.
Yes. So I think that angle of evolution is interesting. We talked a lot about evolution last podcast. But I asked my friend this question about truth, whether or not truth is something that we can appreciate without defining it in terms of itself. And he said, the truth helps you not die. The truth is something that has, in a sense, value for evolution, because of the fact that it's linked to reality; it gives you the best tool for working out how to stay alive and reproduce.
Right. Well, say some parts of your mind, using this truth orientation, would realize that launching yourself into battle ahead of the attack force might lead you to die.

But other parts of your mind realize that declaring to your feudal compatriots that that is exactly what you will do, because you feel so tied and allied to your group, might also be exactly a winning strategy, at least among those people – making that declaration quite against these other truths you might know, right? That is, evolution could have told you that there are some social truths that are especially important to us, that are in some sense in conflict with these other, more basic physical truths.
OK. I’m not sure I understood that entirely then.
So, in your religion and social group, we are thinking about attacking those people, or they are attacking us, and we are asking, "Will you run to the front and fight, or will you run away and hide?" And you are saying, "I will run to the front and fight. And I expect every good person to do that, and I feel really determined to do that." And evolution might have told you exactly to say that sort of thing and to believe it sincerely, at least until the moment you get near the front of the battle, at which point it will trigger other instincts. And that's the truth that evolution could have selected you to believe and say at that moment. Whereas, from a distance, analyzing the situation, you might realize you could get killed running to the front of the battle.
I see. It's kind of – evolution has made you believe – you could find an example where evolution has made you believe something that might not be actually true.
We were getting to the not-getting-you-killed part, right?
That there are times when evolution entices you to not believe you will be killed, in situations where it does look like you would get killed, exactly in order to win other social games.
Why wouldn’t it just have you believe like yes, I might get killed but it’s
worth it? Wouldn’t that be the better belief?
It might find it hard to move you that way, right?
It’s constructing you.
So why not construct you …
Because it doesn't have a group mind, isn't that the thing here? It doesn't have that type of thinking where – this idea that evolution is just an abstraction of what a gene individually wants to do. And within your body, it wants to stay alive, or it wants to pass itself on to the next descendant. So to die is counterproductive to that aim.
Right. But risking death might be worth it, because you have a chance to pass on your genes with the more upper-class members of the group.
So, from the point of view of decision theory, it's completely accurate to say that if you want to change an agent who has beliefs and preferences, in order to get them to take other actions, it's sufficient to just change the preferences. You don't necessarily need to change their beliefs. But more plausibly, your mind is just this huge, complicated amalgam of various processes, with all sorts of complexities that evolution found hard to manage, and it takes some sort of opportunistic, easy way to get you to do what it needs you to do. And the easy way will be to play with both beliefs and preferences, even though in principle it would only need to change preferences. It would be enough to make you determined to go to the front of the battle even if you are likely to die. But there's this other process inside that hears "I'm going to die," and it freaks out and starts overruling other parts of your mind and turning things off, and evolution didn't know how to deal with that. It was just easier to tell you, "Oh, you won't die."

Oh, OK.
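Robin's formal point – that either lever can flip an agent's chosen action – can be sketched with a toy expected-utility soldier. The probabilities and utilities below are invented for illustration.

```python
# A toy expected-utility agent deciding whether to charge into battle.
# Its choice can be flipped by moving either the preference lever
# (how much glory is worth) or the belief lever (how likely death seems).
# All numbers are illustrative.

def choose(p_die, u_glory, u_death=-100, u_safe=0):
    """Pick 'charge' or 'hold_back' by comparing expected utilities."""
    eu_charge = p_die * u_death + (1 - p_die) * u_glory
    return "charge" if eu_charge > u_safe else "hold_back"

# Baseline: realistic belief about death, modest taste for glory.
print(choose(p_die=0.5, u_glory=10))    # hold_back

# Preference hack: crank up the value of glory, beliefs untouched.
print(choose(p_die=0.5, u_glory=200))   # charge

# Belief hack ("oh, you won't die"): preferences untouched.
print(choose(p_die=0.01, u_glory=10))   # charge
```

In principle the preference hack alone suffices, but which lever is cheapest to pull depends on the messy machinery being hacked – which is Robin's point about why evolution plays with both.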
To me, that is surprising, because suppose there is something I don't particularly want to do. It's hot in this room, so I don't particularly want to go outside and run around right now, right? But if you made it worth my while, I could develop a preference for that, OK? Give me enough money or whatever. But now suppose you are like, "Look, I'm going to make it worth your while. I want you to believe that the sky is red." And you just keep adding more money into the equation, and I'm just like, "I'm sorry. It's not doing it for me. I just don't think it's red." And so it seems like the preference lever is just more movable than the cognitive lever of getting me to believe false stuff.
So we did an episode previously on rot. And I think rot is a concept that is hard to bring to mind for people who don't have a lot of experience with complicated systems. Anybody who has inherited, say, a large complicated software system, or even a rule system for a company, and then been tasked with making some change, is very familiar with the fact that even though in principle there are changes to any one section that could produce any outcome, as a matter of fact you will be searching for easy wins that allow you to make minimal changes to do whatever you want. And so most people in large bureaucratic or software systems are looking for these things called hacks. A hack is exactly not a general, elegant solution, but an opportunistic, small, local solution that will locally get you what you want, but not get it in a general, elegant way that would make it easier to make more changes later. So your mind is hacked. Your mind is a whole collection of hacks. Evolution hacked your mind because it really couldn't take a large, integrated, abstract perspective. It could just, at each point, make one thing win and another lose, in order to move your ancestors toward what it wanted.
Can I ask you a question, Will, about this question?
About what's good about truth. So, as I see it, human beings have two fundamental forms of motivation. One of them is to believe what's true, and the other is to pursue what's good. And Robin is inclined to do a translation thing: well, the reason we want to believe what's true is in order to pursue what's good. I'm not sure about that. I think we have a pretty strong, robust, independent thing where we want to believe what's true even if it's not going to be good, even if it doesn't satisfy our desires or preferences, and I don't think that can be captured as our merely having a preference. That is, I think I could reconstruct the example: well, you'll believe more truths if you believe this lie – I still don't want to believe the lie. OK. So, to my mind, there are these two basic motives, and you are worried about one of them, namely the believe-what's-true one. And I wonder why you're not worried about the other one. Why aren't you like, "Well, there's this thing we call good, and we are always after it"? Is it just whatever we pursue, or is there anything out there? Are you also worried about that, and about it being in some sense arbitrary, or are you only worried about truth?
I think I probably am worried about that. But I've also recognized that I have very limited experience to really be thinking about these things in general. So I'm concerned that some things are very tricky to define and untangle in a way that would let us be extremely confident that pursuing them, or trying to make progress towards them, is a good idea. So I guess that's where the truth questions come out of. It's interesting, because the way you've raised your question made me think that you wonder whether I'm seriously questioning truth. Well, I'm not in that mode. What I'm asking is: why is it that we like truth, and can we say why without just defining truth? That's more my question here. It's not that I'm wondering whether the truth is something good in general. By extension, it wouldn't be that I'm worrying whether the good is something arbitrary or something we should move towards. It's: what is the truth, or what is the good? And, just incidentally, do we have a way of defining that which isn't looping back on itself? Maybe it helps if we come to this idea of reliability. This is what I said earlier, and it's kind of based on what Robin was talking about with decision theory. It seems that reliability is the core useful element of truth: it can be used, in a sense, in a predictive modeling sense. And I'm curious what you guys think. Is reliability part of the definition of truth, or is it something we use truth to do – a reliable process, or acting reliably? Or, as I was saying, is reliability part of truth itself?
So quite often we have sets of related concepts that we see as fundamental. And then we ask, which is the fundamental concept of this set? And that's hard to do, because they are often quite strongly correlated, and the deviations are hard to uncover or measure, and we are often stuck in that way. It's just hard to pick out one. La Rochefoucauld has this quote, something like, "You can't look directly at the sun or at death." These things, if you try to focus on them very particularly, are just too much for you, and you sort of have to take them from a distance. But it's a common observation, that I think is roughly true, that we have these sets of related concepts, and we have different ways we could summarize them, but they usually in effect produce the same thing. And if you try to push on which one is the more fundamental, it's hard to do. So for truth and prediction, or reliability: it is a theorem, I believe, that typically, on average, when your beliefs are more truthful, you can make more accurate predictions, right? Then you might say, "Well, could I flip that around and make predictability the axiom, and derive the other things as results?" And that's just a matter of – well, can you? I mean, it's hard, but maybe it's possible. And even then, you might say: if you just care about predictability, then maybe truth is a concept you don't need, because predictability is the concept you really wanted. But can you really disentangle these two? What's the point? I mean, why bother? Don't you kind of know that you want to predict things, and the truth helps you do that, and isn't that good enough?
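The theorem Robin gestures at can be illustrated with a proper scoring rule: under the logarithmic score, the belief that maximizes your expected prediction score is exactly the event's true frequency. The true frequency 0.7 and the candidate beliefs below are illustrative choices, not from the discussion.

```python
import math

def expected_log_score(p_true, q):
    """Average log score for predicting with belief q
    when the event's true frequency is p_true."""
    return p_true * math.log(q) + (1 - p_true) * math.log(1 - q)

p_true = 0.7
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
best = max(candidates, key=lambda q: expected_log_score(p_true, q))
print(best)  # 0.7 -- the truthful belief predicts best on average
```

This is one direction of the link: more truthful beliefs score better at prediction. Whether you could run the derivation the other way, taking predictability as the axiom, is exactly the open question Robin raises.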
But this is, I think, the question of induction, right? The things which historically have been true – will they continue to be true? And you can't use historic occurrences of them being true to help you in the future. So maybe that's one argument against this rotation from truth to predictability. It's David Hume's induction argument, precisely for that reason.
But you’re going to have to make an assumption like that anyway to use truth
in some other ways. You’re not going to escape from making an induction
assumption. Maybe you should just give in and realize you’re going to have to
make an assumption like that.
No. I mean, you could just refrain – you could just say, well, there are ideas and there are impressions. And the impressions we have direct awareness of. And causation is an example of a relation of ideas, and those are things we can't ultimately explain. That would be Hume's – I mean, sorry, Hume doesn't actually take the skeptical route. He thinks he has a way around it, but people are more impressed by Hume's problem than by his solution. Hume thinks that habit has a kind of legitimacy. So the fact that we just keep doing it that way …
He said something like, “Let’s favor consistencies.”
Yeah, like there’s no – that the idea of necessary connection he thinks is an
illusion. The necessity or the idea that there are forces and stuff going –
but nonetheless he thinks you can kind of just go back to the habit you had
before. I mean just don’t think too hard about the causal forces. But I wanted
to raise something that is always – I’m not sure this is going to be directly
relevant. But it’s something that has always struck me as a really interesting
asymmetry between our pursuit of truth and our pursuit of goodness. And it’s
this. Whenever you are pursuing goodness, whenever you are trying to achieve
the good, you are trying to do some good. There’s something that you see as
being good and you are trying to bring that about. So if like you’re hungry or
you’re in some goodness-oriented process then you are not just trying to
achieve good in the abstract. I mean there is some good. Right? But – and so
you are committal. You could be wrong. But you are committal in so far as you
are pursuing good. You’re committed with respect to the goodness of something
that you are trying to bring about. And truth doesn’t work like that, and
that’s so interesting. So suppose you are trying to figure out whether P – you
are not trying to make it be the case that P, or to make it be the case that
not P. You want to know which way it is: whichever the right one is, that is
the one you want to believe, right? But with respect to the good, that’s actually not the way it
goes. You actually start with some like, “No, this is what I want to bring
about.” And it could later turn out that you were deceived or whatever. So I think
this is part of what makes orientation to the truth just a little bit puzzling
is this kind of openness where I’m like, “I just want to know which way it is,
whichever way it is.” Conditional on P being the case, I want to believe P.
But conditional on not P being the case, I want to believe not P. And so it’s
like, which one do you want to believe? And it’s
like, well, whichever way is true. So there is this really interesting kind of
openness, and it’s like – it sounds like, well, conditional on my being hungry, I
want food. But conditional on my being not hungry, I don’t want food. OK.
Which one are you? And you can answer. You are like, “Yeah, I’m hungry.” And
so, I’m not unsure which of those two ways I’m going. And so I think – at
least it’s my sort of intuition – that some of the reason why people raise the
puzzles that you just raised, and ask, what are we after in wanting truth, is
that orientation to the truth is open in this really distinctive way, and so it
doesn’t look like our other pursuits.
So I’d like to sort of mention how truth ends up being political [Laughs] in
that in our world at least, it’s very common for people to describe the
difference between their political group and other people’s political groups
as that theirs is more truth-oriented. They are a part of the reality
community, etc., and that the other groups are more delusional or unwilling to
face the truth or self-deceived. And in that framing, then people are clearly
saying that truth orientation is better. Their group is right more because of
that, and that’s their main explanation for the disagreement between them and
the others is that they claim that really deep down, the other people know
they aren’t being so truth-oriented. They really know that they have these
other priorities. And that’s just this common framing that people use on many
different sides to explain differences on fundamental topics and to describe
why they are right and other people are wrong. And some people, like say
rationalists for example, it’s a community we are somewhat aware of, play this
game further. They don’t just claim that we are right and they are wrong. They
try to point to many habits of our community which they can use to say, “See?
This shows that we are more truth-oriented. We know Bayes’ theorem. We collect
statistics. We have these refutation processes or whatever.” And many academic
disciplines have done similarly. They have some other group that they feel
that they are rivalrous with and that they are disagreeing with and they will
point to their methods, statistics, logic, whatever, as the reason why they
are right and the others are wrong. And so that’s, to me, the context I tend to
think of when I hear this discussion about how important truth is. I think
almost everyone, when it comes to this us-versus-them thing, will be leaning
toward “we care a bit more about truth,” even if they might also acknowledge
that sometimes we are better off not acknowledging the truth. They want that to
be a human universal and not something that’s distinctively more true about
their group, because then it would make their group wrong.
Yes. So it’s common between all the infighting groups that they all profess to
care about truth.
Right. In the abstract, when it’s us versus them. But when you move aside from
that which-group-you-are-with context and start to think about, say, being
romantic and how much you believe in your romantic ideals, then people will
start to celebrate being optimistic and not necessarily being cynically
oriented toward raw facts. So they will celebrate some other stance toward
truth outside of this political rivalry.
Yeah, that’s interesting in and of itself – I mean maybe romanticism is one
way of describing it. But it’s this kind of choosing of a misty, rosy view of
life because it’s preferable to something that is – I mean you used the word
cynicism. It’s something about ignorance maybe, or something about choosing not
to get bogged down in the details, that has a valuable element to it, at least
in a romanticist frame of mind.
Yeah. When you are thinking about a romantic versus a nonromantic, most people
find the romantic more interesting and attractive and friendly. The
nonromantic seems dour and untrustworthy and suspicious, in that romance
context. But again, when we go to the political context, people slip around to
saying that they are definitely on the truth side.
Yeah, I challenge this because, in a sense, I find – what could be sadder than
not looking at what the truth is, or not appreciating that photosynthesis is
what makes plants green, and instead having some kind of fantasy about a
magical line running through the wand, for example, as a romantic might say,
some beautiful fiction? In a sense, maybe this is kind of – I’ve swallowed my
argument before I’ve put it out into the world. But I’m saying, isn’t it kind
of almost a romantic thing to appreciate the world as it is rather than as a
misty-eyed view would say it is?
So, we recorded a podcast recently on James criticizing Clifford, where James
offers exactly the opposite argument: that you will need to romantically
believe in things to make them happen.
I’m so pleased about that, Robin. You really remembered what we talked about.
Yeah. So can I give the example that James gives? So sometimes you have to
first believe in the fact before the fact can come. So here is like one
example that I like. You’re on a train and some robbers come to rob the train
and James is marveling that like say, a band of five or ten robbers can rob a
whole giant train with like hundreds of people in it maybe. And it’s like
imagine if just all those people believe like we can overcome these robbers,
they easily could, right? If they all just had faith, right? And they might
say, “Well, look, there’s no evidence, right?” But like if they all just were
to believe, they would then rise up together and they would overcome the
robbers. And James says, there are tons of stuff like that in life where the
cynical point of view is just equivalent to the point of view of like, I’m
just being scientific and I can tell you that I can’t myself overcome these
robbers and I have no reason to believe that everybody else is going to rise
up. And so he just thinks like the people who have the – the other example he
gives is like often when you want to be friends with someone, in effect, you
have to treat them in a friendly way before you like have evidence that they
are going to be friends with you. And by sort of giving them the benefit of
the doubt and sort of seeing them as your friend, you make them your friend.
And if you were going to be sort of ruthlessly truth-directed in one sense,
you would never believe and you would be like, “I’ll wait and see whether he
is going to be friendly to me,” then nobody would ever be your friend.
Yeah. But isn’t it kind of incorporated in the ruthlessly truth-directed way
of thinking that you would default to that behavior, because in a cynical way,
it’s going to succeed?
Suppose people can see through you; suppose you are not much cleverer than
everybody else – and the average person isn’t much cleverer than everybody
else – then it may be that if you take that attitude, people will see through
you and nobody will be your friend.
A third example given was that often, someone will woo or seduce someone else
via a high level of confidence in the success of their pursuit. Right? If, say,
this will work 2% of the time, and they projected their belief that this has a
2% chance but they’re going for it anyway, that wouldn’t be very persuasive or
attractive to someone being wooed. But if they believe this has a 70% chance of
working, even though objectively it has a 2% chance, and they persistently
pursue it as if it had such a 70% chance, more often they can succeed.
Yeah, sounds convincing.
But these are cases, then, where the world has conspired to make you more
successful if you don’t believe the truth.
And there are also cases where, classically, one invokes romance itself.
People will have romantic visions about romance, and it’s like, with every date
you go on, you kind of have to think, “Maybe this is going to be the one.” If
you didn’t think that, it probably would not work out. If before your marriage
ceremony you are like, “Fifty percent chance of divorce,” that increases the
chance of divorce to higher than 50% …
And in say, philosophy of science, people have suggested that most researchers
need to have overconfidence in the success of their research program to
motivate their pursuing some unusual approach compared to the status quo and
that science wouldn’t work nearly as well if people were not substantially
overconfident about their particular research programs.
OK. So these are all like reasons why, in a sense, not believing the truth, or
at least not thinking about the truth or focusing on it or making it a key
part of your frame of mind, helps you to succeed. But this also seems kind of
weird as well. Like, isn’t this not playing – to go back to monopoly, this is
kind of like not playing monopoly for the sake of being a better monopoly
player, in a kind of strange way. I’m not sure how the analogy would map
directly but …
So I mean, if the monopoly game was supposed to be just the analogy to the
belief game, which is how we were originally setting it up, then it would just
be that you would lose the game on purpose sometimes, which doesn’t make any
sense. If with the monopoly game we now change the analogy so that it’s life,
well, it’s clear that sometimes in monopoly, you will sort of intentionally
give up a property or intentionally not buy some – you might not buy Boardwalk
and Park Place because, like, nobody ever lands on them for some reason. And so
the rest of you …
At least the brown ones.
The blue ones at the very end, they have these huge rents, but somehow the
game is designed so that nobody ever lands on them. And so, it’s like,
globally you are still trying to make money, but locally you might lose money
because you think overall you’re going to make money. And so, I actually do
think that Robin’s framework, the decision theory framework, has an easier way
of making sense of these cases than the one I was suggesting, where belief is
the truth game. I think the thing my framework has an easier way of
making sense of is the fact that in the cases where it’s to your advantage to
believe something false, you can’t actually just get yourself to go ahead and
do it. So suppose you just start out with this point of view: look, for each
date, there is like a 1% chance this is going anywhere. And someone is like,
“If you think about it that way, it’s never going to work. You have to think
this is a 90% chance with every
date.” It’s going to be really hard to get that percentage to shift over, and you
could give them the whole argument we’ve just given and yet, it doesn’t work.
And you wonder, why doesn’t it work? I mean this person just wants it to work
out and you’ve shown them what happens. They have to do it in order to make it
work out. It’s just they have this wrong credence. Why can’t they just adjust
in the way that you would adjust with the properties?
But as an actual factual matter, many people do successfully grow up in
environments where they are trained in habits of dating and professional
competition and school such that they assimilate habits that are basically
optimistic lies but they never had to explicitly address it and consciously
adopt it. And so, they can quite successfully be not truth-oriented in those areas.
Absolutely. But the question is, why do we have to do that giant rigmarole?
With respect to preferences, I mean, you can just shift and adjust when you
see you have reason to. Here, what you’re saying is, in effect, you have to
elaborately construct the environments such that the person winds up
believing, “No, really, it is a 90% chance this time.” And then they are like
insisting that this is the truth. Right? That’s a distinctive feature of the
truth that it works that way that you had to do this very expensive process to
get them to believe, which is a process that appeared to make the world like a
Truman Show-type world that you set up for them, where they have
all the beliefs that will make it work out for them because you couldn’t just
get them to just select those beliefs, even though all they want is to be …
I’m just kind of interested in something you talked about earlier – like, in
what direction is the gravitational pull. So you were talking about truth
having this strange thing where you care not about what the answer is but
about the fact that it’s true, as opposed to a situation such as goodness,
where you do care what the answer is, because in a way, the quality of its
goodness is somehow more closely bonded with the thing itself, as opposed to
this truth which is a big circle around everything. I don’t know if …
I mean your concept of reliability is then closer to the concept of goodness,
right? So …
… reliability is this sort of thing that you know which direction you wanted
to go and you’re happy when it goes there and that might be a reason to think
of reliability as more fundamentally what you want than the truth. You want
the truth indirectly because it produces reliability rather than vice versa.
Right. But your original worry, when you were worried about Robin’s view, was
something like: yeah, but why think the truth is just reliability? Is that
somehow circular? Is something going amiss there? And I guess there is
something that we missed. Here’s how I think about it, and how philosophers
also think about it: we have two basic kinds of mental states, mental
orientations to the world, and they are the orientation of belief and the
orientation of desire. And when I have the orientation of desire, then I have
in my head some
representation like say, I want a cupcake. And I have a cupcake image in my
head and then I’m trying to make that real like maybe I go to the cupcake
store or maybe I bake a cupcake, right? So I try to make the world – the world
is sort of soft and malleable and I try to mold it so that it has the shape of
my representation. That’s how desire works. That’s how the pursuit of good works. Belief goes the
other way. It’s that say, I want to know like is there a cupcake in that
store. I want to know that. Well then, my mind is like a blank and I want it
just to reflect whatever the truth is. So first, the world is fixed and then
my mind is supposed to mold to whichever way the world happens to be, so
sometimes philosophers call that direction of fit. So the direction of fit of
belief is different from desire in that the direction of fit of desire is a
world to mind, whereas the direction of fit of belief is mind to world.
And that’s explained by the observation I made in our podcast that within the
standard expected utility framework, preferences are the part about you and
beliefs are the part about the world. So you’re trying to vary the beliefs to
match the world but not vary your preferences to match the world. The
preferences are varied to match what you are. I wanted to observe that even
though we mostly think we want to acquire more truth, I think I found out in
the process of writing The Elephant in the Brain that we too easily make that
assumption. So The Elephant in the Brain is a project whereby we look for the
hidden motives in life and it’s easy beforehand to say, “Yes, I want to know
what the hidden motives in life are for myself and other people,” because you
presume that, “I’m a truth-oriented person and I want to know the truth.” And
a heroic scholar has that as their task, and I am a heroic scholar of course.
And then you dig into the hidden motives of human behavior and you find out
what they are, which is a mixture of pretty and ugly, various things. And you
find that maybe you didn’t want to know as much as you thought. You liked the
idea of learning the truth, which reified your sense of glory and heroics, but
what the truth is, is that you aren’t as heroic as you’d like to think, and
neither is anybody else, and they don’t want to hear it. And they aren’t going
to reward you for finding out and telling them so much. You made a presumption
that you want to know the truth, but I think in fact many people find out
through their lives things they presumed they wanted to know the truth about,
and then found out about, that they didn’t actually want to know. But the only
way to find out is to realize that there it is, you know it, and you’re not so …
Yeah. This is, I mean, something we worried about in the car, when I was
saying that I was very loyal to our ideas and you pointed out that this can
often mean you’re loyal to groups. So it has panned out in what you are just
saying, in that you may say you’re loyal to the truth but in fact you’re loyal
to something that might be described as your artwork of the truth, where you
accept – if truth is like a color palette and you’re making a painting of the
real world, you might choose, “Actually, I’m not going to have any black in my
painting,” rejecting certain truths.
And you might not think that is who you were ahead of time.
So one of the most disturbing truths you will find out is how much you
actually care about the truth. When you go, “I’m just trying to discover the
truth …”
Robin, you were saying that like politically, everyone likes to paint
themselves as being …
For their group, yes.
Their group is on the side of the truth. And I sometimes wonder why. I mean,
you say we have this heroic idea that we want the truth – and why that? Why
not have the idea about yourself, “Well, I want the truth and so forth only if
it’s good for my group”? Isn’t that – wouldn’t that show loyalty way better,
if you are like, “I’m willing to lie for my group”?
Well, I think people have in mind an audience. So, an audience is hearing two
groups and one group is saying, “We just care about the truth,” and another
group might be saying, “Well, we believe what we find comfortable to believe.”
And I think they both kind of believe that the audience wouldn’t find that
second position as persuasive.
But why not? I mean unless people really fundamentally care about the truth,
which would – that seems circular to us.
They might care about the truth of which group to join.
Right. But like, if I join this group, it’s going to be like, really – we will
be really loyal. We will always be on the same page and stuff. Why not join
that group?
I think people often do make that appeal. They just don’t make it in the truth
language. They make it in another, more indirect, term, so I think that appeal
is in fact made.
I guess I’m just wondering – it seems to me very striking that we, in
politics, universally make this appeal to truth, and that it has to then speak
to something in the audience about what group to join. For instance, it speaks
to something in Will, like when they come with their vouchers and stuff and
say, these are our ideas and we care about truth – that makes him want to join
their group. And he doesn’t just want to join because they are very groupie or
very joinie or something; he wants to join because they appeal to this thing
that appealed to him independently of his being in the group.
So that’s what I believe, and I’m also very convinced by Robin’s kind of
group-level way of thinking. I was going to say maybe this is on a similar
line; it’s kind of like an answer to your question. Just walking around DC, I
see loads of signs advertising different candidates for mayor, so I was
thinking, how would you make the best kind of – or how do you make the best
campaign, I should say. And I think you have to say things which the voter is
going to be confident you will actually do. So it’s like actionability. And if
you are saying things that aren’t true in your campaign, everyone is going to
be – there was somebody, a couple of years ago, who said that everyone was
going to get a horse or something like that.
Yeah, a pony.
Oh, is it a pony? I can’t remember. But let’s take that and make it stupid. So
say I was a candidate running for mayor of DC – I think it’s mayor – and
everyone is going to get a rainbow-colored novel(?), then it raises
credibility issues for all my other promises because the actionability of my
promise is very low. So in a sense, truth, at least when it comes to this
utility of getting someone to join a group, is important because it’s linked
to the actionability of what a group is fighting for.
I think often, when people are trying to persuade an audience, and to take
positions that appeal to an audience, a big question is what kind of
constraints or limitations can the audience actually notice and take into
account? And when the audience can’t really notice some of them then the
speakers are induced to ignore them as well. So imagine there are ten
different categories of social spending by a government and you have a limited
budget, so you couldn’t increase all of them. But we have a separate debate
about each topic, and in each debate, each candidate says that they will
increase that one.
And the voters never notice that they made all these different promises, which
aren’t compatible together. But if they don’t bother to notice those
inconsistencies, then the incentive is to go at it, because otherwise you’d be
– say I’m going to increase the first three but I can’t increase the rest
because they aren’t in my budget. But now, in the rest of those debates, the
other candidate says, “I’m going to increase that.” And I say, “Sorry, I
can’t.” And they sound like the better candidate because, look, look at all
the things they are going to do.
Yeah. And this is a problem of the voters not being able to assess the …
Right. And so, I’ve made this observation that an awful lot of the flaws of
our current public debate and political systems are often laid at the feet of
politicians or various intellectuals, but you could say – really, should we
lay them at the feet of the audience, who can’t make various subtle
distinctions? So for various kinds of signalling games – I might complain in
academia that people do all these over-the-top efforts to make complicated
models or complicated statistical analyses that aren’t actually that helpful
but that impress audiences. If the audience isn’t smart enough to notice
whether they are actually useful or not, how can you blame these people for
doing what the audience is rewarding them for? And so in some sense, we will
have a better world when we have better audiences that appreciate the pitches
being made to them.
So another way in which you kind of don’t want to tell the truth.
When your audience can’t tell the truth either, yes.
So isn’t it a bit paradoxical, though – this idea that you have to make these
really, really complicated models to impress a group of people who aren’t
smart enough to figure out whether or not the models are useful, but who can
follow the models? How’s that?
They can just see that they’re complicated. They can see that it would be hard
to do that. I would find it hard to do that. If they can do it, they must be
able to do more than I could.
Like in philosophy, when people give talks, they tend to have complicated
papers or complicated arguments, and then people make objections. So they have
to understand them. And I presume when someone comes and gives an econ talk,
some people make objections, right? So they are like, “Hey, you made a mistake
here. This seems wrong. That seems wrong.” And you couldn’t do that if you
didn’t understand it.
Of course, no. But the question is: if a much simpler model would have served
the same purpose, would they still do the complicated model?
And wouldn’t the audience notice that though?
That’s the whole point. The audience may not care, because the audience may
also be mainly interested in showing that they are clever enough to find these
objections. And the ultimate audience, who is the consumer of all these
things, is just going on the fact that these people have been selected, out of
this competitive process, to be the ones who can make the complicated models
and find the holes in the complicated models. And so, those people have shown
evidence that they can play this complicated game. And again, if the audience
were smart enough to be able to ask, isn’t that more complicated than you
need, then maybe this process could be deflated. When the audience doesn’t
notice that, or doesn’t bother to ask that, then the game goes on.
OK. So how about this concept of a useful fiction? I think that’s kind of what
we’ve been approaching here, with this idea that debates can do this – each
candidate promising something which isn’t actually actionable, at least when
viewed as a group. But for them, that’s a useful fiction. So, how do we –
assuming that there are useful fictions in the universe, and I’m sure there
are, even if it’s something simple like a children’s story that later helps
that person become a kinder person. Let’s say that was a fiction that was
useful. So kind of across the spectrum from children’s story to political BS,
we have useful fictions. I’m trying to think about how they fit into the
puzzle piece that concerns …
I mean, I would suggest the word fiction connotes a much more specific, much
more structured thing than merely misleading statements. The world is full of
usefully misleading statements, but fiction connotes a very particular subset
of those.
Yeah. I mean misleading statements, something that’s a falsehood essentially.
So I mean, I would even make a statement like: fictions are pretty much the
only things that are guaranteed to be useful, because somebody had to make
them up for some reason. Whereas many facts aren’t useful at all. So fiction
is designed for a purpose, and a misleading statement is designed for a
purpose, right? You want to mislead people in one way rather than in another
way because it’s useful.
So all fictions are going to be useful. The question is just, to whom? And are
they net useful for everyone – maybe that’s the interesting question. I think
that there are definitely some fictions that are net useful to everyone – or,
not definitely, that’s too strong – but plausibly: the fictions that have
withstood the test of time, like Homer’s Iliad or the Bible or whatever. It’s
likely those are net useful to people, given that they have stuck around for
so long.
Let’s just point out that models are false. That is, in almost any area of
intellectual inquiry, a common thing to do is to produce models. They are
simplifications of reality that can be more easily followed. Their
implications can be traced out more easily. And we quite often use models
because we can do those things, and we know that models are false. That is,
they are oversimplifications.
Very nice. Yeah, OK. But then …
Like all fictions, they are useful.
So maybe the fictions are useful because, like a model, they correlated with …
They can be useful for many purposes, sometimes to mislead people, sometimes
to help people see the truth. So you could say that a model literally helps
people see the truth better. Yeah, because otherwise it’s just this opaque
cloud of un-understandable mess. And at least a model shows them
some patterns they can make sense of that otherwise they couldn’t see.
How about Homer’s Iliad and the Bible? Do you think it’s because, in a weird
way, they are models that they’ve survived?
I think that – so I have my own theory of fiction, of artistic fiction, which
is that they are designed to show us the bad side of life. So Robin was
bringing up the fact that like knowing which world you are in of all the
possible worlds can be useful for making decisions. But there are actually a
lot of stuff you might learn that’s not that useful for making decisions, like
how profoundly you are suffering right now or something – getting a really
fine-grained and detailed understanding of all the ways in which you are
unhappy and all the ways in which the world is structured so as to make you
miserable. All of that might not be that useful for you in guiding your life,
but it’s a bunch of truths. You might be interested in those truths. And I
think you systematically turn your attention away from them
because they are not useful, they don’t give you guidance. And my own view –
the blunt-instrument view of art, of what art does for us – is that it just
allows us to see that dark side, the dark side of the moon, the dark side of
the human phenomena, the bad part that we tend to look away from to whatever
extent we can. And so you will just find that when you look at fictions, they
focus on bad stuff: on unhappiness, on suffering, on betrayal. The good stuff
tends to be highlights, or it has a secondary role relative to the bad stuff.
And so then, the fictions that survive are the ones that really bring home
to you a certain set of evils. Now, what are the evils that Homer’s Iliad is
trying to bring home to you? I think – I’m not sure if it’s going to be the
same for every audience, but like one is just the problem of having a side,
like a military side, the Iliad is about this dispute between Agamemnon, the
leader of the Greeks and Achilles, the best fighter, for power, right? And
it’s like how does the Greek side hold together if the person who is in charge
and the person who is best at killing aren’t the same person? And you’re
always going to get this problem with groups, where they have trouble cohering.
And really, if I think about Genesis, the first book of the Bible, it's a very similar problem. You have this problem of a group which at first is just Cain and Abel – just two brothers – but which one is going to be in charge? Right? One has got to kill the other. And if you think of Jacob and Esau, or of Joseph and his brothers, over and over and over again it's this story of: we are trying to get, like, the Jews going, right? We are trying to get a people going. But you have to start with the family, and it's already going to be war within the family, between the brothers, over which brother is going to be in charge. And so that's like a fundamental human evil, a problem of group organization, right? And I think, yeah, that is what these stories are focusing our attention on. I think there's also stuff in the Iliad about just the way in which the human body comes apart at the joints. That's a really – I mean, a lot of the Iliad is just descriptions of tendons being unstrung and the different places a spear can go to tear different muscles. And
that’s looking inside the human body in a way that you don’t and you’re not
allowed to do in most other contexts or just because you saw a corpse on the
street, right? If you see a corpse on the street, you would look away from it,
right? I mean you just wouldn’t feel like you’re even allowed to look at it.
But you see corpses in movies all the time, and we look at them, right? And so there's a kind of – yeah, I do think fiction is meant to show us truths. But here, it's just truths. It's not for the guidance of our life. It's just because we care about the truth, I think.
So let me give a different but perhaps complementary perspective.
Then after that, maybe we should let Will have the final words.
Yes, that sounds fine. We are running out of time. In general, the human world is full of all sorts of structures and things that are curious and different from anything in the rest of the animal world – not just stories: hospitals, traffic jams, hallways. There are just all of these things in our world. And people often ask, "Why is that thing there?" And the fact is that for almost all those things, because they touch on so many parts of our world and lives, there are a great many social forces and structures that make those things be there and shape how they are. But we are often asking: why is it there? In principle, you can imagine lots of different social pressures that could be structuring those things, and the hard part is to disentangle which are the most important ones. But often the conversation about these things tends to go like this: people look for optimistic or idealistic functions; we want to tell a story about how our world is good and how it works well. And so we might tell about how traffic jams could be time to think before you get home, or how the fact that it takes time to cook means that you can pause to savor the smell. I mean, we make up all sorts of stories about why things have various purposes, why they fill the roles they do.
And the problem is that we are prone to find pro-social reasons that make us all look like good people working together to achieve things, or even to make it look like it's a good thing that something exists – say, a traffic jam – when maybe it just shouldn't be there and there is no good reason for it. And people want to show their creativity by making up another explanation. And so,
I think what you really need to do, to be honest, is to go through this for any one of these things: collect a list of all the possible explanations you can come up with, then make a list of the key observational facts about the thing that you might need to explain, and then try to do a match, where you say, "Well, which of these explanations can explain the most of those puzzling observational facts?" At the end of that is where you would get a best guess about the theory. But what you mostly see is someone just saying, "Here's theory A and it fits some facts," and they are done, and they go on to the next topic. And so that's way too much haphazard theorizing, where people are just happy to have named any theory that could explain something, without doing the systematic comparison of different possible theories and the subtler facts they could explain. And so, I just
think, if you want to explain something like stories, that’s the process you
have to go through. I’ve gone only partly through that sort of process but
there’s a lot of subtle things that you would bring in and there’s a lot of
theories you could invoke. And so, what Agnes’ mentioned is certainly one of
the plausible theories but there’s a lot more.
Got it. Oh, as for a final word, I don't think I have anything too profound to say – maybe just a little humble reflection on the relationship between art and truth, potentially linked to something you were saying, Robin: that if it weren't for truth, it would be like a jumbled cloud with not much identifiable within it. If I recall my favorite pieces of art, or my favorite stories or movies or, to some extent, music, I think it's because they have been realistic that they've become my favorites – because they have this link to reality, this recognizable flavor of "this is true." And maybe, to link back to the ancient stories that have lingered throughout history, such as the Bible: perhaps it's because these best encapsulate truths about human life that they survived. We can only guess.
And that’s our podcast.