'Inadequate Equilibria'
Robin:
Hello, Agnes.
Agnes:
Hi, Robin.
Robin:
Today we're talking about a reading that I picked called Inadequate
Equilibria. It's a whole book, which is longer than the reading you picked, so
I apologize for its length. And I have to disclose that it's written by an
associate of mine. He was a co-blogger with me for many years, and someone I'm
fond of. But nevertheless, we both read it. And now I should ask you for your
immediate summary and take.
Agnes:
Yeah, I had a one-sentence summary I wanted to give, which is: our systems are
broken, so I'm allowed to trust my reason. That's the argument of the book as I
see it, right? So, there is some kind of prima facie problem with reasoning
your own way through problems, like trying to think through problems and come
up with theories. There's some thought that you're not allowed to do that
unless you're living in a broken world. So then the first part of
the book is: you are living in a broken world, and here are the ways
that it's broken. And the second part of it is: that's why it's OK to trust
your reason. That’s more than…
Robin:
So, I wrote a blog review of the book years ago. And one of my main points was
just that it's basically two different books, which aren't very connected, like
those two little clauses of your sentence. The first book is the
book of the title: where and how do civilizations get stuck? And the second part
is why you should then be allowed to trust your reason, or to disagree
with other people, why you're justified in disagreeing with others. And both
of those parts of the book have a lot of discussion, but there's not actually
much connection between the two. Because you’re merely saying why…
Agnes:
OK. Let me say the connection.
Robin:
Yes.
Agnes:
So, if you were living in a society that had optimized all of its forms
of coordination, so that everybody, for every purpose, was coordinating as
well as possible, that would include informational coordination. And so, in
that world, you wouldn't be allowed to think for yourself, because instead of
trying to think through something, what you should think is that the total
product of your society thinking through that thing would be better than your
own conclusion.
Robin:
Well, close, I'd say. That is, you would inherit a consensus on a wide range
of topics that would be pretty good. And then you would be able to make minor
changes, but that's all you would be able to make. So, he discusses financial
markets as a prime example of a place that you should trust. And that means you
shouldn't think the financial market prices are wildly wrong, or that you can
easily double your money in a year via your clever insight, right?
Nevertheless, the way that entire world functions and improves is that
individuals contribute to it. That is, individual traders do find a little bit
of information and make a little bit of a trade, making a minor adjustment to
the price. So, each of them needs to believe that, in fact, they are making a
small improvement on average. And the net effect is improvement, collecting all
these small contributions. But you just couldn't believe that the market had
made a huge mistake, that prices were very wrong and you had a big
update to offer.
Agnes:
I think that's right. But the thing he's even more interested in is not just
whether you can make a large correction, but also whether you can make
corrections all over the place. Like, you have to specialize, right?
Robin:
Right. So the financial people will specialize; like, they will find one
very particular area and try to improve that, and leave everything else to
everyone else. So, you're right that if the world were adequate, then you'd
only be able to make narrow, small improvements. But conversely, it's not true
that if the world is broken, you need to disagree with people. That is, the
world could be broken and everybody could just know that. And when you claimed
that you had a new insight or a deviation, everybody would say, “Well, of
course you can do that. Of course, that's possible, because the world is
broken.” But that's not the world we live in.
So, to me, the interesting connection is that in addition to the world being
broken, we believe it's not, or we think it's not, or we say it's not. And
that's the lever: in order to take advantage of a broken world, or to improve
on a broken world, at least for himself, he needs to reject that large shared
consensus.
Agnes:
So, the shared consensus is that we believe that it isn't broken?
Robin:
That it isn't very broken, yes.
Agnes:
So, I think that's impo– so…
Robin:
So, we could take his favorite example of the book, which is when his wife was
having trouble with winter darkness. They tried the simple fixes for winter
being dark and made modest lighting improvements. And then they moved to a
brighter continent for a while, and that helped; but the small lights in their
apartment didn't help, and so he thought, “Well, let's just put a lot more
lights in the apartment.”
Agnes:
Yeah.
Robin:
And that worked. But he says the literature doesn't seem to represent that as
a possibility, and you might say, “Well, there are all these doctors who
specialize in this problem. They don't talk about it. What makes you think you
could come up with some improvement here where they haven't?” So, there would
be this widespread perception that he couldn't come up with such an
improvement, because people presume that the world of doctors recommending
how to deal with darkness problems is not sufficiently broken, right? It's
believing that the world is broken that then emboldens him to try this
radical fix. Whereas most people wouldn't do that, because they wouldn't
believe that.
Agnes:
Sure, I think that's right. But that establishes the connection between the
two parts of the book. That is…
Robin:
Right. So that's what I'm saying I think the strongest connection is: that
we're living in a world where people…
Agnes:
OK, so right. We both…
Robin:
Where people don't believe it's broken. They don’t talk as if it is broken.
Agnes:
Right, right. So, it's both that the world is broken and that, because people
don't believe that, everyone is not going around trying to fix it.
Robin:
Right. They're not as aggressive or eager or willing to try all these things.
So that's, to me, the key connection between the two sides of the book,
although he didn't spell this out explicitly. And I think that helps us
think about the second part of the book, which is where we have to focus:
when are you justified in disagreeing? It’s when…
Agnes:
Sorry, can I just ask about that?
Robin:
Sure.
Agnes:
Suppose that lots of people read his book, right? And lots and lots of people
are convinced. And so, they all adopt his kind of do-it-yourself ethos:
do-it-yourself medicine, do-it-yourself dieting, do-it-yourself everything.
Right?
Robin:
Right.
Agnes:
But even supposing people adopted this do-it-yourself ethos, it still could be
the case that it wouldn't be so easy to aggregate all the information they got
from their do-it-yourself trials, so that…
Robin:
Right.
Agnes:
The world would remain broken. And it would remain true that it was reasonable
for people to trust their reason. So that shows that your premise, that most
people don't believe it's broken, isn't actually doing the work.
Robin:
So, there's a difference between being willing to explore something and
disagreeing with other people. You could say, you know, as soon as we heard
that he had this idea for lights, we could have all said, “Yeah, why don't you
try that? That sounds like a good idea.” And then we wouldn't be disagreeing
with him, but we would still be supporting his attempt to reason for himself.
So, thinking for yourself isn't the same as disagreeing. That's the key
distinction to make here.
Agnes:
When do you think you count as disagreeing with someone?
Robin:
Well, so I've published a bunch of papers on the topic of the rationality of
disagreement. And there's this large literature, a lot of which has been done
by philosophers. One of my papers is titled You Can't Foresee to Disagree, or
at least that's the title of a blog post summarizing it. So, people ask, can
you agree to disagree? And there's a famous economics paper from long ago, by
a Nobel Prize winner, on that.
And so, agreeing to disagree would be some situation where we're all fully
aware that we disagree and nevertheless we disagree. And my variation is, you
can't foresee to disagree, in the sense that if you're about to form a new
opinion, and I don't know what it is, I can believe that it will be different
from mine, but I can't predict in which direction it will be different. So, I
can completely accept that you will go learn something and that the thing you
learn will change your mind, and therefore you will disagree in the future
with what I believe now.

So, I don't have trouble with expecting that difference or disagreement, but I
can't predict it. I can't foresee it. So then, in this context, you might tell
Eliezer to go ahead and think about it and come up with something. And you
might think that whatever he came up with was reasonable. And it would differ
from what you think right now, and there would be that disagreement, but that
wouldn't be a problem; that wouldn't be agreeing to disagree.
Agnes:
That didn't really answer my question, which is– what I mean is, what does it
take for two people to count as disagreeing? So, suppose someone on one
continent believes P.
Robin:
Right.
Agnes:
And someone over on the other part of the world believes not P, right? And
they're never going to meet. In fact, the one guy has never asserted P and the
other guy has never asserted not P. It’s just that they would be disposed to
assert those things under the relevant circumstances. You might think that's
not really disagreement.
Robin:
Right. You're right, it's not disagreement. So that's the key point about the
agreeing-to-disagree puzzle: merely having two different estimates isn't the
puzzle; it's being fully aware that you have the two different estimates.
That's the puzzle.
Agnes:
Let’s start with this. It's possible for people to disagree. Do you agree with
me about that?
Robin:
Of course, of course.
Agnes:
OK. So sometimes people actually, in fact, disagree. It's not just when
they're on two different continents, but presumably when they're engaged in
some kind of shared intellectual enterprise, like they're trying to figure
something out, and they don't think the same thing about it, and they each
know that the other thinks something different.
Robin:
The word disagreement is a bit ambiguous, which is why we're using these more
precise versions. So, I would take disagree, in a broad sense, to just mean
you have different estimates, even if you're not aware of them. But you might
say, well, that's not a disagreement, that's just a difference of opinion;
fine. A difference of opinion exists when two people have different opinions.
And then you might say they disagree in a context where there's some sort of
conversation between them about it. And even that's still not quite strong
enough for these theoretical results to apply. And so, the more precise thing
we talk about is common knowledge of a difference of opinion.
Agnes:
OK, but I guess the thing I'm interested in is disagreement, not difference of
opinion.
Robin:
Or common knowledge of it.
Agnes:
I am interested in common knowledge of it. Because I don't think it really
counts as a disagreement unless there is common knowledge of it. And it seems
to me that it's both possible and rational to disagree in that way. And not
only is it rational for people to disagree; it's rational for people, it seems
to me, because I do it all the time and I think I'm rational at doing it…
Robin:
Right.
Agnes:
…to intentionally disagree, that is, to deliberately adopt the opposing view,
simply because it's the view opposed to the person you're talking to. And
there's something Eliezer talks about at the end of the book, where he talks
about this danger of rushing the process of agreement.
Robin:
Right.
Agnes:
You might think the actual real-world problem is that people agree much too
easily with one another, and that the rational thing to do is to intentionally
disagree, that is, to adopt the opposing position so as to inquire into the
truth.
Robin:
So, I'm going to take the philosopher's stance here in this conversation,
which is to say, well, let's pause and make sure we're clear about our
terminology.
Agnes:
OK.
Robin:
Because that is the issue. So again, there's a difference of opinion, where
people have different– but we even want to distinguish kinds of opinion. One
kind of opinion would be the thing you would say to prompt and further a
discussion, the thing you might say that would get people to react, that would
get people to respond, the thing that you might defend. That's one kind of
opinion. And then another kind of opinion is the thing you would act on when
substantial stakes were in play, right?
And the talk about difference of opinion in this book is primarily about that
second concept, the actual beliefs on which you would take substantial action.
It's not primarily about what you might say to further a discussion. I mean,
it's completely understandable why, in order to make a discussion go forward,
you might take different positions and explore their conflict. But you aren't
thereby inclined to take different actions outside the conversation, just
because you took that conversational strategy.
Agnes:
OK. I mean, I think that in a really honest disagreement of the kind that I'm
describing, you would be committing yourself to that position until it gets
refuted, right? And you would take action on it if it never got refuted;
that's what it is. You're exploring this idea, and you're going to stake your
claim there until it's proven wrong, which presumably, if you can think of
arguments against it, it easily will be, but still.
But in any case, the reason I brought up this whole thing of disagreement is
that I said, even if everybody believed the first part of the book, it would
still be rational to trust your own reason, because we wouldn't necessarily be
able to aggregate all the information that people got from their individual
trials. And then you were saying, “Well, the issue was about disagreement, and
whether or not you disagreed with what most people thought.” And that's why I
thought, “OK, well now we're talking about arguing rather than mere
differences of opinion.”
Robin:
So, my actual favorite example of the broken world is nothing that Eliezer
mentioned in his book, but it's really easy to grasp. On the freeway, when
you're driving past a city, there are often signs that say which exit to get
off at to reach a particular destination, like a stadium or a convention
center or something like that. And those signs are often just not the fastest
way to get there. Those signs are official advice, right up there on the road,
but your GPS or your map will quite often tell you another route is better.
Agnes:
OK.
Robin:
Now, if we all know this fact, then if I'm sitting next to you while you're
driving, and you don't follow the advice on the sign, I don't have a criticism
of you. I don't disagree with you, because I figure, well, of course those
signs are often wrong, and you are taking a different exit because you have
another reason. And you are, in a sense, disagreeing with, or at odds with,
the advice that civilization is giving you on the sign.

But if we all know that advice is not very reliable, then among ourselves, we
won't be disagreeing about the actions you take in defying the sign. So that
would be my concrete example of how, in a world where we all accepted that the
world is broken and the signs are wrong, we would be fine with somebody
defying and ignoring the sign. So, there's a lack of disagreement there, and
yet you're thinking for yourself.
Agnes:
So, we're talking here about disagreement in my sense, that is, verbal
argumentation; that's what's important there. Because I think you keep going
back and forth on whether that…
Robin:
Well, I mean both. But I guess, sitting next to you, I won't disagree with–
I won't have a difference of opinion regarding your choice to pass the exit
that had the sign saying we should get off there.
Agnes:
I see. So, your thought is that in the world where we all read the first part
of Eliezer's book and were convinced by it, there would be a set of verbal
disagreements that didn't happen. Whereas in the world where we haven't read
the book, the person who trusts their own reason has to have a bunch of
arguments that the other person doesn't. That seems to be a trivial,
insignificant difference between the two worlds. Like, is it just a matter of
having those verbal arguments?
Robin:
Well, I mean, arguments resulting from differing sincere beliefs: their
sincere beliefs differ, and then they argue as a result, so I'm equating those
two in this context. I'm saying, in one world, we think the signs are right.
We assume that usually, and so if somebody defies the sign, they're likely to
get pushback from the people around them, saying, how do you think you know
better than the sign?
Agnes:
They're having verbal disputes.
Robin:
Driven by differences of opinion.
Agnes:
Well…
Robin:
Not just an attempt to discuss it, right?
Agnes:
There could well be differences of opinion in the world where none of us
believe the sign. We could have different ideas about how to get there, right?
It's just that we don't know about them. We don't have the verbal disputes
about them.
Robin:
Or they're disagreements of a different character. So again, he's focusing on
the kind of disagreement where there is some sort of authority, and the
authorities have a certain stance, and you're supposed to go along with the
authorities, because the authorities know better. A great many of his examples
are of that form: the doctors recommended one thing, or the bankers' monetary
policy people set another, etc. And in the latter part of his book, he talks
about it as a sort of status-policing issue, that we often frame these things
in terms of, who do you think you are to put yourself up against these
authorities?
Agnes:
Yeah, I mean, to me, it's very odd that it would come down to some kind of
allergy to verbal disputes or something. Like, what if you like having verbal
disputes? What if that's a plus for you? Then you would be more likely to
adopt the do-it-yourself lifestyle in the world where nobody had read the
first part of the book.
Robin:
I'm not sure we're disagreeing, so I guess we should just make sure we go
methodically through the possible positions here. You know, so we agree, I
think, on what it means for the world to be broken.
Agnes:
I actually don't know that we agree about that. We haven't really discussed
that part of the…
Robin:
We rough– no, we've been presuming, I guess, that a broken world is one where
the sort of messages you get from the usual authorities are not very
trustworthy, in the sense that if you have substantial stakes and are willing
to spend some time thinking about it, there's a substantial chance that you
will be able to come up with something better.
Agnes:
But can I ask about that? Because I don't – we probably don't agree about it.
Robin:
OK. OK.
Agnes:
Because I don't agree with the first part of the book.
Robin:
All right.
Agnes:
So… or I've just been going along with the argument [0:20:31]
[Crosstalk].
Robin:
Right, right. Well, no, no, that's fine. Yeah. OK.
Agnes:
So, I thought that was your big problem with the book, and I thought I would
solve it for you, the connection between the two parts. But… it seems to me
that what he notes is that, with his wife, when she had this seasonal
affective disorder, he was able to improve on the medical establishment. But
he doesn't tell you all the cases when he wasn't able to improve. Like, I bet
he's been to the doctor a lot of times, and I bet a lot of times the doctors
said the right thing. In fact, he even gives an example of that.
Robin:
Right.
Agnes:
When his girlfriend saw these spots, and he thought he could out-diagnose the
doctor, and he didn't. So, I think what he shows you is that our system has
systematic failures; it's going to fail in many cases, but it's going to
succeed in many, many, many cases. And maybe it's succeeding in many more
cases, so it's only a teeny tiny little bit broken.
Why not think that? Just like with the Bank of Japan: OK, that was crazy, and
it was a big mistake. But think about all the banks of all the countries in
all the world, and think about even Japan at other times, right? So, it failed
for a little while, but mostly, most of the time, most of the system succeeds.
It's just that there are these little broken bits, and you can focus on those
and then say the system is broken.
Robin:
So, he's definitely not claiming that you should just typically disagree with
what the authorities say, right? He's certainly not recommending the
contrarian position of just believing the opposite of whatever they say; he
thinks that would go very wrong.
Agnes:
Right, and he's not even saying believe what they say only 50% of the time.
That is, he doesn't think that they're worse than chance, or only as good as
chance.
Robin:
So, I mean, he thinks that being wrong 20% of the time is a really big
problem.
Agnes:
Do you think he thinks that the medical establishment, in terms of what
diseases it diagnoses you with, is wrong 20% of the time? Because he doesn't
give evidence for that.
Robin:
Well, he doesn't give evidence about overall rates at all. Although, I mean, I
think we do have evidence about overall rates. But he doesn't really offer
that.
Agnes:
OK, but we're looking at the argument of the book. And what I'm saying is, it
seems to me a substantial flaw of the first part of the book that he doesn't
give evidence for overall rates.
Robin:
So, he gives a lot of compelling examples.
Agnes:
Yeah.
Robin:
And in his dialogue, he sort of goes through some pretty big examples. But of
course, I'm not sure how much the exact percentage matters for his thesis. I
mean, he's trying to address someone in the position he was in with his wife,
where she has this problem and he wonders, how much might it be worth to go
think this through and come up with my own alternative? That's the person he's
trying to address. And…
Agnes:
Yeah, but when you're that person, you don't know you're in that case, right?
So, look, imagine somebody interviews my kids, and they're like, “Tell us all
the worst things that your parents did. Tell us all the bad examples of their
parenting.”
Robin:
Right.
Agnes:
And my kids list all our terrible examples of parenting. And my kids are like,
this system is broken, right? My parents just make bad decisions, so I've got
to do this parenting thing myself. I've got to do my own upbringing, because
clearly, with my parents, I'm living in a broken system. That would not be a
good argument.
Robin:
But he's not making the analogous argument that you throw away the world you
live in and go live in the woods and try to reconstruct your own civilization,
right? He does recommend, I think, that you continue living in this
civilization, which means that you continue to accept most everything in it.
But nevertheless, the question is, if you have a particularly large problem in
front of you, how willing should you allow yourself to be to consider that the
usual authorities' answer might be wrong? And he goes into a lot of discussion
about how you should definitely not, you know, authorize yourself to figure
that whatever you think of is probably better.
He talks a lot about how you need to test yourself. That is, you need to form
a hypothesis about why the authority's advice is wrong, and then find a way to
test it quickly. And, you know, throw it away when it fails the test. But if
it passes the test, then you've got a win. So, it's not about initial full
confidence that you have an answer. It's more about being willing to just try
things out.
Agnes:
But I mean, I think his view is, “Look, when you are dealing with a major
calamity in your life, you should be holding the world to much higher
epistemic standards, and be less inclined to trust, and be more inclined to do
your own research.” That's common sense, and everybody already does it. That
is, these parents that he describes, with the babies that, you know, don't get
that special fluid– clearly plenty of them actually did their own research,
even without reading Eliezer's book, right? And they did it for exactly the
same reason that he did it in the seasonal affective disorder case.
Robin:
But apparently, most of those babies' parents didn't do that, and the babies
died. So the ones who did are a minority of those parents.
Agnes:
Right. But you might think, look, the issue is that when you're in one of
these cases, like the seasonal affective disorder case, you don't know that
you're in that case, right? And so, you could well be in the case where you
will drive yourself insane trying to find your own solution to the problem.
So, one thing you have to do is make an assessment of which sort of case
you're in. And I'm sure the parents whose babies died did put effort into it,
right? They maybe didn't put in as much effort as those other parents, because
it's a hard judgment call. And I guess I don't see how reading this book would
tell one of those parents how much effort to put in.
Robin:
Oh, he's making a judgment that I roughly agree with: that having talked to a
lot of people in the world over a lot of years, as I have, when he brings up a
suggestion for something that goes against the authorities' advice, people at
that moment bring up this authority-deference issue.
Agnes:
That makes sense, because they're not in a calamity at that moment. And his
advice, so far, is only to take this seriously if you're in a calamity. So
they're giving the right…
Robin:
I don't think calamity is the right word, but it's something important enough
to you that it's worth thinking about.
Agnes:
Right. But it's probably not important to the people he's talking to.
Robin:
No. So, I think in many of these contexts, it was important to the people he
was talking to. He talked about someone who had a startup, for example; he's
talked to several people who had startups, about their strategy, and his
advice was that they should try something out and test it. And, I mean, he
didn't mention it, but he is, like me, a cryonics customer, so he would share
that contrarian position on that topic, which in some sense is important to
everyone, or plausibly important to everyone. And his main research area in
his life was this AI risk area, which he thought was very important, and
important to everyone. And people sort of shut him down initially by saying,
if there were really a problem here, the authorities would have been all over
it, so you must be wrong. So…
Agnes:
I get that. Once again, as I say, this kind of rationality base rates thing is
more your thing than mine. But it still seems to me I'd want to know, well,
how often are people overconfident as opposed to underconfident? I grant he
can find me cases, maybe many, many cases, in which people are underconfident,
right? But I bet you can also find cases where people are overconfident. And
then is the idea here that he's going to give you meta-advice about how to
navigate that in the abstract, whether to be more or less confident? I doubt
it. I think it's going to be a matter of the specific case.
Robin:
So, to support your position: if we accept the description of this book as two
books, the first being “the world is broken” and the second being “therefore
you can agree to disagree”, I wrote a book review, as did Scott Alexander, as
did Scott Aaronson, and all three of us said that the first part of the book
is much better than the second. That is, the world is in fact often broken,
and that's an important thing to understand. But all three of us took more
issue with the second, connected conclusion, that therefore you are more
authorized to disagree with people.
So, Scott, for example, discussed how a relative of his, or, I mean, one of
the Scotts actually, was tempted to buy timeshares, and would go listen to a
timeshare pitch. And it sounded good. And then people would say, “Yeah, but
look at all the past times when people were fooled by this stuff and
consistently lost money; it looks like a pretty bad track record. Maybe you
shouldn't trust your judgment here in this case.” They were saying, “Trust
this wider judgment.” And that's basically the situation here: these three
reviewers at least agree that the world is in fact broken, and that's a strong
message, but how much that authorizes you to defy it in any one case is much
less clear.
Agnes:
I mean, I guess I would prefer to rephrase the first part as: the world has
some imperfections. Right? That's all he's argued for. But sorry, there's
another thing: the imperfections are somewhat stable. Right? So, there's…
Robin:
Systematic– and they have understandable causes.
Agnes:
But I meant to say stable, which is to say that they are not going to get
easily self-corrected, which is not the same thing as saying they're never
going to get self-corrected. Because every single one of his examples is one
that either did get corrected or, I predict, will get corrected. Like, people
are going to figure this thing out about seasonal affective disorder. Either
his was just a fluke case, which is totally possible, right, and this light
thing didn't actually cure her or whatever. Or it'll eventually get figured
out, and then we'll eventually have those special light bulbs; maybe we even
have them now. Or the thing with the babies: eventually, I think, it will get
figured out, just like the Bank of Japan eventually figured it out; it will
just take longer. But nonetheless, if these inefficiencies are in the system,
they could stick around for quite a while before they get corrected. That
would be what I think he's shown.
So, the world is imperfect, and the imperfections are somewhat sticky. That's
what the first part of the book says. How prevalent the imperfections are,
relative to all the ways in which the world works, there's nothing about that,
right?
Just like in the second part, all he says is that some people are
underconfident. Not how prevalent it is, like how much underconfidence there
is relative to how much overconfidence. So, to me, neither of those two theses
as such is that strong. The thing that's most interesting about the book to me
is the specific mechanisms of the stickiness of the imperfections, right?
Robin:
Right. I think once you see those mechanisms, you see the possibility that
there are a lot more problems than the ones he mentions, right? You see the
possibility that this is a much larger problem than these mere examples.
Agnes:
Sure, but I don't buy the basic inference in the book. Like, here's another
way to read it, a kind of psychoanalytic way. It's like there's a little boy
who wants to take his toys and go home. And he wants to say, “Yeah, I have a
good reason for doing that, because this place sucks.” Right? So, it's like…
Robin:
Yeah.
Agnes:
You might think, look, what you're supposed to do is take these systems, which
mostly are awesome and produce great results, and try to work within them to
improve them a little bit. Right? So like, you and I, we're stuck in the giant
system of academia that has tons and tons of flaws, right? And maybe one day
it will collapse because of all of its flaws. But until that day, we're in it.
Right?

What do we do? Well, we give students the best advice we can; we teach as well
as we can. We don't just opt out, right? And in fact, you might think a big
part of why people want status within a system, why there's such a quest for
status, is that once you have status, you do have some ability to change the
system.
Now, it's also true that the quest for status turns you into the kind of
person who's not going to change the system. So that's a big problem, right?
But it's possible to fight it somewhat. And I'm sure you see, as I see around
me every day, people who are trying to fight it and improve it, right? And
look, if somebody doesn't want to do that, if he doesn't want to improve the
system because he thinks it's too broken, even though he hasn't given us
statistics about, you know, how broken it is relative to the whole, that's
fine. I think you're allowed to take your toys and go home and think for
yourself and come up with theories.
And if you don't want to have a disagreement with me about that, that's OK
with me. But there's some weird thing where it's like: yeah, I don't want to
work within this broken system, I want to think for myself; and also, somehow,
I'm worried you're going to criticize me for that, so I want to produce this
justification that makes me immune to criticism for it. It's like there's too
much self-justifying.
Robin:
One of the reasons that I suggested this reading for us is that we have
previously discussed my many attempts to diagnose and prognosticate about
various big social problems.
Agnes:
Yeah.
Robin:
And so, this author clearly believes that there are big problems that could be
fixed. That is, there's a lot going wrong, and if we had better institutions
and mechanisms, we could get a big gain from that. So, he agrees with me in
that sense, but he doesn't go as far, numerically, as you or I would like, in
arguing how substantially broken things are.
And I would quantify it as saying: yes, things are slowly getting better, and
eventually maybe most of the problems will get fixed, but the rate at which
things are getting better is plausibly a factor of 100 less than it could be.
If we had better institutions, we would be getting better 100 times faster.
And the opportunity cost of that is enormous. That is, had we been improving
100 times faster from, say, a thousand years ago, we would be a lot further
along now. Or even from 10 years ago, right? If we had done a thousand years'
worth of progress in the last 10 years, then we would have solved so many more
of these things.
So now, you know, I haven't proven that to you, but I could give some
plausibility arguments. But fundamentally, if that's the level at which things
are broken, then there's an enormous potential payoff in figuring out how to
fix that, in looking at the larger structure.
Agnes:
That might be true. I agree that it's useful to bring that into connection
with this book, because in a way, right, if you take, like, his examples, the
parents and the… we should probably say that with this example, because some
people might not read the book. There are some babies who are born with some
rare disease, where if they get this special kind of formula, they're fine.
But if they don't get that kind, they die. And that special kind of formula is
like not FDA-approved.
So… but those parents: some of them did tons of research, and probably all of
them did some kind of research. And some of them did enough research to learn
this fact and go drive to Philadelphia, to the one place where you can get
this special formula. Anyway, those people are really motivated, right?
They're motivated to buck the system. And he was motivated, when his wife had
seasonal affective disorder, to, you know, reinvent the wheel, so to speak.
But most people, most of the time, are not motivated to do that. And it's
rational for them not to be; that is, they're taking the correct approach, at
least as far as he's argued, right?
And so now, suppose you're like, look, I have this recipe for how we can burn
this system down and have a new system, right? And you bring this to someone,
right? And then the question is, what's the rational response of that person?
And suppose they find your arguments convincing; I usually find your arguments
pretty convincing.
OK, problem number one: I don't feel motivated. That is, I feel like I don't
care. And I'm not even saying that's a rational judgment. I'm just reporting
that phenomenology. And I think part of it is that it doesn't really seem to
me that there's anything I can do about this. And it doesn't seem like it's a
problem for me. It may be a problem for everyone together.
And if you were in some way addressing me as part of everyone together, that
might move me. But addressing me as an individual, it sort of seems more like
what you're trying to do is get me onto your team of people who ineffectually
think that this would be a better way to do things. And I sort of don't see
the point of that.
Robin:
So, in the chapters where he has the three characters doing their dialogue.
Agnes:
Yeah.
Robin:
You know, he has a spokesman for what he calls a cynical economist explaining
to an alien all the processes that are causing these things to go wrong. And
the alien is presented as indignant or incredulous that things could possibly
be this bad. One of the mechanisms he talks about, which is easy to get your
head around, is the Overton window. He says, “Well, there's just a certain
window of policies you're allowed to talk seriously about as a serious person
in public.” And, say, the New York Times decides what those are, based on
their judgment of what you would be laughed off the stage for saying. And that
system sort of limits what kinds of things we can consider.
And he says meta-discussion, that is, discussion about how the system itself
should change, is outside the window. That's beyond the pale for serious
policy discussions. And he gives examples of how, when something is finally
let into the Overton window, it might take over very quickly, because a lot of
people thought it was plausible; they just weren't allowed to talk about it.
And he has hopes that, say, drug decriminalization will come within that
window sometime soon.
And so that's not only a concrete example of a problematic institution; it's
an example of exactly what you're talking about, where you're saying, well, if
other people were talking about this, then I might be motivated to join them,
but if you just want to talk to me privately, and it's not in the New York
Times, then I don't want to hear about it. I mean, that's the Overton window;
you're applying the Overton window to your own motivations.
Agnes:
But let me explain why not. So, you've argued with me about cryonics. And I
would say you've shifted me over; like, I'm at least 10% more likely to do it
than I was before talking to you about it. And the reason is that that's
something I could just choose to do. I can just choose to pay this money, and
then my brain will be frozen. I don't view that as outside the pale of
consideration. And I sort of think that if you wanted to propose anything that
I could do, and give me a good reason why I should do it, I'd be open to
considering it.
But instead, what you want to do is have me consider something where, as
you're beginning to try to convince me, I say to myself, “Suppose that Robin
were to fully convince me of this, and I were to become 100% persuaded of the
truth of this claim; what would change?” And I answer myself, “Nothing.” Then
I'm like, “OK, I don't care.” Right? So that's how I feel about these things.
So, what I'm saying is, because of the Overton window, as you say, I don't
think that my becoming convinced of this actually matters. Whereas my becoming
convinced that getting my brain frozen would be a good idea, that would
matter, because I could do it.
Robin:
So, one thing is that some of the institutions we might consider would be ones
that would break the Overton window obstacle; that is, we might be able to
overcome this Overton window obstacle via some of those institutions…
Agnes:
You mean, once we instituted the institution.
Robin:
Yes. But…
Agnes:
Not beforehand when we would need that in order to institute it.
Robin:
Well, so I mean, think of Eliezer's thing with the lights, right?
Agnes:
Yeah.
Robin:
If at that moment, he hadn't thought of the idea that he could just do it
himself.
Agnes:
Yeah.
Robin:
Then he might have had your reaction and said, what can I do? And so, the key
idea is that many people are motivated to figure out how they can improve the
world, and they are wondering what to do. And for a lot of these ways that we
might improve the world, the limiting factor is actually just doing
small-scale trials, like this small-scale trial. And that is the sort of thing
people can do. But they have to do it in part out of some altruistic
motivation, that they're trying to contribute to innovation that will then
spread farther.
So, I mean, you could participate in small-scale trials of things that, if
they worked out, could spread farther and then make a bigger difference. And
that's Eliezer's main recommendation as well: consider yourself able to
disagree enough to come up with something you could test. And he really
emphasizes that: find things you can test, and test them often, and even test
yourself by betting, right? He says, when you think of something, try to make
a bet on it, because that will really make you remember when you were wrong
and when you were right.
Agnes:
Yeah, so now it sounds like you're giving me advice for living in a certain
way. And at this point, I'd be like, well, let me look at the people who have
taken Eliezer's advice, right, including Eliezer. Let me look at how they live
and what giant contributions they've made to humankind, and see if that looks
plausible. And I don't actually know about him, right?
But he does say something towards the end of the book like, I want to know how
far this modesty approach has gotten people. Right? Like, have they gotten
great performance from modesty? And I wonder, well, how much performance have
we gotten from rationality? Like, if I imagine, I don't know, making bets on
whether they're going to find the Higgs boson, and making my own meal
supplements– it seems like concerning myself a lot with things that are not
that interesting or important, and wasting my time doing these experiments,
when I want to devote my life to the pursuits that are of significance to me.
And I also already feel free to think about questions that lots of other
people have thought about, in fact that people have been thinking about for
thousands of years, right? I feel like I have some kind of hack to get around
his problem, where I think it's OK for me to think about a question like,
“What is being?” or something, where so many people have been working on this
question.

I mean, surely, if you could make progress, somebody already would have before
you, and what kind of hubris– I guess I just have that hubris. So, I have some
giant, you know, self-hack or something where I don't have to go through all
this. I can just ask the questions that interest me and pursue those
questions, because I think they're interesting and important. I don't need an
excuse to do it.
Robin:
All right. So, at the end of the book, he talks about who the audience of his
book should be, and what could go wrong if it got into other hands.
Agnes:
Yeah.
Robin:
So, in some sense, his target audience might be defined, perhaps
tautologically, as the people who are too shy but relatively smart and
confident. For those people, who are relatively shy but also competent, if
they were to try to think for themselves more and try to do more things, that
would go well. The people who are not too shy don't benefit much from his
advice to be less shy; it might even push them too far in that direction. And
the people who are not smart or confident enough, their trials won't go very
well. And so, by this description, well, if he selects his audience right,
he's guaranteed to help.
Agnes:
Right. There should be a test [0:45:51] [Phonetic] at the beginning of the
book, right? And I would fail it, like, on the first question, and then I
would know: don't read this book, you're already going too far in the other
direction. So, there should also be a book for people like me, like: stop
thinking you can answer all the big questions; be modest.
Robin:
So, we started out talking about disagreement, and then we went on to talk
about inadequacy.
Agnes:
Yeah.
Robin:
But let's come back to disagreement. So again, the key question is, when
should you feel free to agree to disagree, or foresee to disagree? And there
is this modesty argument he outlines at the very beginning of the book, and
then in more detail later on, which basically says: how could you know that
you are better than someone else at thinking about something? If they're a
priori equally qualified to you, and you come to different opinions, then what
justification do you have for picking yours over theirs?
And he is trying to resist that argument. And so, he offers a variety of
counterarguments, like one that you mentioned: that merely abstractly trying
to disagree is the wrong way to do it; you should do it organically and
authentically, by thinking through all the details. But I think his strongest
arguments are the ones that just give particular cases and say, “Look, most
everybody disagrees with you on this, so are you really going to go with
them?” Like, for example, he says, “Look, a third of the time we're sleeping.
When we're sleeping, we don't usually know we're sleeping.”
You ask someone when they're sleeping, are you sleeping? They would say no.
And so, you can't at this moment be more than two-thirds confident that you're
not sleeping. And of course, most of us are going to say, “No, I'm a lot more
than two-thirds confident that I'm not sleeping. I'm pretty sure I'm awake.”
And then he'll say, “Well, see, you're willing to disagree.” Or he says, “You
know…”
Agnes:
I mean, that's how most people think. The way I think is: hey, Descartes gave
a really good argument that we should be pretty worried about that. And I'm
willing to engage that argument. And in fact, I hunt out people who are, you
know, willing to keep pressing the skeptical position, and if they won't, I'll
take that position.
Robin:
OK. But… do you take that two-thirds position right now, that you might be
sleeping right now?
Agnes:
I think I'm awake right now.
Robin:
OK.
Agnes:
But…
Robin:
OK. But at what confidence? Is it more than two-thirds?
Agnes:
So, this is where– that's just not a natural way for me to think about myself.
Like, if you want to know what it would take to push me to assert that maybe
I'm sleeping right now, it wouldn't take much. Right? All you'd have to do is
say, “I'm sure Descartes was wrong,” and I'll be like, “OK…”
Robin:
So clearly, the sleeping example is not the one for you.
Agnes:
OK.
Robin:
The next example, though, is God. So, he says, look, you know, clearly most
people in the world, and certainly even more people in history, have believed
in God. So therefore, you…
Agnes:
OK. But I do believe in God. So that might not be the one for me either. So
maybe let's go to the next one.
Robin:
Well, those two are the more persuasive examples for me. I mean– so I might
say, you know, the religion issue isn't whether there is some abstract God out
there somewhere doing something. I would think the religion issue is more: do
the people around us who claim to have some direct connection with God,
through prayer or God sort of telling them things– is that a real, actual
communication phenomenon, such that it's an actual explanation for a lot of
things we see in the world? That people are talking to God, and He's answering
and doing things for them? That's the position on which I would be much more
skeptical.
I'm happy to grant that in a vast universe, there could be vast powers out
there. And maybe they even created the universe, or even created our life on
earth or something. But the more skeptical claim, the one I have much more
trouble with, is the claim of this huge number of people that their actual
lives are being affected daily by direct communications and interventions.
Agnes:
Well, I mean, those are two different claims, right? Like, I think my life is
affected by communication with God, in that I try to communicate with God, and
if I were to stop doing that, my life would be different. Right?
Robin:
But you don't get communications back; that is, you're not actually getting
messages back, right?
Agnes:
Not that I know of. And, like, God might well intervene in my life, but I
think it's very hard for me to understand God, or to know what they do. So…
Robin:
So, your position sounds much more reasonable to me. But the position that
people think they are seeing reliable evidence of interventions, that's the
position I would be skeptical about. And there are a lot of people like that.
I mean, even a majority of people will say…
Agnes:
Yeah, I mean, I guess I wouldn't dismiss the possibility of that if I met such
a person. I've never had a conversation with such a person about that very–
no, I had one. Once, Liz Bruenig talked about a sort of religious experience
she had, in a Night Owls that I did with her. I found it pretty compelling,
actually. But… so I guess, for her, it wasn't so much about intervention as
about a kind of awareness, right, a kind of awareness of the presence of God.
Robin:
Well, that would be caused in part by there being a God, you know, making
themselves available to be aware of, right? I mean, they would participate in
that process somehow, right? That would be an intervention.
Agnes:
Right, somehow.
Robin:
Right.
Agnes:
But in any case, I guess I don't feel… to me, that's not like beyond the pale
or something.
Robin:
But you still… I mean, I would still disagree, in the sense that if you say…
Agnes:
Yeah. Yeah.
Robin:
You know, do most people have substantial evidence that they themselves are
seeing interventions of that sort? I would have to say no. That sounds quite
unlikely.
Agnes:
Right. Or, yeah, I mean, I guess I feel less sure that that's so unlikely. But
let's take a thing where I just reflexively believe something without thinking
about it. Those are always the best cases, the ones where you don't really put
much effort in. Like, I think that vaccines make you less likely to get COVID,
right? And I have put, like, no effort into thinking about that. I'm just
trusting, you know, the people around me. But there are lots of people who
don't believe that.
Robin:
Right.
Agnes:
So then does my belief go down? Am I, like, slightly less confident? And I
think the answer is no, not at all.
Robin:
Or, most people think their nation is substantially more moral than the rest
of the world, or more justified in its actions on the international stage, in
terms of morality or something. That would be another sort of claim that I
would find implausible, but that still most people do seem to believe.
Agnes:
Right. I mean, but that's a little like the claim that most people think their
family members are better than other people, which I'm very sympathetic to,
because I think that too.
Robin:
Right. But it's hard to believe it's true on average.
Agnes:
Right. Exactly. It's impossible that it’s true on average.
Robin:
Right.
Agnes:
But I'm sympathetic to the claim. Like I get how that people think that.
People don't like, like, so I don't think people are crazy. I guess I just I
don't think people are crazy. I don't even think like, I guess I think it's
possible, like the vaccine case, I less know what to think about. The other
ones are more comfortable to me. Like, how can people…
Robin:
But what we're looking for, for this rationality-of-disagreement issue, is a
case where, in some sense, most people disagree with you, and you can't really
claim some authority relative to them, but still you're going to disagree with
them. And so, that would be the sleep example or the evidence-of-God example
that they might give. But if you don't accept those, then of course you aren't
finding counterexamples to what he calls the modesty argument.
Agnes:
Right. But look, I accept that in the vaccine case, as I said. I think, for
me, the problem with this argument is that you slide back and forth between
disagreement and difference of opinion, in ways that mark, for me, a very,
very significant difference. So, if most people have a difference of opinion
with me about something, I'm virtually indifferent to that. But if somebody
disagrees with me in the sense that I'm having a conversation with them, and
they think the opposite of what I think, then I have to take them seriously.
And I have to argue with them, and I have to convince them or be convinced or
whatever, as long as the argument…
Robin:
So, the theory that I worked on is in terms of mutual awareness. It doesn't
require that a conversation actually starts. But it requires that you each be
aware of the other person's opinion, and mutually aware of these awarenesses,
etc. That's what common knowledge means. And so, the question isn't, are you
arguing at the moment with somebody who believes they have evidence of God
intervening in their lives? It's, are you aware that they think that, and are
they aware that you think what you think? And is there a mutual awareness such
that, if you started the conversation, you would both start off with it? You
wouldn't be surprised to find out what they think, and they wouldn't be
surprised by what you think, and then you would start discussing.
Agnes:
So let me just tell you something that's not going to have a big persuasive
impact on you. Years ago, I read some of this disagreement literature, not the
part that you contributed to. And my instinctual response– and this is a
non-argument, it's just my instinctual response– was: all of these people have
framed the problem wrong. It's like people going out on a date, but neither is
willing to really give much of a signal that they're interested in the other,
so they're just doing these little signs. And then you're like, yeah, that's
dating or something.
Like, there's such a thing as arguing. That's the real phenomenon, arguing,
where people are talking to each other, and the one guy says, “No, you're
wrong because of this.” That's the thing. But somehow people don't want to
study the actual thing. They want to study this weird, hypothetical case where
two people sort of know of each other's views but don't actually have a
conversation about it, and how would they start out. But the real phenomenon
is arguing, and it's essentially stretched over time. It's important to the
phenomenon that people change their positions over time.
And they do, in fact, change. In every argument I've ever seen, people say
different things over time, so their position is changing. It's a dynamic
process, right? And you're talking about this static thing and calling it
disagreement. There's this other thing, which is where our rationality really
sits: the dynamic process of disagreement. And if you want to understand
disagreement, that's the thing you're supposed to be talking about.
Robin:
So, we’re running out of time for today, but we can at least set up the issue
here for our future conversation.
Agnes:
OK.
Robin:
So, my understanding from talking to you previously is that you have
substantial reservations about the usual decision-theoretic framing. But the
usual attempt to understand argument, even in philosophy, as I understand it,
is via decision theory. That is, you imagine agents who have actions they
need to take, and they have beliefs and values. They combine the beliefs and
values into actions. Beliefs are updated on information. And beliefs can also
be updated on the information of talking to other people, arguing with them,
reading about arguments, knowing what other people think, etc. I have
contributed to that sort of literature, using a standard decision-theory
framework to analyze how someone will update their beliefs upon learning that
someone else has a different belief.

So, you seem to be planting the flag of saying that whole framework is
misdirected, or inappropriate, for the analysis of disagreement.
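[A toy illustration of the framework Robin sketches, ours, with made-up
utilities and probabilities: beliefs and values combine into an action via
expected utility, and updating the belief, say on someone else's stated
opinion, can change which action is chosen.]

```python
# Beliefs (a rain probability) and values (utilities over outcomes)
# combine into an action; a belief update can flip the action.

UTILITY = {  # value of each action in each state (made-up numbers)
    "umbrella":    {"rain": 1.0, "dry": 0.6},
    "no_umbrella": {"rain": 0.0, "dry": 1.0},
}

def best_action(p_rain: float) -> str:
    """Pick the action with the highest expected utility."""
    def eu(action: str) -> float:
        u = UTILITY[action]
        return p_rain * u["rain"] + (1 - p_rain) * u["dry"]
    return max(UTILITY, key=eu)

print(best_action(0.2))  # "no_umbrella": rain looks unlikely

# Learning that a well-informed friend expects rain is information;
# after updating the belief upward, the chosen action flips.
print(best_action(0.6))  # "umbrella"
```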
Agnes:
Well, in that framework, you've talked about everything but conversation.
It's like: look, you could find out what other people think, not because
you're talking to them, but because somehow a little slip of paper falls
down, you know… and oh, I learned what Robin thinks, and now I can update on
that.
Robin:
No, but when people talk, I mean, we do analyze conversation in these
theoretical frameworks. Conversation is analyzed as alternating things people
say, where each time they say something, they are picking some symbols out of
a space of possible symbols. And when you hear those particular symbols, you
update on their mental state, having learned that they said those symbols
rather than others.

So, it is the standard statistical-inference framework, in the context of
conversation. And in fact, there's a lot of analysis of conversation in
exactly that form. People have calculated how many bits per word there are in
a typical conversation, that is, how much you can anticipate what the next
word will be out of all possible words. And that framework is quite useful.
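[A rough sketch of that bits-per-word accounting, ours, on a made-up corpus:
per-word entropy under a simple unigram model. Real estimates condition on
context, which is what makes the next word partly predictable, and come out
much lower, but the bookkeeping is the same.]

```python
# Average bits per word under a unigram model: H = -sum p(w) * log2 p(w).
import math
from collections import Counter

corpus = "the cat sat on the mat and the dog sat on the cat".split()
counts = Counter(corpus)
total = sum(counts.values())

entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"{entropy:.2f} bits per word under the unigram model")
```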
Agnes:
So, I mean, what I'm thinking about when I'm speaking is how to persuade you,
right? What I can get you to think, but also what I can get myself to think.
Robin:
Right.
Agnes:
And I have a kind of vague goal of being guided by the truth, while
maintaining, for as long as I can possibly maintain it, the stance of
disagreeing with you, as a tool to that end. So, as I said at the beginning,
I'm trying to disagree with you. The question isn't just whether it's
rational for me to disagree with you; the thing I'm doing is disagreeing with
you.
And so, it's not that I'm trying to figure out my beliefs by bouncing them
off of you, such that the disagreement would be an accidental upshot. I'm
actively maintaining it. I think we should stop soon, and there's something
else I wanted to say. So, I'm going to let you respond, and then I'm going to
say everything I want to say.
Robin:
I want to acknowledge that conversation is complicated, but we should
distinguish the beliefs you have in your head from the words you say, the
stances you take, the conversational moves you make, and your overall
conversational strategies. Surely, conversational strategies consist of
anticipations of the moves you will make, the stances you will take, and the
responses other people will have to all that. But that's nonetheless not
inconsistent with your also having beliefs, and we could talk about those
beliefs.
Agnes:
Yeah, I think the beliefs are [1:00:19] [Indiscernible] uninteresting. That
is, people believe all sorts of random stuff that's not very consistent. The
more interesting and more stable thing is what they will say, or, even more
so, what they will write.
Robin:
So that's certainly in defiance of the usual decision-theory framework, which
says that when you leave the conversation to go make decisions, it will be
the beliefs that you take with you. That's what will influence who you vote
for, what job you take, and all those things. And so, when we're trying to
influence somebody's future actions, we're trying to influence the beliefs
they will have after the conversation. But anyway, your other point.
Agnes:
My other point is just that I wanted to note that, in this last part of the
conversation, you are playing the role of systemic policeman, where you say,
“Look, we have a system, we have a usual framing for dealing with this
problem,” and I'm trying to do the do-it-yourself approach. So this
recapitulates the framework of the book.
Robin:
I mean, I think that whenever you're defying the usual system, you should
notify people of that fact, and I try to do that consistently. That is…
Agnes:
You often notify me of it.
Robin:
Indeed. So, if you know that you are defying the usual wisdom on something, I
think you should still pursue your argument and try to persuade people, but
you should acknowledge that fact to people; you shouldn't try to trick them
into thinking that what you say is the usual view if it's not. And the fact
that something is a contrary view sort of sets up what kinds of
conversational moves or arguments you might make regarding it. You might
expect a criticism of the status quo, if you're going to be arguing for an
alternative; that'll be one of the things you expect.
Agnes:
So, there's a certain kind of deferential act that you think has to be done,
where you pay your respects to the status quo.
Robin:
It's more the fundamental argument rule, which is: when you argue for
something, you shall identify some of the best arguments against it and tell
people what those are too. The honest way to argue for something is to give
the arguments both for and against it. So certainly, if somebody is not aware
that your view is the contrarian view, then you should tell them; that's
something that would inform them, and they might change their mind.
Agnes:
Not necessarily against it.
Robin:
In most people's minds it is, and therefore, again, you shouldn't necessarily
just give the arguments you think are good arguments. You should also give
the arguments that other people might think are good but that you think are
bad, and then tell them why they're bad.
Agnes:
Then you'd have to talk for a super long time before you let the other person
talk. So I think that's a bad plan. I think you should give your best argument
and then see what the other person says, and not think that you can anticipate
what would be the best arguments for them.
Robin:
Well, it's about the timescale. Of course, you can't give all the arguments
in a short time, but in the course of a longer conversation you should at
least point to them and give someone the chance to pursue them. So, even at
the end of my talks on prediction markets, I often give a list of objections.
I don't go through them one by one, but I say, “Here are some common
objections, and you might pick one of these and ask about it.”
Anyway, it's been nice talking.
Agnes:
Yeah.
Robin:
Till we meet again.
Agnes:
OK.