Good morning Agnes!
Today I’d like us to talk about paternalism. I think you’d be good at thinking about this.
So here’s the question: Our society has lots of policies whose main
justification, at least as given, is to protect people from themselves. Pretty
much all professional licensing—you might hire a doctor or a lawyer or a
contractor, and we limit who you’re allowed to hire in those roles and who’s
allowed to advertise for those roles. Securities are regulated—that is, we
limit which stocks or investments you’re allowed to buy, and which ones people
are allowed to advertise to you. Drugs, medicines: We limit what you’re allowed to
buy. Building codes and regulations limit what kind of buildings you’re
allowed to build and where, and that sort of thing. We even have censorship
sometimes: what kinds of books, movies, etc., you’re allowed to see.
Of course, we also have this in other areas of life, right? By analogy, that
is, paternalism is supposed to evoke how parents limit their children’s
behavior. So you might not just say to a child, Don’t go into the street, it’s
dangerous; you might prevent them from going into the street and say, You’re
not up to the task of making that choice yet. And as teachers, of course, in
some sense we have a syllabus where we require things, not just recommend them.
So across a wide range of areas in
life, including in government, which many people especially, you know, attend
to, we don’t just give people advice; we limit what they can do. And— But we
do it for their own good. And so the key question is, Is this sufficiently
respectful of the people themselves? You could instead just, you know, put
certifications on things. So sometimes on a restaurant you might have a posted
code that just says its inspection level, though you also have rules that if
they break certain rules, the restaurant is not allowed to be open and can’t
sell things. And so this is sort of the beginning puzzle.
Why don’t we just give advice?
Okay, so I mean, maybe to start in the context where paternalism seems
appropriate: With children, we often—like my children relatively often put
themselves in a position where they’ll say something like, If you do this,
I’ll never play another video game again.
And now I know they don’t actually want to bind themselves in that way, right?
That they’ll regret having made that agreement. So relatively often they offer
me agreements that I believe that in the future, they’ll regret.
And— Or they offer me deals where I think they can’t follow through on one
side of the deal. And so I just don’t let them make a deal. And I say, No, you
just have to do this. Right? And now is that disrespectful of them? I think—
Well first of all we should ask, Are you doing this for their benefit or
yours? So you might think that they are hurting you or the rest of the family
by trying to make these deals they can’t follow through on. Or are they just hurting themselves?
I think in most of these cases my attention is first and foremost on them,
though usually there’s also some harm to their brother or the family in
general, yeah.
Okay, so that—
For their own good.
So then the hypothetical is to ask, What would happen if you try to explain to
them the harm that you see resulting from this behavior?
Right. And I do. Like I’ll say, But you’re going to want to play video games
again. And sometimes I’m not sure I’m right. You know, one of the deals was if
my eight-year-old— if he could get a dog he would never play another video
game again. And like, I think it’s unlikely that the dog would satisfy him in
that way. You know, and he understands that him not playing video games is of
high value to me, right? So that’s like a thing he can trade. And I don’t feel
sure about it. And also I have other reasons for not wanting the dog.
So it’s complicated.
But I do think that he won’t believe me, like, because he can’t imagine how
boring the dog is going to be after like the second week.
But you’re an adult and you have great care and interest in him, and you know
a lot more things than he. So the question is, Why won’t he believe you?
Yeah, it’s interesting, because like, you know, my oldest, like when he was
five we’d go to the ice cream store and he’d just be like— I’d be like, What do
you want? And he’s like, Which of these am I going to like the best? And I
could just tell him and he’d order that, you know? So he’s like very trusting
in that way.
Well, there you go! But not about dogs.
Well that was my oldest. He, I think, actually is quite generally pretty
trusting, even now. But I guess I think that he must think that I don’t see
things from his point of view, right? And that I can’t imagine how much joy
the dog is going to bring him. Plus, I think he thinks I don’t appreciate
animals and how great they are. And so probably I’m not weighting it
sufficiently high.
Okay. But now he must realize that you know that you don’t know everything, so
he should be expecting you to take into account your awareness of your
limitations when giving advice to him. And I presume you do take that into
account: Sometimes you are not sure enough of your advice to give it, or
certainly to put it into a rule here. So is he failing to credit you enough
with your uncertainty? Or are you in fact insufficiently uncertain?
I think that there’s a strand here of him suspecting that I have other
motives—like, You just don’t want me to get a dog.
So that’s a strand. But that doesn’t capture all of it.
But it’s an important element to ask, How far would a small degree of
pollution of your motives here produce a level of distrust in him?
If that was very potent, it could be that a small degree of distrust about
your motives would lead to a large degree of not listening to your advice.
Yeah, that’s quite possible. But I think there’s something else, which is
something like, you know, there’s stuff that I know that he doesn’t know. And
the kind of mental act of him making room for that possibility doesn’t
sufficiently compensate for the reality of the difference, right? So that it’s
a little bit like if, you know, the Grand Canyon is big or something, right?
Just how big.
And you might even know, like you might even have measured— you know, have
measurements or something. But still you might show— I’ve never been to the
Grand Canyon, but I can imagine that I show up at the Grand Canyon, and I’m
like, Whoa, it’s so big! Right?
And now like I knew it was big, right? But there’s a way in which when you’re
faced with the reality you learn something, right? So I feel like there’s the
thing— The fundamental thing that he’s missing is the sort of reality of the
actual stuff that I know. Him merely making room for the likelihood that
I know more than he does isn’t sufficient.
So I’m an economist, and I spent a lot of my career doing formal
game-theoretic, information-theoretic models of these sorts of processes. And
in our models people take all the information they have and they do as much as
they possibly can with it. Including, you know, when somebody else gives them
advice, trying to think through, you know, their biases, and take that into
account. So one question about this is, Do we have to move out of that
space of models to account for the phenomena here? Or will we be able to
understand these sorts of things within that space of models? So within that
space of models I will just make the statement that if the advisor is
completely trusted by the advisee in terms of having exactly their interests,
then the advisor can simply give advice and the advisee will simply take it
into account, and all is well.
So you know, things that go wrong— As soon
as the advisee starts to perceive that the advisor has somewhat different
interests, then, you know, as a continuous function of that difference the
advisee listens less to the advice. And you know, the bigger this difference
gets, the more the advisor might prefer to just enforce their advice rather
than give it through recommendation. And in fact this turns out to have been
the basis of my job-talk paper 22 years ago, when I applied to get a job here
at George Mason, that I was working out a model of that.
And the key
story here is that, for example, if you’re a child who’s living at home and
your parents, you know, tell you that if you, you know, go out and party with
the wrong sort of people and the wrong sort of drugs, bad things could happen.
You know, you may expect they’re exaggerating. And therefore they may need to
kind of put a limit on you. And then interestingly, as soon as you’re living
on your own and you think about, Okay, so I want to go to one of these
parties, you can pause a bit more. And you realize, Well, if I make a mistake
here there’s not somebody correcting me. I could be, you know, at risk.
In some sense, when you have somebody who’s protecting you, you’d be more
inclined to take risks, knowing that if you move too far, they would put a
limit on you. And if there is nobody there, then you don’t. So the key idea is this:
because I could ban something—like, I could tell you not to go to the party,
and just enforce that—if I instead warned you about, This is a really bad
party, and these are really bad people, but I gave you the option to do it
yourself, you would listen to me and hear that I didn’t ban you. I didn’t
forbid it. And you would use that as information to say, It can’t be that bad.
So in fact my colleague Bryan Caplan told the story, you know, soon
after I gave the job talk long ago, that he had just bought a house. And when
he bought the house they asked him, Do you want to do a radon check? That is,
check for radioactive gas coming up from the ground. Because in some places
it’s high, and then if there is, you want to put in air-pressure things to
push out the radon gas so it doesn’t seep into your house and give you cancer.
So he said, Is this required? And they said, No, right after having told him
twelve other things were required. And he
said to himself, Well, you’re not requiring this one so how bad could it be?
And so he inferred that even if they had strenuously warned him, We really
recommend you get the radon test, the fact that they didn’t require it
probably meant it wasn’t that bad. And so this would be my story about how
some kinds of paternalism happen: because you can stop them, when you merely
recommend, in some cases they just can’t believe it’s that bad.
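The inference in the radon story can be sketched as a small Bayesian-updating simulation. All the numbers here are hypothetical: severity is assumed uniform on [0, 1], and the regulator is assumed to ban anything above a fixed threshold, so merely observing "not banned" truncates the customer's prior and lowers their expected severity.

```python
# Illustrative sketch of the "not banned, so how bad could it be?" inference.
# Assumptions (invented for illustration): severity is uniform on [0, 1],
# and the regulator bans anything with severity above a threshold.
import random

random.seed(0)
N = 100_000
BAN_THRESHOLD = 0.7  # assumed regulator policy: ban severity > 0.7

severities = [random.random() for _ in range(N)]  # draws from the prior

prior_mean = sum(severities) / N

# Observing "not banned" rules out the worst cases above the threshold.
not_banned = [s for s in severities if s <= BAN_THRESHOLD]
posterior_mean = sum(not_banned) / len(not_banned)

print(f"expected severity before observing anything: {prior_mean:.2f}")
print(f"expected severity given 'not banned':        {posterior_mean:.2f}")
```

So even a strenuous warning gets discounted: the mere absence of a ban itself carries reassuring information, which is the mechanism the radon anecdote illustrates.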
Then it seems like if you somehow had a policy of never preventing things—
That would change the whole dynamic.
And in fact, in my analysis—it’s a one-dimensional analysis and so there’s a
one-dimensional spectrum of like, how bad things could be, and then what
action you take—in that one-dimensional analysis, there are two cases: one
where the bias the advisor has relative to you is to want you to do more of it,
and the other to do less of it. So if on average the advisor would want you to
do more of something—like exercise, say—and they can prevent you from doing
some kind of exercise—say running in the street—then the fact that they can
prevent you makes you and them both worse off. Because the times when they
prevent you are sort of producing less of the activity than you would get on
average if they couldn’t prevent you.
So there’s some spectrum of how
much you do it. The assumption is the prevention just moves you all the way to
one end of the extreme and just says, No, nothing. And that is worse for both
of you. However, if, you know, on average they would want you to do less of it
than you might do for yourself—say, stay up late watching TV—then this is no
longer true. I mean it’s not necessarily better, but it’s not provably worse.
Hmm. I mean, it seems to me that in terms of the question of like, How much
can this be explained using this framing? Like it seems like with
children—and I actually think with adults too—a really big interest that is at
stake here is the child’s interest in making decisions by themselves.
That is just one of the interests that they have. And I think that that can be
shared with the parent diachronically. That is, of course, the parent does
have an interest in the child’s eventually coming to be in a position to make
decisions by themselves, right? But at a given time, right, the parent like
doesn’t want the child to completely make the decision by themselves. They
want the child to make the decision within certain constraints, right? And so
in terms of, you know, you were saying this is a function of how much the two
share an interest. It seems to me that that interest—the child’s interest at
the time in making the decision completely on their own terms, which, as I say,
different children differ in; I think my eight-year-old has a much
stronger interest in that than my other two children, right? And the kids who
have it will be described as obstinate or stubborn or
whatever—that’s a really important preference, right?
And just to be clear, there are many parents who have a stronger preference
for running the kids’ lives their way.
So you might say that both sides have degrees of that element.
Right. So it seems to me that like mostly what happens with kids is this sort
of barrier of, When would you be prepared— How bad would it have to be in
order for you to actually prevent them from doing it, sort of shifts outward
as the kid grows up. You know like for a little kid, you’re basically going to
control everything they do. And when they’re like, you know, a teenager, you
would only prevent them from doing it if it was really, really bad. And so if
you don’t prevent them from doing it, like that gives them— It doesn’t give
them as much information, right? And over time they get less information out
of the fact that you’re not preventing them, because it still could be pretty bad.
Right? So I guess I think the bit that seems not really well able to be
captured is this interest in deciding for oneself.
Well I mean, actually, in the formal math of this model that’s a trivial thing
to include. You just, you know, in the choice between, you know, restricting
the behavior and letting them make the right choice, you just add a value for
letting them make their own choice. And that just shifts the numbers over a
bit, but the model works through just fine.
Right. I guess I think the sort of strange thing about that is that it’s not
exactly, like, this divergence of your interests, right, on this point, is not
like— It’s one where it’s not a sort of function of how much they care about
you or something.
But you were saying this is the same interest. That is, you and they both have
an interest in them slowly making their own decisions.
Oh, right. But that’s not the problematic one. So there’s that interest which
you can share, and there’s an interest that you can’t share with them—which
is, insofar as you’re, you know, inclined to give them advice
and prevent things and all of that, their making the decision
completely on their own. Right? So like my eight-year-old has a very strong
preference for making the decision completely on his own.
So you’re saying you think you disagree about the strength of your preference
for them making the decision on their own?
No, I think that my preference for him making the decision on his own is not a
preference for him making the decision on his own right now. It’s a preference
for him learning to come to make decisions on his own.
Okay, but that’s pretty—
So we don’t share the preference because he isn’t—
They’re really close.
I think they’re actually really far, because I think—
If you want him to learn to make decisions on his own this case is a good
example of a place he could learn to make decisions on his own. I mean, well
why not? Why not now?
So you might— So I mean, he might say that, okay? And what I might say is,
Well part of what it is to make decisions on your own is sort of to learn how
to take in the relevant sorts of information.
Right? And suppose that you are— You have a kind of like, very narrow and very
blinkered worldview, and suppose that I— if I leave you alone, you’re going to
learn to make decisions that are in some sense going to be like a function of
these very few factors, right?
And so like I want you to make decisions that are more richly informed. Right?
And for me, that’s part of what it is for you to like, you know, be in a
position to make decisions on your own. Right?
Right. So this is a way in which you think their decision-making is defective
or not fully developed, right? And so that you are not trusting their
decision-making on the basis of this not-fully-developed system.
And they are presumably aware that their system is not fully developed, and
they would be aware that you would know things about the lack of development,
and then you would include those things in your advice. So still there is the
question, Why not just give them the advice: Hey, I know this seems like the
best decision to you at the moment, but I know things you don’t about how your
development hasn’t fully matured. And on the basis of those things I’m telling
you, this is not a good idea.
If their point of view is, I place a super high value on making this decision
on my own right now, right? Then your advice is falling on deaf ears.
Okay, so let’s move away from the child.
But we want to practice the ideas here in the context of a child because it’s
a familiar context. But now we’re talking about the adults.
In a modern society, right? And we’re talking about, say, a drug regulator,
you know, telling people, you know, you can’t buy that drug, or telling, for
example, a person who wanted to be a subject of a virus-vaccine challenge
trial that no, they may not do the trial. Now we’re talking about adults who
presumably are about as developed as they’re going to get, and we’re
talking similarly compromised, perhaps, or developed or under- or
super-developed regulators who also have all their flaws. And now we’re
asking, you know, the rest of us to endorse somehow this restriction whereby
the regulator will tell the customer what drugs they’re allowed to buy.
Yeah. So there’s a weird way in which you might think adults get less and less
developed over time. It’s the opposite of kids. But I mean, historically— So
take medicine, right? Suppose you were dealing with medicine 100 or 200 years
ago. Like, there’s a lot fewer options, right, that someone might be in a
position to choose among, and they— maybe they have a local doctor or
whatever, right? And the local doctor can explain to them the various choices,
which are not many, right?
To the extent that the doctor understands them, of course.
Right? Exactly. To the extent that the doctors understand them, which is not
very much, right? So there’s not much information at play in the situation,
right? Relative to much later, right? When they’re— Like as time goes on, as
there’s more and more medical research and medical knowledge, as medicine
specializes more and more, right, there’s like tons of information. And so you
might think that the project of acquiring and evaluating the information for
yourself becomes less and less feasible, right? And so that even an adult
human being is like a child relative to that information set, right? Where he
would have been like an adult 100 or 200 years earlier. And so the demand for
regulation should increase over time. And we should become, as knowledge
increases, more and more paternalistic about, say, medical regulation.
So this is a key common framing that I’m going to challenge. And we’ll see if
you buy my challenge, but—
But I accidentally hit upon a common framing!
That is, you know, if we say that the difference between the advisor and the
advisee is merely the amount of knowledge they have then we might say we only
want advisors to give advice when they have a lot
more knowledge. And not when they have just a little bit more knowledge,
perhaps. But this whole analysis that I was talking about is actually
independent of that difference. All that really matters for the analysis is
that they know something more that’s substantially relevant, and not how many
pieces there are or how many years it would take to learn all that, etc. I
mean, the key issue is I give you advice, and do you believe me? Versus I just
make you not do it.
And so the key point is, when I give you advice you
can take into account your estimate of how much more I know than you. And so
as long as we agree, roughly, on how much more I know than you know, then your
willingness to take advice should in principle go up, as you realize that I
know a lot more than you do. And so the key puzzle here is, Why don’t I give
you advice instead of telling you what to do? And in—at least in something
close to a rational-agent model—the key issues have to be either that the
advisee just does not trust the advisor to take into account, you know, their
issues, or not to have other agendas; or that the advisor does not trust the
advisee to, you know, respect their authority and their information, their
knowledge, or their ability to take advice into account, or not to be too
arrogant and proud about it. It looks like
there’s disrespect going on on one or both sides in the situation where
somebody gives advice and the person doesn’t sufficiently use the advice,
which forces the advisor to instead force the advice.
So here’s a thought—I don’t know if this is true, but maybe you’ll know—you
might think that the greater the knowledge differential between two people,
the harder it is for the more ignorant one to grasp just how much the more
knowledgeable one knows. So my baby has much less of a grasp of how much more
I know than they do than my 17-year-old has, say.
So I mean, this is invoking a systematic bias. Now, you know, our simple,
straightforward attitude toward biases would be that, you know, when you’re
trying to estimate a thousand different parameters and you have limited data,
you know, you’re just going to overestimate some of them and underestimate
other ones. And that’s just going to be the nature of it. But maybe averaging
over a full thousand of them you’re about right, say, in terms of sometimes
over- and sometimes underestimating. And that’s the sort of situation we should
expect, and there’s just— And if it’s changing rapidly, say: Say you get data
on these things, but at the same rate at which new data comes in the world
just changes, and your old data isn’t very relevant. So now you’ve got all
these parameters you’re trying to estimate and not so much information, and
again you’re gonna have the same thing.
But the more that a situation is
just repeated over and over again, and the more you just keep estimating the
same parameter where that— you know, your data is relevant for that, the more
we expect you to get roughly calibrated at that. Right? So here is the
posited bias: for people who know a lot more than I do, how much more than
I do they know? And that’s a pretty robust situation across social contexts
and people and history. You would think that parameter doesn’t change that
much. It’s harder to believe people would just be so consistently wrong about
that one, and in a particular direction we can predict.
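The calibration point just made—that estimates of many parameters err in both directions but roughly cancel on average—can be illustrated with a toy simulation. All the numbers are hypothetical: a thousand parameters with invented true values, each observed through symmetric noise.

```python
# Toy illustration: estimating a thousand parameters from noisy data.
# Each individual estimate is off, some high and some low, but with no
# systematic bias the errors roughly cancel on average.
# All distributions and noise levels here are invented for illustration.
import random

random.seed(1)
true_values = [random.gauss(0, 1) for _ in range(1000)]
estimates = [t + random.gauss(0, 0.5) for t in true_values]  # noisy observations

errors = [e - t for e, t in zip(estimates, true_values)]
mean_abs_error = sum(abs(err) for err in errors) / len(errors)  # typical miss
mean_error = sum(errors) / len(errors)  # systematic bias, near zero

print(f"typical size of an individual error: {mean_abs_error:.3f}")
print(f"average error across all parameters: {mean_error:.3f}")
```

Individual misses are sizable, but the average error hovers near zero: wrong about each parameter, roughly right overall, which is the "calibrated on average" situation described above.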
Now, you might
predict this for any one person: some people would underestimate it, some other
people will overestimate it, you know, on some topics.
But your story here is, No, everybody has good reason to think that everybody
underestimates, you know, how much more people know when they know more. Which
is kind of harder to believe. Another— So we have a lot of interesting data we
can bring to bear on the issue about this, you know, the paternalism. So far
we’ve just confronted the fact that it exists. But we can point to other
patterns. In particular, we often ask, Who will we
let—give an exception to? Who will we allow to make their own judgments? And
you know, that gives us some clues about what’s going on. So for example for
investments, we say that ordinary people like you have to be limited to
certain kind of regulated investments that are available. But somebody who has
apparently over a million dollars in investments—or maybe the number’s gone up
more recently—they’re excused from that. They can invest in anything they
want. And so there’s basically a rich person’s exception for investment
restrictions, which, you know, you might argue is letting rich people get
richer and preventing the rest of us from getting as rich as them because
we’re not allowed to invest in what they invest in.
Certainly some kinds
of things we regulate for children differently. That goes back to when we were
talking before, right, like children aren’t allowed to drink alcohol, perhaps—
Yeah, I mean I guess I think what we must think is that the very same sort of
cognitive defect that’s constituted by the person’s not having the
relevant information, right, is going to have bearing on their receptivity to
advice. And your framing suggests those two things should be quite independent
of one another, right? So take the million dollar people who can invest,
right? I guess the thought is supposed to be that like they know what they’re
doing, right, more than us, right? Which may or may not be true, right?
but the point is that if someone doesn’t know what they’re— If you know that
someone doesn’t know what they’re doing, right, then you might restrict their
activity for their own benefit. And it— I guess it may be that that person
will sort of intrinsically perceive that as disrespectful. So that like when
you were saying, Oh well, they’re not—
Exactly! Right? I mean, so you know one reason you might think it’s
disrespectful is just, Well, the advisor is making a mistake, right? Namely,
they could just be doing the advice thing and they’re switching over to the
other thing. But I think it, like, it seems to me it’s not always a mistake
because the value that the— Well, it’s both that the ignorance, the
substantive ignorance has bearing on the person’s ability to navigate the
advice situation, and that people place a synchronic value on making their own
decisions; not on learning how to make their decision better, but on making it
right now. And both of those things could justify, in effect, the use of
force, which is what this is.
Well so again, the most naive analysis of the situation is to say the advisor
knows more, and the advisee knows less.
Therefore put the advisor in charge. And so the next level of analysis has to
say, But the advisor could give any advice, so why don’t they just give the
advice? And so now we have to have something beyond knowing more, because we
know that even if you knew more, if they believed you knew more and respected
that you knew more, then they could just take the advice to heart and
therefore you wouldn’t have to force them. So now we seem to be dealing with a
situation of disrespect on one side or the other.
You know, you were
positing perhaps like a systematic bias whereby people just don’t appreciate
how much other people might know more than they do. And you might ask, Well
then, we would then give an exception for the people who know a lot about
anything, right? So say you’re an auto repair person, you always have all
these idiot customers coming in who don’t appreciate how much you understand
about auto repair, right? Well, now you should be allowed to pick your own
drugs because you will see that, yeah, the doctor can know as much more about
drugs than you as you know about your auto repair customers. Right? And so now
we might exempt anybody who’s just really good at something.
That doesn’t seem transferable. It seems wrong to me. That is, I think that in
fact if you’re very, very good at auto repair, and you’re regularly in this
context where you know a lot more about it than the people that you’re around,
that might lead you to think you know more in other contexts.
So, but that’s positing a mistake now, right? And so yes, we want to move out
of just assuming everybody’s rational, and consider mistake theories. But we
want to walk through it carefully. Because we have to ask, well, who knows
about the mistake?
Yeah, I mean, I’m not actually even sure that it’s irrational, like that it’s
underestimating of the difference.
It’s both, right?
Well, so that’s why I gave the case— It is wrong, right? But some mistakes are
not irrational. So like, that’s why I gave the Grand Canyon case, right? Where
like, is it like wrong that you were surprised by the size of the Grand
Canyon? Was it like a mistake? You should have predicted that it would have
looked surprising, and therefore not have been surprised? And it’s like, well,
there’s something—there’s a reality that you couldn’t sort of create all this
space for in your mind before encountering it, right?
And I mean, maybe
this is relevant to the question of like why can’t the advisor just convince
you that they know more, right? And you might think the only really certain
way for the advisor to convince you that they know more than you is by
teaching you everything they know, that’s the 100% clear way. And if they do
that then you should acknowledge, right? Then you’re rationally required to—
It sounds like you’re positing a consistent overall bias in ordinary people to
not respect other people as much as they should, other people who know more
than they do. That is, people just refuse to believe it.
I mean I don’t think it’s a bias to in effect refuse to believe that the Grand
Canyon is so big until you see it. That’s not a bias.
But I think when somebody says, Ooh, the Grand Canyon is really big, they
don’t mean I misjudged its size beforehand.
Well, sure they do if they’re surprised. They’re like, This isn’t what I expected.
But they’re saying like, The details of how it impresses me with its bigness
are details I couldn’t anticipate. And so they’re telling you just the
richness of the experience of the size. I don’t think they’re telling you
it’s, in fact, bigger than they thought.
I mean, I don’t know whether what they’re saying— I mean, what they’re saying
is, “It’s so big.”
And the knowledge of it—
Right. This seems like a whole— I mean, this is a questionable enough case
that it really shouldn’t be the basis of thinking about these other cases.
So you know, we have a clear, standard idea of what bias would mean. And it
seems like— I mean it does seem like many people do believe that other people
are biased in many ways. And that’s interesting. But there’s other interesting
data here, right? So let’s imagine the customer of the drug regulations,
right? And so now we think the drug customer doesn’t think this drug looks
that dangerous in their own judgment, and then the regulator/advisor role we
imagine could say, No, this really is a very dangerous drug, they’re not
getting much benefit, we definitely recommend that you do not do this. And
we’re positing that this listener is just not taking this seriously enough.
They just are not putting enough weight on the advisor.
And that that’s
what would force the advisor to be tempted to enforce the rule if they could.
And now let’s add a third player, who is the voter who authorizes the
regulator, who is actually pretty similar to, and often the same person, as
the customer. And so what we see is that the voter authorizes the regulator to
restrict the behavior of the customer. So therefore the voter seems to believe
that in fact the customer misjudges the advice of the regulator, but they’re
the same person.
It’s like the picture on your blog of Odysseus asking his sailors to bind
him to the mast, right? So that he can listen to the song of the sirens.
So people want to be bound in certain ways, and I think that, you know, you
might think— I mean, you could view it as bias but I’m not sure you have to
see it that way. So the kind of respect that you’re talking about where you
respect the authority of the person who knows more than you is real— it really
is always going to be a matter of, to some degree, giving someone the benefit
of the doubt, right?
Because like suppose that like I trust you, you’re
an economist, you’re an authority about certain things that I don’t know
anything about. Right? And so you have this expertise. But imagine that I were
an economist, and imagine that I were an economist like in a similar— worked
on similar stuff to you. I could grasp the nature of your expertise much
more precisely, right? I could know exactly what kind of knowledge you have,
which I can’t know given my distance from you. And assume that I was not an
academic at all, right? I’d be even further— I could still respect you in all
the cases, but this kind of respect is a respect that’s filled with doubt.
It’s the benefit of the doubt, right? And so in a way, what you’re— What I
feel you’re asking of respect is too much. Which is to say: respect me as
though you were the fellow economist who could see everything I knew, even
though you’re far away.
Okay, but I’m losing track here. Let’s connect it back to the case.
The customer, the regulator and the voter.
So the voter says, I want to authorize the regulator to control the customer.
The customer in that role says, I’d rather you didn’t control me. I’d rather
you let me make my own decisions.
And now, in some sense, they’re often the same person—you say this is a
self-control problem. But we have other ways to solve self-control problems.
So for example, the customer, ten years before, anticipating the self-control
problem, could have authorized a regulator to limit their behavior. We didn’t
necessarily need government for this. We could have just had that arrangement.
So a similar thing happens with like saving plans or
something like that, right? If you think you’re too tempted to spend money and
not save it, sometimes what you do is you authorize your employers, say, to
withhold some of your money and put it in a place that you can’t get to for a
while. And we have many other, you know, people have gambling problems—there’s
a way they can put their name on registries at casinos that won’t let them in
unless they’ve applied to change this. Oh, and they have to wait five days or
something. And there are, like, websites actually out
there where you can have a thing you want to make sure you do, and if you
don’t do it, there’ll be a penalty you pay. You sign up for that and then if
you don’t do it, you actually end up paying a penalty to somebody else, and
you can use that to force yourself to commit to things that you didn’t— might
not want to do at the moment.
So the government forcing you to do what
the drug regulator says isn’t the only mechanism available for you to have
self-control over your future self, who you think will make bad judgments. And
so that’s another part of the puzzle, is that it looks like people want to
control other people, not themselves.
Well it may be that government is our way of coordinating that activity with
one another, so that—
But if we don’t need to coordinate— That is, you can just do this yourself.
But it might not be so easy to do it yourself. Like a lot of people, for
instance, choose to, you know, get involved with groups of people when they
want to control their drinking or their eating or whatever. And like maybe
there would be mechanisms for them to control this in some sense by
themselves, but they find it easier when there are— like, they’re— In effect,
the social forces are powerful forces that people want to make use of.
We’re not just using your friends’ norms, we’re using government here, right?
And we’re using it on strangers that you have very little personal connection
with. Another set of interesting issues here is the fact that we are very
reluctant to do this for many other things. So we might say, you know, our
society is relatively libertarian about, say, sex and relationships.
And— But you might think, look, you know, some 14-year-old, or 18-year-old
even, there’s so much they don’t know—why should we allow them to choose their
own partner without, you know, having a regulator say, No, this boy isn’t good for you?
Right, so my theory about why we do this, namely when there’s a knowledge
differential, would predict that we would not do it as much in areas where we
don’t feel we have that much knowledge. So even if the 14-year-old or
17-year-old has very little knowledge, we might think, Yeah, but looking at
it— Getting more experience, whatever, I’m still lost.
A lot of older people think they have a lot of knowledge relative to the
mistakes young people— I mean, young people make mistakes, not just, say,
regarding their relationships. Say their careers: Young people try to go
become an actor, musician. And many older people in their world tsk tsk
privately and say, That’s a mistake. But they have no power to stop it.
Although in ancient societies they did. So another interesting data
point here is that we are different from ancient societies, which more often did
allow, you know, other people to restrict your choice of mate and your choice
of job and things like that.
Right. But so to add another element to my theory: So I think people do
sometimes try to restrict their children’s, and whatever, choice of mate at a
local level, right? But in order to get the group restriction going you
actually need more than just individual people thinking they have a lot of
knowledge about this. You need the whole society coordinating on a shared
sense that there’s something we know, in the way that we do with medicine,
right? We have this shared sense that we know the answers even if like few of
us actually know what they are. But we have for some reason the sense of we
know a bunch of answers, and I don’t think we have that shared sense with who
you should have sex with, who you should marry, or what career you should
have. We don’t have a shared societal sense of the answers to those questions.
We used to.
And so, there’s a switch, right? And we used to let you make your medical decisions yourself.
Right. So one way that societies change is in what there is a consensus about.
And so like, where there’s going to be an epistemic consensus, there you’re
going to find regulation.
So if I move back again to the situation of the voter versus the customer, I
want to make the argument that typically the— in many cases the customer has a
way to directly constrain themselves, and the reason why the voter wants this
constraint through government policy is that they are thinking of other people
than themselves. They are trying to protect, perhaps, other people, and they
think other people are not as wise as themselves, or not as willing to invoke
self-control on themselves.
And so there is an element of disrespect
there. Where in some sense when the government limits things it’s because
we’ve all decided that at least a big fraction of the population has a problem
listening to doctors’ advice, or investment advice.
Right. And so it could be true that the person in question—so the person who
wishes to restrict the behavior of others…
The voter, here.
That this voter, right, they actually think, I’m fine. I don’t— I know what I
need to know. Right? But they think other people are very defective and don’t
know stuff. They could be right about other people and wrong about themselves.
And that this is a way—
Right, it could be vice-versa. Right. But I think you’re right that people
like— In fact what I was saying is that people overestimate their own
knowledge, right? And this voter knows that about other people correctly,
right? That those people are overestimating their own knowledge. But of course
that doesn’t make the voter immune from making the very same error. Right? So
the voter overestimates his own knowledge, and so by protecting others he in
effect protects himself. It’s that protection that he needed, though he’s not
able to see that he needs it.
Right. Now it’s— In this conversation we’ve focused on potential defects in
the customer, and not on potential defects in the regulator.
So you know, we also find that not only are people more defective than they
like to think they are; regulators are also more defective than they like to
think they are, and more defective than voters like to think they are.
So we could get big mistakes in that other direction.
Yes. You’re much more likely to get me to be sympathetic to the project on
those grounds. That is, I think the customers have— probably their knowledge
is very defective, and they’re not— That is, I’m inclined to think that
ignorance is a substantive and genuine epistemic defect that cannot be
remedied by rationality. Like it’s sort of like you— What you want to do is
say like, Okay, you don’t know this stuff. But let’s throw in the rationality
sauce, so you can know that the other person knows more than you and
thereby—right? And I think, no, that’s not going to work. If you’re ignorant
you’re ignorant, and that’s going to screw you up until you know.
And then you have the other— Then the question is, Do other people know any more
than you? Right? Do the regulators know any more than you? And if they
don’t—and if they only believe that they do, and in fact if their belief that
they do is the product of this very same bias where people tend to think they
know more than they do—then there’s a big problem.
And more fundamentally, the problem would be the voter’s bias regarding the regulators.
So, and with more levels of indirection. And so now we want to ask, Okay, yes,
people like are more inclined to just make their own decision for themselves
and listen less to others. So I’m very— I pretty much believe that most people
don’t listen enough to advice, just as a general rule in all sorts of areas, right?
People would rather assert their independence and their autonomy, and they
think they— Basically people seem to think, and probably correctly, that you
look lower status when you take advice.
And so, you know, a standard
thing is often that a leader like tries to show they’re decisive and
strong-willed and make decisions, and they take advice privately but not
publicly. So if some sort of rival offers advice, then they’re just going to
go out of their way not to listen to that rival’s advice. And sometimes rivals
can undermine someone by tricking them into not following advice they give.
So it just seems to be this large dynamic of people wanting to show
their independence and status by not taking advice, at least from people at a
similar or lower status level. And that’s just a defect I see in people in
general. But, you know, there could be matching or similar defects regarding
who we decide we think are able to overcome those problems by being advisors.
Right. I mean, you would think that it would be symmetrical in the sense that
if it’s a source of status to give people advice, then people are going to be
very inclined to give advice, because in giving you advice, they are
positioning themselves as being above you and as being higher status than you.
And that’s a big argument against taking advice.
Right. So then regulators might get full of themselves.
And not account enough for the ability of customers to make decisions.
Right. And so people may give too much advice as status-seeking.
And so then people may be rational not to take advice because a lot of the
advice is coming from the status-seeking.
So why do you think people are too unlikely to take advice and not that people
are too likely to give it?
I would say both.
Oh, I see. Okay.
But now we have to weigh the two problems when we’re thinking about
paternalism, right? Because they’re— The fact that people don’t listen enough
to advice is a reason to have more paternalism, but the fact that people give
too much advice is a reason to have less.
But when you said that people don’t listen enough to advice I took that to be
a net point, right? So on net, right, people don’t listen enough to advice? So
that suggests you’re pro-paternalism because you think that the problem of
people being too likely to give advice is not as big as the problem of people not listening enough.
I didn’t mean to have that implication. I just meant to say people don’t
listen enough to advice. But I also mean to point out that people give too much advice.
And in fact, if we look at advice-giving around us, among—
So you just think there should be less of the whole advice…
Well, I mean I want to, like, frame the paternalism question in that larger context.
That’s, to me, a strongly relevant factor. Right? If we just assumed both
sides were rational, we’ve got this puzzle: Why does it happen? Now we need to
correct both sides for realizing that they are too, you know, reluctant to
take advice, too eager to give advice. And now ask, you know, How does that
change whether we think paternalism is a good idea or bad? That would be the
key question to ask because now we’re no longer in the simple rational-agent
model. We’re in a model where people are reluctant to take advice and eager to give it.
And so you might say, Well you know, if the advisors, you know,
their advice, you know, just goes wrong just as badly as the advisees’
unwillingness, maybe it cancels out. So we’re looking for, you know, does it
cancel out or is one side a bigger problem than the other?
Now, but the
regulator advice is a two-step advice problem. It’s interesting to think more
about like, it’s not just that the regulator would give advice, it’s that
people are empowering them to give advice by— or force their advice,
basically, right? That’s the key thing: The voter says, I approve this
regulator to make those people not take the drugs or take the drugs, as the
regulator requires. And the question is like what sort of biases could the
voters have with respect to other advice givers.
So one frame might be
they see this regulator as an extension of themselves, for example. So for
example you might think, those poor people, one of their problems is they
drink too much. Or they take too many drugs, right? Or, you know, they get
divorced too often or something, right? So you’re a
person, you see this population of other people and you see that they aren’t
doing so well. And you have an explanation for why they’re doing badly, and
that comes down to kinds of advice they’re not listening to.
And so you
empower the government to go fix those people, and in doing so, you not only
get to show yourself as like a person of concern with altruism, but you also
get to raise your status. Like you’re declaring, I know better than them. And
you are embracing that, and without necessarily having to sort of spend the
years to learn something. You are getting it on the cheap, right? So the
doctor may have to spend years to get their status of being someone that can
give advice, but you as the voter can just believe those doctors and declare
that they know better than the poor people who aren’t taking their advice, and
then you get to take on the status of being the advice giver.
I mean, I guess one thing that I don’t feel very sure of is that there is some
kind of very firm boundary, actually, between advice and regulation. Like if I
think about parenting, right…
There’s a wide mix of things—actually you can do mixtures of forcing and advising.
Well, what I’m saying is advice is a kind of forcing.
Sure. And it’s less extreme.
Like, it’s forcing that appears to the person experiencing it to be less
forceful. So you’re in effect going to be more likely to deceive the person
into not seeing the pressure you’re putting on them.
It comes down in part to, like, your ability to make them listen. Like, so I
think you were telling me a story the other day about some doc— You went to
the doctor to get a drug, and before they would give it to you they just made
you listen to the speech.
Because they had the power to make you listen to the speech, and they had a
rule about making you listen to the speech. And in some sense as a parent you
can make them, your kid, listen to your speech even before you give them their—
And— But in our larger society of adults, if sources of advice want to give
long speeches, they kind of have to get a buy-in unless the government forces
those speeches. So on the side of packages there are nutrition labels, and
that’s a forced speech. I mean it’s advice, but you have to, you know, see it
because it’s on all the packages, right? So we sometimes force you to
listen to advice, and sometimes we give you the option to go for advice. And
obviously, giving you the option to get advice is going to be less forcing on
you than forcing you to listen to the speech, which is less than forcing you
to follow the advice.
I mean, so I wonder whether the differences in force there are real or only
apparent. That is, so say, you know, you might have like a nudge or something,
right? Where like there’s the healthy food in the front of the supermarket or
whatever instead of the candy or something like that, you know? And that’s
like the lightest touch of advice, right? (In one way of seeing it, right?)
And the person who responds to the nudge, and who buys more— starts buying more
fruit, right, is going to experience that as being very unforced. Right? In
fact they’re going to experience it as just their own decisions, right? And so
what you’re doing is you’re trying to influence people’s behavior without
actually teaching them why the way that they’re behaving is the right way to
behave in all of these cases, right? So you’re trying to, you’re trying to
create influence without knowledge. And I guess I would describe all of that
as a use of force.
However the use of force can be very nonviolent. In
fact it can be quite pleasant. Flattery is a way to use force. Right?
So the real question for you— There is this very, very important dividing line
between one kind of force and other kinds of force: Namely, the kind where the
government backed by, you know, the, whatever, criminal justice system, right,
gets you to do stuff, versus other ways that we use words or packaging or
nudges or whatever. And the stuff that I would say is the use of pleasant
sorts of force, we often call persuasion, right?
But from my point of view, there’s like yet another thing, which is like
teaching or the pursuit of knowledge, right? That’s what it would be to get
someone to behave in a new way not by using force. And so I really draw the line there.
I don’t really care where we draw the line defining the word force. To me it’s
more interesting to just ask counterfactually about particular policies, What
if we took them away?
And is the world better or worse under that scenario, right?
Well I think you want to know another thing, which is, is the world less
paternalistic? And my thought is, it’s— That is, the distinction you want to
make isn’t a distinction between more and less paternalism, it’s just a
distinction between how apparent that paternalism is going to be to the person.
Okay, but if you’ve got somebody who, say, wants to take a particular drug
because they think it’ll help their cancer situation or something, and the
government says, No, you’re not allowed to do that.
Threatens to shoot them or something if they don’t. If we took that government
thing away, we also know that their doctor might give them advice about it,
and their family or friends might pressure them and shame them or praise them
depending on what they do, and that they will be in a larger social world;
they won’t be able to make this action completely independent of social forces.
But nevertheless, they might see the taking away of that gun threat as an improvement.
They would see it that way. But my question’s like, Is there really such a big
difference? So take my kids, right? Suppose that I set up a situation for them
in which like I know how to structure—and sometimes I can do this—like I know
how to structure a choice such that it’s just going to appear to them that the
thing that I want them to do is the thing they most want to do. I’m very bad
at this with my eight-year-old because he can see through attempts to do it,
right? But sometimes I manage it, right?
Let me— Can I give an example?
Let me just try and think of an example. Okay, here’s a good example. I don’t
like when my kids say, How much longer till we get there? So we created like a
sort of a game where you get a certain number of
how-much-longers-till-you-get-there. Each kid has a certain number, and they can
trade them with one another, they can give them to one another, they can
decide when they want to use them. They’re often like looking at the clock to—
And so sometimes they’re very annoyed because they get to the
end and they didn’t use them, you know? And then they want to know whether
they can bring them over to the next trip. Right? But somehow, through this
game, right, I manage to make it fun for my kids not to ask, How much longer
till we get there. That’s also paternalism.
So this is related to a concern many people have about overregulation, which
is that if the government has enough regulatory levers they can get pretty
much everything they want by threatening to use some while recruiting others.
So for example, the government isn’t allowed to officially censor Facebook or
Twitter, but if the government has other regulatory powers over Facebook or
Twitter and threatens to, sort of, break them up because of antitrust,
and it makes clear, We would sure like you to censor these things, and we’re
thinking about breaking you up by antitrust; then they can use their other
powers to get the censorship they want.
And so you as a parent have
enough powers over your children that if you gave them some official freedom
in some area, you could compensate for that through other kinds of things you
control to basically get similar outcomes. And so in a world where the
government has enough broad powers, they can get pretty much everything they
want. But if we’re not in a world where they have such broad powers,
then in fact it would be different, right?
So a great many
people, for example, are worried about cryptocurrency being regulated. And
that may happen, and if it is, you know, that will be because people think
that regulation makes a difference. That is, if we forbid investing in some
kind of cryptocurrencies because they don’t meet regulatory requirements,
which they don’t, then people won’t invest as much in those things. And the
people who offer those things won’t make as much money. And the government
doesn’t in fact, in our society, have a whole bunch of other levers it can use
to basically get the same outcome. And that is true for a lot of our— We have
a lot of regulations that actually matter, because if they went away, you
know, people would act more freely and do more things, because we aren’t in a
society with a big parent like you who can move other things around to
basically get the same outcome.
Or I mean, it may not be that they would act more freely, but that there would
be like a lot of noise in how they act, let’s say, right? So there’d be a lot
of different factors that would shape how they end up acting. I mean, so isn’t
the question really just, Well, how important is it that people act
in such-and-such a way. And if that’s quite important, and if the value of
that outweighs the cost of the regulation, then we should do it. And none of
this stuff about freedom or paternalism or respect is really that relevant to
the question of what we should do.
So I want to bring in another data point that I think is relevant, which is if
we think about free speech, like an abstract argument for free speech is to
say that your listeners should be entitled to make their own choices about
what to believe, right? So regulating free speech in some sense is a
regulation of listeners saying you’re not allowed to listen to this.
So from that point of view it would be a question of the ability of listeners to
judge what is safe to listen to and the ability of regulators to restrict what
to listen to, and whether they would do that for, you know, biased political
or government-administrative reasons, rather than just trying to help you out
by preventing you from listening to things.
But if you look at what we actually do for free speech, we give people free speech
on the basis of who they are and not of who’s listening. So for example we say
that people in the United States have a free-speech right to say things to
other people in the United States, and foreigners don’t. But it’s the same
listener: The listener in the United States is not allowed to listen to the
foreigners, but they are allowed to listen to the domestic— other people from
the US. So in that sense it looks like the free speech right is a sign of
respect given to the US speakers, and it’s less about protecting the listeners
and more about respecting the speakers.
Well, you could think that shared cultural context puts a person in a better
position to assess what another person is saying.
You might, but then there’s all these other cultural-context variables that
are not being included in this discussion, right? We don’t say you can’t
listen to somebody from another profession or another ethnicity or another
gender. I mean sure, there’s all sorts of variables by which people might be
more different from you, and we might forbid you to listen to them. It would
be kind of crazy.
Right, I mean, but it might be, for instance, that we’re just like— We somehow
think that that’s like, you know, given that we can—given that the regulation
can be that fine-grained, like that’s the number-one issue.
Right, but that’s crazy. I mean, come on. Among all the differences between
people that might make you like misjudge whether you should listen to them,
them being a foreigner seems way down.
It doesn’t seem that way to me. I mean, that is, when I try to communicate
with foreigners there’s often this like, time— this like, delay in me
understanding their sort of signals and the way they’re talking, and all that…
Okay, and yet we celebrate travel, right?
I do not. Don’t—
So the same argument. Okay, but some might say, We should prevent people from
traveling to foreign lands because when you travel to foreign lands, then you
can listen to them, right? Well, we don’t let you listen to foreign people,
but we let you travel to foreign places.
It’s interesting though that like, if you studied how people travel, I wonder
how much listening to foreign people actually goes into that, right? There’s
something very regularizing, and—
Oh, which might suggest that you wouldn’t listen to them at home either, right?
That’s fair enough. And I’m sort of— I’d be sort of a little bit surprised to—
Maybe it’s also that the regulators themselves don’t feel that they can easily
tell what is dangerous in the foreign person. Like the foreign person could be
speaking almost like in a kind of code that they themselves are not able to
see through. Or maybe just in general, we think foreign people are more dangerous.
So I’m suggesting a set of ways to think about this as to say that it’s on the
surface supposedly about protecting people. But in fact it’s more about the
regulators’ and voters’ and other people’s status. And what we do is we
empower high-status people to regulate people who are low status. And then
the voters gain status by affiliation with
the high-status people regulating the low-status people. And so handing out
this power— The freedom to make choices is a way of handing out status.
You know, for example, in health regulation, basically poor people drink about
as much alcohol as rich people on average, and use about as much cocaine on
average, but people are really worried about it for poor people and not very
worried about it for rich people. And in fact they go out of their way like to
ban or limit liquor stores in poor neighborhoods but not in rich
neighborhoods, or things like that. So there’s a number of these.
For example, single moms. You know, it turns out, the literature as I read it,
that being a single mom may hurt you yourself in terms of your own outcomes, but your kids are
actually about the same as if you hadn’t been a single mom, because the people
that become single moms are relatively poor anyway—that sort of thing. So, but
people are very concerned about poor people being single moms, and trying to
restrict that and to regulate that, and they’re not so concerned about rich
people being single moms.
So you might— It might be like you mentioned, that you have several kids but
one of them was like sick when they were little or something, you know? And
you’re always more protective of that one. And you’re always— and even in
situations where it doesn’t make any sense for you to be more protective, I
think it makes sense that people in general have more of a protective instinct
with respect to poor people.
But to go back to the status thing, so— and
we probably, we just have to have another conversation about status because
we’re almost out of time. But you know, I think status is equivalent to a kind
of reputation, right? So it has to be indexed to something. I mean, there
isn’t just generalized status; there’s status within a community for something, right? So
it’s a reputation for something. And so like— And I think you always have to
ask, like, Well, what is the reputation for? It’s like not a thing, it’s a relation.
And in this case, right, they want the reputation for
protecting and caring about other people, right? And I think in general,
though societal organizations can become sort of so pathological that these
things become dissociated from one another, in general if a society is
functioning like reasonably well, there’s going to be a strong correlation
between x and the reputation for x. Right? There won’t be wild— Like, it’s not
like the physicists are the people who know no physics, right?
That would be super weird. And so in general, I think the people who are
seeking status for something, what they’re seeking is a kind of coordinated
recognition of the value that they’re actually pursuing.
You’re assuming that the purpose here is to protect these people. So I’m—
That’s the assumption that I’m challenging. I’m not— I mean, I’m not saying
people don’t deserve their status, and that there isn’t something behind
getting status. I’m just wondering whether it’s actually for— about protecting people.
Well if it were about protecting people, I would predict it would focus on
poor people. So the fact that it does, for me, reinforces my idea that it is
about protecting people, just as if your attitude towards your children and
your restrictions were about protecting them, I would predict that you would
have more restrictions with respect to the one who had been sick, say.
Although we do a lot of things to poor people to limit their options and to
prevent them from…
But you might limit the options of that child who’d been sick more than the others’.
I would predict this is what you would do.
So I wanted to just mention one— I guess one last interesting point, which is
that regulators could be completely altruistic and completely concerned about
their citizens, and still have an incentive to lie. That’s because there could
be externalities of behavior. So let’s imagine driving and drinking caffeine
or alcohol. So, you know, regulators might want to tell you, like, how much
you should worry about the side effects of caffeine, and how much you should
worry about the effects of alcohol, for yourself.
But they might realize
that if you drank more caffeine, you’d be more awake on the road, and if you
drank more alcohol, you’d be less awake on the road. And the effects on other
people are clear, so that it gives them incentive to be biased to tell you
caffeine is better than it is, and to tell you alcohol is worse than it is, to
help protect other people, not you. That is, if they’re giving this advice— and
in general, one way to fix any sort of market failure, or any way in which
society does too much or too little of something, is, as an advice giver, to
just lie about the quality.
So for example, if we think for society’s benefit it would be good if more
people went to school, then we want to lie about how good school is for you,
and tell you to go to school.
Right. And you in principle don’t have a problem with that, as per our
conversation about honesty. Whereas I think honesty is a final norm, and so
you should just be honest. But you think you should do what’s good even if
it’s dishonest, except that you also have to worry about the group level, and,
you know, heuristics, and all of that, right?
Well, right. But here, it’s not about praising them or not, it’s just about
realizing that this problem of advice givers not being entirely believable is
robust to whether they have altruistic motives or interests.
Right. That makes sense.
It’s a more general problem. So you should expect, pretty generally, that
advice givers will not be exactly trying to help you, and therefore that you
won’t want to take their advice exactly. You might be in the ballpark of the
advice you want to take, but you’d be worried and suspicious about this
difference between them and you.
Right, that makes sense to me, because I see advice as belonging generally
within persuasion, which is a mode by which people use force. So the fact that
there’s deception involved helps them use force while making you less
cognizant of the fact that they’re using force, which is a big function of
advice, as I see it.
Say you have an uncle or something who has an issue with how you’re living
your life, and you have a choice between him giving you advice or enforcing
his preferences. I presume that you would rather he give you advice. I mean,
yes, you don’t want to have to listen to his sermon, but, you know, it’ll be
over soon, and then you can go make the choice you wanted. So I think, from
most individuals’ point of view, they’d rather get advice. So even if advice
has negatives compared to other things, most people would still prefer having
the option of advice, right?
That is presupposing that I— Suppose I know my uncle is just an incredibly
charismatic and persuasive guy, and really, really good at deceiving people,
right? That is, he’s really good at using the kind of force that’s at stake
in advice. I might not prefer that.
You’d rather he just gets his way? Make you do what he wants?
Well, it may be that if he forces me, I can rebel in little ways that he
might not notice. Or I can break the rule and be punished. But if I let him
persuade me and infect my mind with the idea that his way is the right way,
that could be worse, yeah.
Of course, what actually happens is that governments who force you to do
things also give a lot of advice.
Sure, that makes sense to me, because I don’t think of these things as
different in kind; they’re just different avenues for the use of force, some
more deceptive than others.
So fundamentally, I guess the key question is, Do you trust the regulators,
and the voters who empower them, to be roughly trying to help you or not?
And to the extent that you think they have other agendas that are stronger,
you’ll be more worried about them having this excuse to control you,
supposedly in your name or for your benefit, when in fact it’s not.
But to the extent that you think
they are in fact driven by processes with reputations tied to whether they
actually help you, the more comfortable you might be with this whole setup,
where they’re empowered to do things they say help you, and that can
constrain what you do.
I trust that everyone is roughly always trying to do the good. The problem is
that they don’t know how, most of the time.
They could also disagree about the good.
They could disagree. But even in disagreeing, they’re trying to get at what’s
really good; there’s a common thing they’re trying to get at. For instance,
we could disagree about a certain value, whether it’s actually valuable,
right? But if you think it’s actually valuable, that’s you aiming at the
good.
Maybe we’re setting up for another conversation here. But you know, I might,
as the usual economist, say, Most people think that they or their friends
should be in charge. Right? Most people think the good world is one where
they rule, and their allies rule.
And that’s a disagreement
about the good that’s pretty fundamental and problematic, right? Because
people then fight over who’s going to be in charge, and that’s an issue for
which I’d like institutional solutions where I’m not going to just say, Well,
apparently you think you being in charge is good, so I guess we’ll just go
with that. I mean…
We should have another conversation about that.
But nice talking!