Hidden Motives
Agnes:
Hi Robin.
Robin:
Hi Agnes!
Agnes:
So I want to ask you about hypocrisy. Let’s start with, What is hypocrisy?
Robin:
Hypocrisy has many different associations. I’m interested in what I’d call
“hidden motives.” So— And motives happen at many different distances from our
choices. So in my book with Kevin Simler, The Elephant in the Brain: Hidden
Motives in Everyday Life, we’re focused on looking at distal motives, that is,
motives that are far from the behavior, that we could explain using fundamental
explanations. So evolutionary psychology, for example: We might look at
kinds of things people do and ask, What’s the most plausible distal motive?
That is, basic genetic evolution, cultural evolution, that would have shaped
those behaviors? And that’s different from the immediate, proximate cause of
the behavior/motive for the behavior, i.e., what’s in your head and what was
your plan?
Agnes:
So I don’t think that’s hypocrisy, but…
Robin:
Hypocrisy might be a difference between the actual distal motives and the
proximate motives that you would present yourself as having.
Agnes:
Right. I don’t even think that’s hypocrisy. But I think— Let me ask you
something about that, then I’m going to get back to, like, what I think
hypocrisy is. So you know, we have… There are kind of, like, evolutionary
explanations for a lot of behaviors that are… I’d say like at a very great
distance from the behaviors as we would represent them to ourselves. And I
take it your view is a reflection on those evolutionary origins, isn’t—doesn’t
stand to correct the self-perception of why I am engaging in the given
behavior.
Robin:
It does correct it to some substantial degree. That is, we present ourselves as in
control of ourselves and as understanding ourselves. And so when we have a
sense of our proximate cause of our behavior, and we present to the people
around us as having that proximal cause, we do present it as also being a
distal explanation.
Agnes:
Right.
Robin:
We present it as if this is all you really need, the main thing you need to
understand what we did is this particular proximate cause, and that there
won’t be a whole bunch of other subtler shadings of the behavior.
Agnes:
So, like, suppose you— Do you think if you were to do a study of all the
people who’ve read your book and then look at how they present themselves—in
practical contexts, not when discussing your book, right? Just like, when
they’re like, Let’s go to this movie, or whatever—you think that they would
behave differently in this context, in the sense that they would present
themselves as not really having a sense of why they’re doing what they’re
doing?
Robin:
I would certainly claim that for most people who have read our book, it hasn’t
influenced their behavior very much.
Agnes:
Okay.
Robin:
So, most people—
Agnes:
Has it influenced their sort of conception of their behavior? Not their
abstract conception, but their lived perception?
Robin:
Probably not much. It’s just very hard to do.
Agnes:
Okay, and you know that’s what I thought you thought. And so what I want to
ask you about is, if you think there isn’t much hope for, kind of, this
becoming a transparent way of looking at one’s own behavior, why do you think
there is more hope for it becoming a source of, like, policy change?
Robin:
Right. So I think there are scholars and intellectuals who hold themselves to
higher standards than they expect most people to hold themselves to, and who
are willing to devote more energy and attention, and more willing to overcome
reluctance of various sorts to see and believe things. So, I mean, just— Most
ordinary people, when they think about politics, for example, have a somewhat
idealized concept of politics and most professional political scientists are
relatively cynical about politics. Most professional political scientists have
studied a lot of detail about how politics actually works. And, you know, have
seen a lot of how the sausage is made. And they might be idealistic still
about what they hope to do in politics in the party they hope to support, and
what they hope it’ll achieve, but they’re still somewhat cynical about how it
works.
And so I might hope that for other areas of life in addition,
social scientists who specialize in studying some area of life, they would see
a lot more detail than most people would. That detail would be puzzling from
the point of view of the usual explanation. An alternative explanation that
made more sense of those details would be something that stood out to them as
a better explanation, and they would feel more of an obligation as a
professional who studied that area of trying to make sense—to acknowledge the
truth or plausibility of the more cynical distal explanations that explain
more of the details.
Agnes:
But do you think it would change the way—their lived experience? The way they
live their lives, the way they make movie recommendations to their spouse, and
all that?
Robin:
It would to some extent, but to a limited extent. That is—
Agnes:
Even for the social scientists.
Robin:
Right. When a political scientist is hired as a consultant to a political
campaign, their advice is more cynical than would be an ordinary person’s
advice to the political campaign. Right? They are living that experience; they
are, you know, advising the campaign. Nevertheless, in some other modes, like
when they choose their own vote, they may not attempt that. They may allow
themselves to think of themselves as an exception who has idealized motives
for their vote, and in that sense it wouldn’t change that part of it.
Agnes:
Right. I’m not really asking about whether one can maintain this cynical
perspective, but whether it can sort of become a kind of guide in action.
And so maybe— Can I give an example? Like let’s take, you know, sexual desire.
Like, so we might think that in desiring someone sexually, we want to like
have sex with them, right? But actually that’s naive and idealistic. And what
we actually want is to have children, in some sense, right?
Robin:
Okay.
Agnes:
But that’s not how it seems to many of us much of the time, right? And so now
you might think, Well, could one somehow reconstruct one’s sexual desire to be
less, in your use of the word, hypocritical—which is, to me, a very weird use
of that word—such that one could see one’s own sexual impulses as reproductive
impulses through and through? And I think the answer is that’d be super hard
to do. And now, I think it would also be super hard to create social
institutions that kind of try to force people to see their sexual impulses
this way. It will be hard for the exact same reason. So what—
Robin:
I think it isn’t— I mean, you can move in that direction, and it’s certainly
very hard to imagine moving all the way. But I certainly think it’s possible
to move substantially that way. So a slightly analogous case: Imagine you’re
watching a stripper. Okay. And, you know, from a naive point of view, you like
to watch beautiful flesh, and there it is in front of you in high resolution,
close up, and you might, like, get turned on watching a stripper.
And
then you might notice that she has no interest in you whatsoever and you’re
never going to get with her. She’s just up there on the stage. And that could
take away from the attraction. You could realize that this is almost like
watching a movie, right? It’s much less attractive to me than if there were a
clothed woman nearby, perhaps, who seemed to show interest in me, right? And
you could, by reflection on the fact that the stripper was not actually
interested in you at all, that interaction could become less attractive to
you.
And similarly, I think that if you realize that, you know, a lot of
the attraction, I mean the source of sex is fertility, you might in fact get
turned on more by sex that had more fertility possibilities.
Agnes:
So like—
Robin:
That is a thing that could happen. It would move you in that direction. And
that would be, in effect, in the direction of what you’re describing.
Agnes:
Right. So it just seems to me that the, like, you know, the distance is
much greater with fertility than with the like, you know,
is-the-stripper-actually-attracted-to-me case. Right? And it’s precisely that
distal thing that I want to focus on as being quite distant. Which is, like— I
mean, like, you know, if one just goes by popular culture, it doesn’t seem that
being desperate to have kids, like you know, necessarily energizes people’s sex
lives, right? In fact—
Robin:
I have to tell you otherwise! So, when my wife and I were first trying to have
kids then yeah, it was a lot more…
Agnes:
Well, I mean…
Robin:
A lot more energy and enthusiasm.
Agnes:
Okay. Okay!
Robin:
[Laughs]
Agnes:
So like on TV there’s always, like, this problem…
Robin:
Well, okay!
Agnes:
So, maybe my data set is bad. But you know, it seems like…
Robin:
But I mean, to acknowledge the basic point: Yes, our conscious perceptions
of our motives are quite different from the distal explanations we will
have, and it’ll be possible to move some distance, but probably not a huge
amount.
Agnes:
Like, lots of people are really interested in sex and not interested in having
kids.
Robin:
Right.
Agnes:
And it doesn’t seem like it would, like, be so easy for them to dissolve
their interest in sex by saying, Well, this is really for kids and I’m
not—this has evolved in me—
Robin:
Right, but you could at least understand— I mean the distal explanations help
you understand many of the details of your proximate feelings. So for example,
like, men are especially interested in younger women, and women who seem
fertile and have signs of fertility and health, right? And you might ask, Well
that’s kind of arbitrary. Why couldn’t you have sexual desire toward other people?
Right?
And then once you understand the evolutionary origin of this you
could better appreciate where your desire is coming from and what is shaping its
details. And perhaps be more accepting of it, or more determined to overcome it,
perhaps. But you know, that’s— Part of the thing is just to be able to see the
details. That is, your simple proximate explanations of your proximate desires
are actually bad in the sense that there’s a whole bunch of details that
they’re not explaining very well. And these distal explanations will, like,
explain a lot of those details.
Agnes:
So that’s what I’m— I’m challenging you in that I don’t think it explains
anything. Like if I think about my own, like, sexual desires, I think—and then
I’m told, Well, these evolved in me so that I would have children—I feel that
doesn’t enlighten me at all about those desires. It teaches me nothing.
Because what I feel like I’m after is so different from what the evolutionary
story tells me that I’m after that, like, that I could find lots of stuff out,
but it doesn’t shed any light on my own experience.
Robin:
I mean, there’s a whole literature in evolutionary psychology of mating that
claims, I think correctly, to explain a lot of details of male and female
mating behaviors and sex. So you know—what times of the month you would be more
eager for it, which particular people you would be more eager for, and in what
ways, you know, they need to approach you to, you know, get you
receptive: A great many of those details are claimed to be explained by this
larger evolutionary framework.
So we could walk through this one by one,
but I mean, the first point is just to say there’s a literature that claims
that a lot of these details are explained.
Agnes:
I mean I guess— So like I guess there’s two different things you might mean by
“explained,” right? And so like, in one sense— That’s why I was sort of
focusing on whether that first-personal point of view could ever become
transparent, right? And what I’m saying is, like, in the case of, you know,
like sexual desire, is it ever going to be able to transparently collapse into
this fertility thing in such a way that it could be a corrective, right?
Where
like for instance one of your examples that you give is like how people, like,
pay—spend too much money on health care, right? And then the thought, like, I
guess, is that at some level, either individually or more likely as a society,
we’re going to, like, realize that what we are actually, you know, tracking is
giving caring signals, and thus, save some money and spend less money on
healthcare, where that then can become a first-personal guide, right?
Robin:
And I think it can be, particularly in the case of medicine. So I feel like
when you are sick, or, you know, have a rash or something scary that you don’t
understand very well and are feeling badly, then you feel at risk and you’re
scared. And one of the things you are scared about is dying or being disabled,
or all sorts of things, right? And you will be scared for a loved one who had
those things, and you’re honestly emotionally scared. And that drives you, in
your mind, to push for medical care—get them to go to the doctor and get
checked up, get yourself there, make sure you have health insurance, etc.
And
if I say, But what medicine really is is helping each other show each other
that we care, and in fact the medicine itself isn’t doing that much for
health, it’s mostly a way we can spend money to show each other we care. Now
in the context of feeling sick, I could say to myself—and I do say to
myself—Don’t get so stressed with whether I get to the doctor today or later,
etc., because that’s actually not going to help very much.
It’s, you
know, there’s a risk here, and I’m just going to face the risk and the doctors
won’t be able to do that much about it. But I can make sure that I feel like
I’m being cared for, that I care for other people, and I can just focus more
directly on reassuring each other that we care about them, and we’re paying
attention to them and trying to help them. And if I realize that’s the main
point here, I can just get less stressed about whether I get a doctor
appointment today.
Agnes:
So why doesn’t that work with the sexual cases? Why can’t I tell myself, if I
have sexual desires, that like, well, you know, this was for kids, and I don’t
want any more kids. So I can just be less stressed about whether or not I get
sex, because I’m done having kids.
Robin:
Well, I think it can a bit. So it’s, again, it’s about degrees. So I’m not
saying I would never go to the doctor, right? I’m saying I just might be less
stressed about whether I get an appointment today or tomorrow. Right? I’m not
saying I’m just gonna cut out medicine entirely.
I think similarly about
sex. For example, I think the way to understand a lot of the sadness you feel,
even the crushed feeling you have if you’re not sexually desired, is to see that
sex is not just pleasurable. It’s a sign of status in our world. People who can’t get sex
if they want it are seen as low status. They expect to be seen as low status.
And that’s a big status hit, and they don’t like it. Right? Because sex is,
you know, a big contribution to status, right? Attractive people who can have
sex when they want it, with whoever they want to: That’s very envied.
It’s a high-status thing.
And if you realize that that’s going on you
might say to yourself, Well, the real long-term game here is kids. And I’ve
got kids, and they don’t. And you might be able to look down on them a bit and
say, Yeah, they’re enjoying the sex game for a while, but that’s not going to
last. I’m here for the long game. And the long game is having kids, and I was
able to have a family and have kids, and so I’m winning over them. In the real
sense.
Agnes:
So you just created a different status game.
Robin:
But one that I’m winning in and that can make me feel better. Or maybe I don’t
have kids, but I can say, I’m going to focus on, like, finding a partner and
having kids, and not on just having as much sex as possible. Because you might
see that as a sort of misleading losers— temporary losers game, right? And
this is a reaction to, sort of, seeing the world as having decadence then,
right? Decadence that— The word “decadence,” as ascribed to a society, would
be a stance where you say, In this society they’ve chosen a set of status
rankings and priorities that do not have good long-term consequences.
And
they won’t be rewarded in the long term. They are, you know, a local way in
which people are enjoying pleasure and, you know, status, while the society
loses, right? And you might think, Well, then I reject that more. I’m less
interested in winning that status game, and more moving to a different status
game that will win more in the long run. And even if I have less status with
respect to this local decadence game, I’m going to do better with another.
So
that’s— Whether or not this, you know, is compelling, you can see it’s
at least an example of thinking about the distal causes, and then integrating
them into my perception and framing of sexual desire. So you probably even
know, like, in the Roman era it was considered a weakness of a man not to be
able to constrain his desires sexually, in order to be a strong man who did
the thing that his society and family needed him to do. And it was not
considered high status to, you know, give in and have a lot of sex, right? And
so you can see then how the status of sex can vary.
Agnes:
But it sounded like you were saying there’s like multiple status games, and
then some of them are real, like the one with the kids. That’s, like, a real
one. Is there— Are there real ones and fake ones?
Robin:
Well from your point of view, if you are choosing among status games, you are
choosing which ones you recognize. So I think I’ve said this
before, but it’s worth repeating: Many people like to project themselves as
not caring what other people think. They see other people conforming and
trying to pay a lot of attention to what other people think. And they say to
themselves that that’s low. That’s not self-respecting. I don’t care what
other people think.
And that’s kind of wrong. Everybody cares what other
people think. But I think we have some degree of flexibility over who we care
what they think. So you know, whenever you do something, and you imagine
outside observers or critics, you’re imagining some group of observers. And
you do care a lot about what those observers think. But you can actually have
a bit of control in swapping in who the observers are that you’re focused on. Are they
your parents? Are they the students that, you know, you had as roommates in
college? Are they your immediate coworkers? Are they reporters who could call?
Who are these people you’re imagining in your head?
And different people
you can imagine will push your behavior in different directions. If you
primarily think about “My colleagues, what will they think?” Well then you
will choose, you know, different sets of behaviors to impress them than if you
primarily think “My family.”
Agnes:
It seems implausible to me that people have so much choice with respect to
that group, because that just seems like another way of baking in that they
don’t really care what other people think and that they’re in control of it.
Because if you think about it, if they had the choice, wouldn’t they just pick
the people who would most often approve of them? And yet people tend to pick
people who often disagree with them.
Robin:
Right. Yes. So I don’t think you have complete freedom. But I think you have
freedom to choose among, sort of, high-status people that you would, kind of,
respect. Who— So for example I tell myself, at least, that I’m
thinking of people like Einstein or Newton or something. Those are the people I’m
trying to impress, right? I want to put them in my mind and say, Would they be
impressed by this? And I’m sort of going for long-term fame or glory, right,
among the really best intellectuals. And if my immediate colleagues would look
down on it, but Einstein respected it, I can go, Yeah, to hell with them, I’m
going with Einstein! Right? Because Einstein is bigger and higher and a more
worthy audience to impress.
Agnes:
I think it might be a function of who you’re in the physical presence of, too.
There was this episode of This American Life, you know, this radio show where—
It was the very beginning, it started with being on a subway platform in
Manhattan and there was kind of a crazy woman who was going to each person on
the subway platform and saying to them, “You’re in.” To the next one she says,
“You’re out.” “You’re in.” “You’re out.” She’d just say this, right? And this
guy is reporting that as she’s getting closer and closer to him, he finds
himself thinking, I hope I’m in! [Laughs] Right?
Robin:
[Laughs] Right.
Agnes:
So like there’s a way in which it’s very hard, I think, not to care. But with
people who are physically around you—
Robin:
But we do choose who we’re around.
Agnes:
Okay.
Robin:
Right? You choose who’s around you and from whom you hear gossip about you. And who
you pay attention to. We do have a lot of influence over that. And so for
example you could go on Twitter and then hear a lot of people who hate you
there. And if you read those Twitter things a lot, then that could be a big
thing to you. And now your life is, How can I make those people on Twitter
like me better. If you just never go on Twitter, and never see those people
yell that they hate you, then maybe that— You don’t care what those people
think.
Agnes:
But you’re presupposing that I have like the free choice not to go on Twitter!
Robin:
Yes! Right!
Agnes:
And I can just decide not to do that. But the reason I do that is already that
stuff is baked into my psychology that makes me care what people think. Right?
So like, where in some sense I have this deep concern—I must, if I’m on
Twitter—
Robin:
Right.
Agnes:
—I have this deep concern for, like, what people in general think of me. And
this— Once I’m afforded an avenue of finding out about that, like, of course
I’m going to take it because I care.
Robin:
So here I’m sounding more like you about aspiration.
Agnes:
[Laughs]
Robin:
I might say, Look, your future choices are determined by several causal
channels. One of them is your current values as you currently conceive of them
and currently understand them as having been realized in various practical
choices. And another of the causal influences on these future values you will
have is a bunch of relatively low choices about where you live and who you go
to dinner with and what you read, and things like that.
And you might
assume that those choices won’t have much effect on your grand, high values,
but of course you’re wrong. You know, your future values as practiced and
realized will depend on a lot of relatively concrete choices you will make now
and soon about, again, who you read, who you listen to, who you talk to, who
you think of. And that’s part of how you can aspire to change your values: As
you discussed, you can just choose those contexts. And it’s just in a sense
much easier to choose those consciously than to choose to change your
high-level values and ethics.
Agnes:
I mean, I’m just surprised there isn’t a chapter in your book, or maybe in
your next book, that says something like, Look, we think we can make all these
little decisions where we choose to have— That, you know, little decisions
about who to read or what to eat, or whatever, that’s going to make these
changes in our values. But actually we have hidden motives for every one of
those decisions.
Robin:
Of course we do. But I mean, this is your aspiration story, right? You aspire
to be someone who loves music, you don’t love music now, or classical music,
say.
Agnes:
Right.
Robin:
And so you choose to go attend a classical music class.
Agnes:
Right.
Robin:
Now of course, yes, there are hidden motives to why you chose the classical
music class. But nevertheless, the net effect of choosing the classical music
class and exposing yourself to it in certain ways is that your appreciation
for classical music increases.
Agnes:
So let’s just shift for a minute to hypocrisy, because I want to talk about
actual hypocrisy.
Robin:
Okay.
Agnes:
So you know, I don’t think that the fact that I have sexual desires but don’t
represent them to myself as desires for children is what would typically be
called hypocrisy, right? Typically I would think that hypocrisy involves sort
of something like an insincere expression of commitment to an ideal. Or maybe
it doesn’t need to be insincere. Maybe it can be something like, the person
might even believe that they’re committed to it, but it’s— There’s information
readily available to them that they’re not, if they were willing to try to
collect it, but they’re sort of self-deceptively not collecting it. Something
like that.
Robin:
I would think it’s just more about being criticizable. So ways in which you’re
unaware of your motives that aren’t so open to criticism, in terms of norm
violation, we wouldn’t call hypocrisy. But if you are self-deceived in a way
that protects you from criticism of norm violation, then we would tend to call
that more hypocrisy, because we’re going to be open to the idea that you’re
doing this on purpose in order to avoid the norm-violation accusations.
Agnes:
Hmm.
Robin:
And that would be sort of the outrage force. The idea that, No, there are
violations here and you’re trying to hide them, and you know, that’s not okay.
Agnes:
Right. Though, like, you could try to protect yourself from having other
people recognize that you’re violating norms without being hypocritical. So…
Robin:
And there I’m not so sure I care what the word hypocrisy means.
Agnes:
I see.
Robin:
I think it’s just more simple to talk about these, you know, things you’re
aware of, and not— And like, what the causal processes might be. So for some
things—and for a great many things, actually—the reason you’re not aware of
your motives is in part to protect yourself from accusations, and to make
yourself look better and not look worse. And I’m willing to call that
hypocrisy, but I don’t much care what the word “hypocrisy” means. It’s an
interesting insight into many of our ignorances: that they are there on
purpose, to protect us.
Agnes:
Right, but like I guess I think that that whole system can be relatively close
to, or relatively far from, consciousness.
Robin:
Yes.
Agnes:
And it’s when it’s relatively close, I think, that we’re inclined to use the
word hypocrisy.
Robin:
Exactly. And so that’s, you know, been a criticism of our book talking about
distal causes. Because people, you know, say, I only want to criticize people
who have more conscious strategies like this, and not unconscious ones. And
it’s related in many legal contexts: We impose higher penalties on people who do
crimes through conscious planning than for unconscious reasons. Although I’m
not actually quite sure how much I care, right? If you murder because of an
unconscious hatred versus a conscious hatred, do I really want to discourage
you less from unconscious murder or unconscious rape? I mean if I think I just
want to discourage it then I want to give you good incentives. And then, you
know, whether your conscious or unconscious notices the incentives is less
important to me—I don’t want it to happen.
Agnes:
So when I said closer to consciousness, I meant that. That is,
there’s stuff that, like, at least Freud would say is, like, pre-conscious, right?
Robin:
Right.
Agnes:
So such that were you, sort of, in some sense forced to reflect upon certain
facts, you would recognize it. Versus there’s stuff, like, the fact that my
sexual desires are in some sense evolved for the sake of reproduction, let’s
say, where that’s just not amenable— to me anyway, does not seem amenable to
consciousness. It doesn’t seem like I can reflect on that, and then somehow
come to see it as— and come to be aiming at reproduction. Right? So that’s
just what I mean is, like, is there a prospect for a conscious recognition of
the thing or not?
Robin:
So let me give you an example—and you can tell me whether it’s just off target
for what we’re talking about here—but you know, think of, you know, churches
with priests who are molesting kids.
Agnes:
Right.
Robin:
Right? And so the accusation has been there that not only did churches
tolerate priests molesting kids, but that churches should have known: there
were signs they should have been paying attention to.
Agnes:
Right.
Robin:
That would reflect that— And then you might think, well, they were trying to
sort of be trusting, good people, giving their priests the benefit of the
doubt. Or they could be just, you know, trying to avoid criticism. Right? And
both of those interpretations could be applied, and depending on how inclined
we are to forgive or give them the benefit of the doubt, we could interpret it
as hypocrisy or excess trust.
Agnes:
Yes.
Robin:
Or defensive protection.
Agnes:
Right.
Robin:
Right? But it’s still the same thing, roughly. That is, how much effort did
they put in, and should they have put in? Which cues should they have followed
up on? In order to— And you might say, “Well, you know, the key problem is
they set up a structure, an institutional structure, that did not give people
an incentive to look in very far.” And so it was less the blameworthy
individuals not looking in, and more the choice of the structure that
produced the thing.
And so another example here is police misconduct. So
police have misconduct, and they’re— In most police departments there’s a
Department of Internal Affairs whose job it is to look into accusations of
misconduct, and even to do spontaneous investigations. But we put the Internal
Affairs department under the police chief, who’s exactly the person who might
want to cover up the misconduct because it would embarrass his leadership of
the department. And that’s kind of suspicious. Right? Are we just bad at
designing the police departments? Or are we actually trying to let police do
misconduct, because we don’t actually mind misconduct as long as it isn’t
noticed? Or are we just excessively trusting, because we’re trying to be good
people and trust our police because they’re on our team? Right?
You can
see there’s a bunch of different ways of framing that choice to make it
unlikely that bad things will be found. Just like the Catholic Church might
have done, by who they assigned to investigate accusations of priest
misconduct. And I mean there’s several interpretations here but the more
cynical interpretation is that people didn’t want to find out these things,
and therefore chose structures that wouldn’t let them find out, in part,
because that would look bad.
Agnes:
Right, so I’m interested in this fact that with any given norm there’s often
going to be a kind of hypocritical version of the same norm that, were you
following the hypocritical version, you would behave in a lot of the same
ways, right? So say we take like two close friends who like agree not to
betray each other, you know? And they like, you know, one thing they could
have agreed to is I won’t tell your secrets, and you won’t tell my secrets.
And another way to read the agreement is, I’ll make sure that if I ever tell
your secrets, you don’t find out that it was me who told them, and you’ll make
sure that if you ever tell my secrets, I don’t find out that it was you who
told them. And you know, if you and I had the second arrangement, we’d
probably put it to ourselves in the first way. Right? We’d say we’re not going
to tell each other our secrets, right?
Robin:
And that’s in part because if anybody were to ever hear of this agreement or
see the text written down, we will both look better under the first
version than the second one.
Agnes:
Not only that, but we’re also both more likely to satisfy the agreement,
right? If other people see the first version rather than the second one,
right? And like, you know, you could apply this basic framework to all sorts
of agreements: To marital agreements not to engage in infidelity, etc. Like,
and the question is like, that we have all these contracts with people and
these norms, right, that we agree— about how we agree to behave. But there are
two versions of those agreements that are floating there all the time. The
“I’ll really do this,” and “I’ll make it appear as though I’m doing this
version of the agreement.” And you know, you might want to know which one am I
actually under? Like, which rule am I actually in?
Robin:
And I think we can find that out if we dig farther. So in our book The
Elephant in the Brain we talk about hidden motives. And of course a common
response is like, How do you know these are the hidden motives? Well, what if
it’s just the other one? So a basic principle of hidden motives is that, to a
shallow, you know, cursory analysis, you have to not be able to tell. So when
a child says, “The dog ate my homework,” we might be willing to believe that
because that happens sometimes. If a child said, “The dragon ate my homework,”
we would just not believe it. It wouldn’t be at all plausible.
The reason
why the excuse works is because sometimes it happens, which means that in any
one case, we can’t actually tell. What we’re going to be able to tell is by
looking at a distribution of things, like how often does the dog supposedly
eat the homework? And how many dogs are there out there? And do people with
more dogs have more homework eaten, and things like that, right? We would have
to dig farther into a wider data set which then would tell us about a wider
range of people, but maybe not so much about that case.
And that’s just
going to be the general fact about all this kind of hypocrisy or hidden
motives: They only work to be hidden if like the motive you pretend to have is
sometimes a valid motive and does sometimes apply. That’s what lets it be an
excuse: That in any one case it could plausibly be true. And so that means
you’re not going to be able to look at one case and tell very easily. You’ll
have to look at these larger patterns of behavior. So we’re going to have to
look for other data to distinguish between these theories.
So you know,
in the case of the two different norms, I would say—that is to say—Okay, what
further data do we have? For example, like what kind of monitoring systems do
we have set up? Or what kind of punishments do we have arranged to be able to
apply, or who has what discretion with respect to these other things? Those
are all data with respect to these two choices. Another thing is our knowledge
of evolutionary psychology and cultural evolution in general, which would lead
us to have prior expectations about, Well, what sort of contracts would we
expect people to have?
Agnes:
But like, say we have the betrayal thing, right? And I’m sort of monitoring
you to see whether you are betraying me but I’m not monitoring you that much,
right?
Robin:
Right.
Agnes:
And then we want to say, Okay, is that a sign that we’re in the hypocritical
version of the rule where I just make sure that you don’t find— We just make
sure that the other person doesn’t find out.
Robin:
Right.
Agnes:
It’s like well maybe, you know, it’s a sign that we’re not, because you’re not
trying that hard not to find out. On the other hand it could be a sign that
we’re under the real rule, and you just—and I just trust you, right? But me, I
can’t remember…
Robin:
But we— I mean one— We could distinguish like channels of information you
might get which would be private, or channels which would be correlated with
other people finding out. So if, for example, what— the agreement is really
not to let other people find out about violations, then we will pay more
attention to channels which would be correlated with other people finding out.
And channels that would be just us finding out privately, we would not be
trying very hard for. And so that could be a way to distinguish those two
theories.
Agnes:
Right. So but suppose that I’m asking myself, like, with respect to the
betrayal, right? I want to know whether I can betray you, right? In one of
these two norms I can betray you as long as you never find out, right? I’m
permitted to betray you. I could still be following the norm if I betray you
as long as you don’t find out. And then the other norm, I can’t betray you,
right? And I want to know, Can I betray you or not? Which norm am I
under?
Now I can’t look at my [inaudible]— Of course if what I wanted to
know is which of the two does my behavior conform to, absolutely I could
figure out by whether or not I’m trying to secretly betray you. If I were
trying to secretly betray you, that would be a sign that I thought I was under
the one norm rather than the other. But what if— That’s an expensive way to
find that out, right? And it requires me to pre— have a sense of like, right,
which one I’m under? And so how can I find out if I wanted to follow whichever
norm I was under? Is there any way for me to figure out which norm I’m under?
Robin:
So this is, interestingly, relatively closely related to the question of
whether there really is morality as opposed to custom.
Agnes:
Mm-hm.
Robin:
So one story is that, you know, every society has rules and norms. And they’re
there to produce coordinations of behavior, and that your main motivation is
to learn the norms of your society and conform to them well enough to not be
punished for violating them. And even to project to people around you that
you are the sort of person who embraces the norms sincerely. And often the
best way to do that is to just be sincere, because you leak in a lot of ways
and it’s hard to lie.
So that’s one theory about what morality is:
that even though you will talk as if, I think every society should
have the norms of my society, with a certain universality, and everybody in
every society is doing that—in fact you don’t really care about the
universality that much. What you mainly care about is reassuring the people around
you that you really do embrace the norms of their society, and that you will
follow them, you know, wholeheartedly. But of course, what you really want to
just do is get them to believe that that’s true, and often the best way to do
it is to make it true. But you don’t actually care about fundamental, you
know, absolute morality, or to actually be moral, beyond wanting to convince
people around you that you will be moral, you know, in their eyes, and that
they should see you as such, right?
The other theory, of course, is the
one that this person is pretending to hold: That there is— We all believe there is
some absolute morality and that’s independent of cultures, and that other
cultures have it wrong and we have it right, and that we care directly about
being moral. And that we only incidentally care that other people think that
we’re moral, and that we maybe just want to know what the moral truth is and
to follow it, right? So these two theories diverge in exactly the
question you’re asking. Right? Under the first theory, people don’t actually
care very much about the answer to the question you gave, right? When nobody
will see the difference, what, you know— What is the real morality here that I
should be following? is not a very relevant question, right? And so, the
degree to which people are anxious about this question you’re asking is
somewhat of a, you know, cue or indication about which of these sorts of, at
least, stances toward morality people, sort of, at heart, take.
Agnes:
And like suppose that like if people are sort of wealthier and more
comfortable and better off, they tend more to start to wonder whether this
morality is the real morality. That is, like, they tend to like, you know,
engage in inquiries that— “I wonder about this?” and raise skeptical worries
and stuff.
Robin:
So I mean, this is somewhat related to, say, anonymous charitable giving.
Right? So one theory says that people want to give charitably to help people.
And another theory says that they want to be seen as being helpful. And so
often a claimed distinguishing feature between these two theories is whether
they give anonymously: As on the face of it anonymous giving doesn’t get you
the social credit, it just helps. And so you might think therefore, people who
give anonymously are showing themselves, at least, that they really care.
Now,
what we actually see is not only do very few people actually give anonymously,
but the ones who do usually tell a lot of their associates that it’s
anonymous. Okay? So there’s actually not very much anonymous giving that isn’t
actually, you know, gossiped about and told to people around them.
But even
then you might think, okay, every time you try to give anonymously— Say you
actually gave anonymously and didn’t tell a single soul, right? But this
process of giving anonymously has error. It doesn’t always preserve anonymity:
Like sometimes somebody in the process, like, sees something and gossips and
tells other people, right? Now imagine you try to be anonymous in a way that
only 1% of the time would it ever get out. And now somebody sees this 1% of
the time and tells other people: “Agnes gave a gift that only had a 1% chance
of revealing her credit! She is such a good person.”
Agnes:
Right.
Robin:
And now they give you 100 times as much social credit…
Agnes:
Right.
Robin:
…for this 1% thing, which meant that on average you were doing fine getting
social credit by giving in this very channel which had a very rare chance of
being discovered.
Agnes:
Right.
Robin:
Which suggests that it’s actually really hard to give anonymously.
Agnes:
Right.
Robin:
But it suggests an analogy to the other case, right? When you’re in the moment
of asking yourself not What do I want other people to think about my morality?
but What do I think about the morality, and what do I think is morally
correct? If you could do that thinking entirely alone, with not anybody ever
hearing about it, then you could more assure yourself that you were actually
the person who cared about morality.
But if sometimes this leaks
out—sometimes you talk to a few people about this question—then we can all
give you social credit: Well, gee, Agnes is such a good person that she is
actually agonizing over whether it’s actually morally good to do this, as
opposed to whether people around her will see her as morally good, right?
And we can have the same 1% thing, right?
Say you did an analysis, you
know, all by yourself and you wrote ten pages of notes to yourself that you
never showed to anybody. And ten years later somebody else comes across these
ten pages of notes, and they say, Wow, look at this: Agnes agonized over ten
pages about this moral question she never told anybody about. Agnes must be a
really moral person. And now ten years later you get this big boost in your
reputation from somebody finding out these ten pages.
Agnes:
Right. I mean, I’m not that— I’m less inclined to think that there’s so much
value to the pure case, right? Because I think you might well, for it— So it’s
perfectly reasonable that I would want to not betray you. And I'd also want
you to know that I wasn’t betraying you, right?
Robin:
Right. And they would come together as a package.
Agnes:
I wouldn’t, you know, I wouldn’t like the situation in which I’m not betraying
you but you believe that I’m betraying you. That’s not—that wouldn’t be my #1
choice of situation, right?
Robin:
Right.
Agnes:
So you know, if I give to charity, I don’t think there’s anything unreasonable
in wanting that to be known and even wanting to be celebrated for it, if you
did a good thing.
Robin:
Right. But if people then think, She only gave to charity because of the
praise she would get. And you don’t want that to be believed about you.
Agnes:
Right. I mean, the, you know, wanting credit has always— There’s a paradoxical
element there, right? Because part of what one often wants credit for is a
kind of humility that then—
Robin:
About not wanting credit!
Agnes:
Exactly, right? Let me— Can I go back to a question about, you mentioned
earlier that, you know, the stuff in your book, like— There might be some
special group of people who are specially committed to not having hypocritical
or like naively false views of, you know, human beings. Where for those people
the avoidance of hypocrisy, at least in theorizing about human beings, if not
in their own lives, would be especially valuable. And the thing I wonder is,
do you pick out that group of people just by way of looking at who does in
fact avoid this form of hypocrisy? Or do you think there’s an independent way
to pick out those people, and then we can get purchase on them and say, Look,
you’re the intellectuals. Therefore we expect non-hypocrisy from you.
Robin:
I don’t think I have a strong opinion about the path that would lead to an
endpoint, but the endpoint has multiple things in it, right? So that the
endpoint would have both a community of specialists and them being honest
enough about the topic, right? So say for medicine, it’s fine if most doctors
and most patients have the usual naive belief about their motives with respect
to medicine, and maybe even fine if most senators do or something. But if
there are health policy specialists whose job it is to recommend changes in
health policy, and their job is to first, then, understand medicine, and our
social behavior there, it seems like those are the people for whom actually
understanding would be the most valuable. Because then maybe they could find
variations that would be— give us the things we want at a lower cost, because
they know what’s actually going on.
So those are the people you would
want to be more honest. Now, that doesn’t make them more honest. It would have
to be an independent process that tried to convince those people to be more
honest because they might, like everybody else, want to indulge in the usual
comforting beliefs. But you know, but there is this concept of an intellectual
specialist who is trying to be honest about their particular area, and not
necessarily about everything else in the world.
So we all might
want to believe that, you know, there’s some grand plan to cosmology, but
cosmologists feel like they need to be kind of more honest about what grand
plan do they see. And if it looks pretty arbitrary they feel like they’ve got
to tell us. And that’s true of a wide range of areas, right? Where we, you
know, if most people think that say a minimum wage would be a good idea, and
then there are people who specialize in minimum wages, those people feel they
have this norm to actually figure out if a minimum wage is a good idea. And
then to tell us about that, or at least to recommend policies that will be
based on the correct understanding of that situation.
So this is just a
general intellectual norm I think that intellectuals share, which is we have
an extra responsibility to be more straight-shooting and straight-looking, at least
among our colleagues, about the thing that is our specialty.
Agnes:
But there’s different ways you could interpret them sort of actually figuring
out how to, you know, give us what we want, which is like you can imagine the
health policy analysts reading your book and being like, Okay, it turns out
what people want is a lot of caring signals.
Robin:
Right.
Agnes:
And they want to spend a lot of money on sending these caring signals. So we
need to reorient our health care system to send even more of these caring
signals. And there’s a lot of this medicine that we’re doing that nobody’s
even noticing that it’s benefiting people—we can cut that out, because people
don’t even care. We need lots more placebos and—
Robin:
Okay.
Agnes:
And like why wouldn’t that be the direction? That is, I see there as being a
lot of value in the health policy analysts having a kind of naive and
idealistic attachment to health as the goal of health care, which they could
lose by reading your book. And they could think, Actually people don’t care
about health. What they care about is the appearance of caring about one
another. Right? So I— It doesn’t seem clear to me—
Robin:
So this is gonna have to— So we imagine some policy specialists, say, about
medicine.
Agnes:
Yeah.
Robin:
And they acquire a more correct understanding of the actual social dynamics of
medicine.
Agnes:
Right.
Robin:
And now they have some agenda, like, something they want in the world.
Agnes:
Where do they get that?
Robin:
That’s what I was about to say.
Agnes:
Okay.
Robin:
As we can counterfactually imagine different versions of them, right?
Agnes:
Okay.
Robin:
For some of them, say they wanted to install a new fascist regime in America…
Agnes:
The health-policy analysts?
Robin:
I’m just making up a crazy hypothetical here, right?
Agnes:
Okay.
Robin:
But it’s going to be extreme, right? If we imagine the health policy analysts
on average wanted to create a new fascist regime in our country, then we might
imagine they could figure out how to use our, say, emotional sensitivity and,
you know, anxiety about medicine to trick us into supporting their new fascist
regime. That might be a thing health policy analysts would do, right? In which
case you and I might not be too thrilled with them learning better how to
achieve their ends, right?
So in general any group of people, if you
really just oppose their ends and are not at all sympathetic with what they
would do with resources or insight, then you aren’t very thrilled with them
getting more resources or insight, right? So I think in general to imagine,
like, groups of policy specialists having better information being good, you
have to roughly guess that they have aligned enough interests with you and the
rest of us that they would use those— that knowledge and resources to make us
all better off, right?
Agnes:
Well according to your theory our interest is largely in projecting caring. So
they’re going to try to align with us…
Robin:
Not necessarily. So that is, for example, what if we just care about rank? Say,
with education.
Agnes:
Yeah.
Robin:
What we’re mainly trying to do is get a higher rank score in education, right?
We want to be at the 90th percentile rather than the 85th percentile of
education. We really just mainly care about that. Okay. Well now, like,
subsidizing education for everyone doesn’t help here, right? If you just put
more money into education, the distribution of rank scores doesn’t change.
Agnes:
Unless we want to be better-ranked relative to other countries or something.
Robin:
Right— For example, right, exactly. But like the global budget for education
wouldn’t help.
Agnes:
Right.
Robin:
And so you know, we can see, like, the details of our preferences about
showing that we care, for example, would matter if we just— If there’s a way
we could all show we care more that’s different than if we’re looking at
relative showing that we care, that we care more than other people do.
Agnes:
Right. But so you could imagine the analyst who’s saying, Look,
what people really want in healthcare is opportunities to show they care, what
they really want in education is ranking, so we need to design our healthcare
and education system to create lots of opportunities for ranking—showing you
care and ranking. And we can forget about the other goals, like learning stuff
and being healthy. Right?
So I guess the thing is, like, you’re treating
the agenda or the goals of the health policy analyst or the education policy
person as being almost like something that comes in from the outside. Like
they may happen to— Maybe they have a fascist agenda, or maybe they happen to
have the same goals as us. But it seems to me that a lot of what goes in— Like
having a goal is not a trivial thing, and it doesn’t come from nowhere, right?
And you know, a goal and an ideal are not so different. An ideal is a kind of
goal. It’s a way of keeping your goal in view, right?
And so like
there’s— It seems to me there’s an argument that the health policy analyst is
the last person who should read your book! Because they’re— You’re going to
corrupt their ability to have an idealistic attachment to the goal of health,
which is what you want, right?
Robin:
Well I don’t know that it is what I want.
Agnes:
You don’t think we should— Our healthcare system should be devoted more to
health than it is?
Robin:
Not necessarily, no. Maybe we should just cut back on the budget of the whole
thing and just spend a lot less.
Agnes:
Well, that might be a way of being more devoted to health, right?
Robin:
It might be, but then that would still be the fundamental goal. And the
devotion to health would be a side effect of the more fundamental goal. But I
think the larger thing to say here is that your interest— My interest in
joining intellectual communities, and in helping intellectual communities and
promoting them, and maybe promoting their having good norms, is dependent
on my belief about the actual nature of the intellectual world today and the
actual consequences of it learning things. So, right.
So I would roughly—
I mean first of all I would believe that, you know, we tend to have, you know,
literatures where we just talk about what the situation is. And then we have
other literatures where we recommend policies. And that policy-recommendation
literature I see as competitive enough that even if any one person in that
literature is not very well motivated, if there’s a clear policy win then
other people could point that out, and it might then win out in that policy
community. So I could then have a mild faith in the, you know, virtues of
intellectual competition to produce, you know, useful policies from better
facts.
Now part of that faith might be because I know economists exist,
and I know the actual, you know, values that economists at least state among
themselves and somewhat enforce that they are followed. And I tend to embrace
those standards. And a lot of these policy analysts are trained as economists.
So that might give me more confidence than you would have if you weren’t an
economist, or didn’t like the economist standard.
But that’s also, you
know, somewhat separate from the idea that there’s a separate community where
people just compete to, like, give a more realistic description of reality
without necessarily translating that into policy recommendations. And so you
know in some sense if I contribute to that first intellectual community, then
I’m not endorsing particular policies or even particular communities. I’m just
basically giving more information to all intellectuals who have whatever
inclination to go search in different policy spaces to make
recommendations.
So that’s sort of the generic confidence or faith in
progress and intellectual community: That overall, on average, you know, more
insight is good for the world.
Agnes:
So is it that you think that the people in intellectual communities tend to
have especially good agendas? Or is it that you agree with me that everyone
desires the good, and everyone basically has a good agenda?
Robin:
I don’t know that that matters here in this context.
Agnes:
Sure it does.
Robin:
So basically, you know, my concept of the good is more like, you know,
Pareto-dominant deals, like as— If I think, like, if everybody could get more
of what they want, that would be good. And then I imagine a competitive
process where lots of different people are pushing for what they want. Then,
you know, somewhat robustly that tends to get everybody more of what they
want.
And that can also be described in a policy-competition world where
different people, say, have different agendas, and they make policy proposals
that favor themselves and their agendas, but they are competing with other
people offering, you know, different proposals based on other agendas. Then
again the kind of policy proposals that might win out would tend to give many
people more of what they want, which according to my standard is good. And so
there’s a connection between thinking of a competitive process of people
making different proposals, producing something in the middle that gives
everybody some degree of something, and that being the kind of thing I think
is good.
Agnes:
So then, I mean, when I initially asked you about the health-policy person who
you thought would be, you know, better off reading your book, like, you said,
Well, it’s gonna depend on their agenda, right? Because they might have the
bad fascist agenda. But now it turns out it doesn’t depend on their agenda.
Because as long as everybody reads the book…
Robin:
Well, it might— If any one group was the only one to learn it, and they were
the only one we were listening to about policy proposals, then it would be
more of an issue, right? So there might— You might imagine being in a country
where there was just one policy group, and they had some peculiar agenda, then
you might need to worry—and if you just cared about that country, you might need
to worry about them. But the more it’s a large world of competing policy
proposals and analysts, then the more you might hope that competition would
mean that any one person’s motives wouldn’t matter as much.
Agnes:
And why wouldn’t you think that what would happen is that people would group
together along the lines of their contingently shared interests? And then the
largest group would get their agenda to be satisfied at the expense of the
smaller group?
Robin:
So, there’s a large literature on negotiation—game-theoretic literature on
negotiating in a wide range of negotiating institutions—and relatively
robustly, you know, across those institutions, and with a wide range of
different parties trying to push the negotiation in different directions, what
you tend to get is, you know, something toward the Pareto frontier of
everybody getting more of what they want, with some, of course, weighting of
who matters how much, depending on who has what negotiating threats and
leverage. But that tends to be somewhat random, and so it's hard to predict ex
ante who will have what threats and leverage, which gives people an incentive
to make deals that are sort of more safe and risk-averse relative to whatever
detailed outcome they might end up with. So this is,
you know, the nature of what we know about negotiation.
Agnes:
So I mean, you know, the picture that you’re giving me now—which is that, sort
of, if everyone just has more information and everyone just sort of pursues
what they take themselves to want, then overall, through all of those
interactions, people are going to be led in general to get more of what they
want—seems, at least on the face of it, to be in tension with your basic
picture that people have a whole bunch of hidden motives that they don’t know
they have. And when they pursue things, they're pursuing things different from
what they take themselves to be pursuing. And they systematically ignore
information and sort of, like, just are in some sense at a loss with respect
to satisfying their true desires.
Robin:
So think of two parties who want to feel like “the good people” who have some
sort of conflict, and then they choose, perhaps lawyers or some other
negotiators on their behalf—or maybe an author and a publisher, who choose an
agent on their behalf, right? Or imagine an actor or musician with an agent on
their behalf.
So
what we often have is individual people who have interests and want to
maintain their usual idealistic perception of themselves, and who pick a
somewhat cynical and directly financially motivated party to negotiate
on their behalf, who then knows a lot more about the details of their world in
those negotiations. And those agents tend to be more mercenary and tough in
the negotiations, and more just figuring out what will work and get them that.
And they allow the clients to be at a distance from all that, and to not
acknowledge or even be aware of all the murky, messy details.
And that’s
in some sense how our world works. So for example, you know, politicians are
in some degree like that: You know, talking to voters, they praise and flatter
the voters about their grand ideals and what they care about, but then they go
and they make backroom deals to get what they think the voters actually
want. And this is what agents do. This is what lawyers do.
And so in some
sense I'm thinking of policy specialists as doing that as well. That is, I
want to imagine this world of experts on human behavior, and policy analysts
who read the experts, and they understand better what’s actually going on and
what people actually want. And then they are making proposals on the basis of
trying to get whoever they, you know, feel sympathetic toward to get what they
want, and maybe the rest of us never really have to know.
Agnes:
So it seems to me though, that I mean, when I see some of these people, you
know—and again, much of my knowledge comes from TV, so it may not be right—but
agents and lawyers and whatever, they don’t seem to be hugely benefited by
doing, like, scientific studies of human motivation and whatever. They just
sort of seem to pick up on stuff and to be sort of— They choose those jobs
because they’re kind of intuitive, maybe, at doing these things?
And so
like is there a reason to think that that isn’t the way it should go for all
these other people? Like for anyone in general who is—insofar as manipulation
is a really big part of their job, kind of for the reasons that you describe
in your book, which is that if you were manipulating people, you wouldn’t want
to know you were doing it—you’d want to have the elephant hide that from you,
right? And so why not think that people are actually going to be better at
doing this manipulative stuff if they're not too aware, either? Maybe nobody
should know.
Robin:
In some sense, you know, there are three major strategies we could take here.
One is we could decide that, you know, you can’t handle the truth. Tell that
to the world. You know, let’s not go find out the truth about these things
because nobody can handle it and it doesn’t do the world any good. Let’s just
not look at this stuff. That’s one potential stance you could have with
respect to all this stuff: Let's just not find out.
Another stance you could
have is, Let’s find out and tell everybody, or anybody who wants to know. So
that’s relatively simple: You publish the book, you let anybody read it, you
write it in simple language, you feel free to give talks to anybody who
invites you, etc. And you just decide to tell everybody. And you hope, or have
an expectation that on average it'll be a good idea. But you're not that sure about
any one case.
The third kind of strategy is, you have to make
context-dependent decisions about who to tell and who not. And tell them in a
way that they won't tell the other people you didn't want to hear it. You'd have
to be esoteric, as many of the ancient intellectuals were. So that would be a
lot more work, right? You couldn’t just publish a book, you might have to
publish papers in different journals for different disciplines using different
languages, or maybe send, you know, private essays to different people. And
then make them swear not to tell other people or, you know, coordinate on
different groups who figure out who— You’d have to figure out who is the group
that is going to benefit the world by hearing this, and who is not.
And
you know, that’s a lot of work to figure out who you should tell and who you
should not. And that’s also a lot of work to figure out how to only tell some
people and not others. And that third strategy just seems crazy-hard. And I’m
lazy, so I didn’t take it. And I didn’t want to take the first strategy
because it looks like the second one is better than the first. You know,
that’s my positive guess.
Agnes:
But you know, it makes sense to me that you would say the things you’ve said,
because you’re interested in this topic. I’m just— I’m asking about, as you
put it, how this information might be actionable. And you were first saying,
well, it’s— You had a phrase for this—the action upshot, or some phrase—that
you're always wanting me to say: What's gonna come of this? Anyway, you know,
maybe for ordinary people it can't change how they live their lives, but it's
relevant for intellectuals.
And I’m like, Okay, but for
the intellectuals, is it going to change how they live their lives or do
anything? And you’re like, Well, you know, maybe…
Robin:
It’s going to change how they recommend policy.
Agnes:
“…insofar as how they recommend policy.” And then I say, Okay, how is it going
to do that? And you gave me the analogy of the agent and the lawyer, where
the agent is, like, taking on themselves the kind of
deceptive activity and allowing the client to remain naive. And then I say,
But in those cases, I actually think the argument of your book reapplies,
which is that it’s to the advantage of people who do that to be a little bit
unaware of how the sausage is being made inside their own brains, right? Where
they’re able to manipulate, and whatever…
Managers, right? Managers are
quite manipulative, I think often, but they wouldn’t like to think of
themselves as being so manipulative. And so part of what makes them such good
managers is that they’re even more self-deceived about how manipulative they
are. And so I’m not sure if your book is useful for them either. So it’s an
interesting—
Robin:
I think, I mean— Actual lawyers do actually know better how trials go. They
know how legal negotiations go. They are more honest, in fact, about actual
law and legal process. That seems just true to me. And certainly book agents
do actually know more about how negotiations with book publishers will go and
what will be sellable and what won’t, and, you know, what they say they want
versus what they really want, and perhaps what clients also say they want versus
what they really want.
And they are more aware of these things. And they
do take those things into account more in their negotiations on behalf of
either publishers or their clients, the authors. That's— I think that's just true about
the world. Yes, agents who specialize in a particular area are more honest
with themselves, at least, about how that works.
Agnes:
You might also think, though, that those people have like a greater need for
preserving at least some idealism about the process. So that they have some
kind of a guide in their own minds as to what direction they’re heading in,
right? As to like, so that it doesn’t, you know, get fully corrupted by…
Robin:
I think pretty much all professionals, at least in their own mind, draw some
lines, and they say, I will not cross these lines. And they kind of need to do
that as a general matter of self-respect. They do not want to see themselves
as someone who would do anything for a client or for money per se, right?
Agnes:
Mm-hm.
Robin:
Nevertheless, they will go farther than their clients think they would. That
is, you know, the lines they draw for themselves are not quite as
constraining as clients might wish, if they thought about it. They are more
realistic about such things.
Agnes:
We should stop there.
Robin:
All right.