Prediction Markets
Robin:
Good morning Agnes.
Agnes:
Hi Robin.
Robin:
Today, boards of directors pick CEOs. And in particular, they fire CEOs. Many
people say that they don’t have a strong enough incentive. Often the board members were recommended by the CEO, and hence the boards are reluctant to fire the CEO and don’t do it enough. And there’s some
evidence suggesting that.
So I have a proposal for how to fire CEOs in a
different way, and it’s an example of a larger kind of system that I’d like to
apply elsewhere. So I’m just going to quickly describe this, and then we will
go into discussing it.
So the key idea is that we’re going to have new
kinds of stock markets. So an ordinary stock market trades cash for stock, and
the current price is an average over all the different future scenarios of how
much revenue the company will generate for investors. And now in these new
stock markets they’re going to trade cash for stock, but these trades are
going to be conditional, i.e., they’re going to be called off if the condition
isn’t met.
And we’re going to have two markets for each company: One
market will be called off if the CEO leaves office by the end of this quarter,
and the other market will be called off if the CEO stays in office till the
end of the quarter. So now when investors trade in these markets, instead of
in their minds averaging over all of the scenarios the company could be
involved in and picking sort of the average revenue they expect, they only
average over the scenarios consistent with the condition, i.e., the CEO stays
in office or the CEO leaves.
So the prices will be estimates of the value of the company given that we keep
the CEO and given that we dump the CEO. And if those prices are different,
that’s the consensus of the speculators about the value that the CEO is giving
the company relative to the other CEOs they might put in place instead.
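A minimal sketch, in Python, of the comparison being described here, assuming two hypothetical conditional prices; the function names, threshold, and numbers are illustrative only, not part of any actual market:

```python
# Sketch of the conditional-market comparison described above.
# All names and numbers are hypothetical; real prices would come from two
# live markets whose trades are called off when their condition fails.

def ceo_dump_premium(price_if_stays: float, price_if_leaves: float) -> float:
    """Difference between the stock price conditional on the CEO leaving
    this quarter and the price conditional on the CEO staying."""
    return price_if_leaves - price_if_stays

def recommend(price_if_stays: float, price_if_leaves: float) -> str:
    """Illustrative advisory rule: suggest a change only if speculators
    price the company higher without the current CEO."""
    premium = ceo_dump_premium(price_if_stays, price_if_leaves)
    return "dump" if premium > 0 else "keep"

# Hypothetical example: $50/share if the CEO stays, $53/share if they leave.
print(recommend(50.0, 53.0))   # -> "dump"
print(recommend(50.0, 49.0))   # -> "keep"
```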
And
so the recommendation is that the board of directors look at this price and,
you know, defer to it substantially in deciding whether to keep the CEO. And
boards of directors have a fiduciary responsibility to make their choice there
in the interest of the investors, and so if we could create an experiment
where we did these sort of markets for, say, the whole Fortune 500, and we
could track these markets, let’s say, over five years, then we can compare the
pool of companies that follow this market advice to the pool that don’t.
If
the first pool has higher returns, that would create a presumption that boards
of directors should follow this advice. And then we could actually change
corporate accountability in a few years with a relatively small experiment,
and therefore do a better job at picking or firing CEOs. So that’s my proposal
for how to fire CEOs.
Agnes:
Do boards of directors ever fire CEOs for moral reasons? Like because they’re
corrupt or violated some rules, or anything like that? Is that like, is that a
thing?
Robin:
Sure, that happens. No doubt they would usually justify it or rationalize it
in terms of their role by saying that such a CEO, when it became known that
they had violated these moral rules, had lost the confidence of customers or
investors or suppliers or something, and that the company would be therefore
hurt by keeping this person in the role. So there isn’t necessarily a conflict
between just enforcing moral rules and following their fiduciary
responsibility to act in the interest of their shareholders.
Agnes:
Mm hm. So companies and boards of directors do have this interest in making
more money, but they’re not using your system.
Robin:
Right.
Agnes:
Which could make them more money.
Robin:
Yes.
Agnes:
And so in part this is due to regulations that prevent them from using it, but
to some degree they could use it internally, right? So what’s your
sense of why people don’t want to make more money this way?
Robin:
Well, one generic story is that innovation is hard. And individual
organizations and people are reluctant to embrace a potential innovation if it
means they are risking their reputation on something untried and unproven. So
since this isn’t done anywhere now, if I ask any one group to try it, they might say, you know, We would be taking on a big risk here, and we could trash our reputation by endorsing this thing that we don’t know will work. They would be reluctant to do that. That would be
one reason.
A second reason might be that boards of directors have some
power, and they gain advantages from that power: They are not just acting in
the interest of the investors, they are trying to on the surface appear to be
acting in the interest of the investors to the extent that could be monitored
and checked, but for other— you know, otherwise they are— they form
coalitions, they have agendas… And sometimes that agenda means keeping a CEO
in power who isn’t necessarily the best for the company. And they don’t want
to give up that power.
So unless they’re forced to, they may, you know, preserve that. Similarly, we can imagine a system like this for, say, choosing the next professor to hire in an academic department. So if you
imagine a hiring committee, we could imagine markets like this which predicted
the number of publications of an academic, or whether we give them tenure, as
a function of who we hire.
And again, the hiring committee might not be
that eager for this system if the hiring committee is not just acting in the
interest of the department, but instead has an agenda, and, you know, they got
their people on the committee this time and they’re going to get more of their
kind of people in the department with their coalition. And they don’t want
this year to be the year that suddenly their power is taken away.
Agnes:
Right. So the thought is that the CEOs—sorry, the board people—aren’t
necessarily just trying to make as much money as possible. They have other
goals.
Robin:
Right. Because they usually actually don’t have that much stake in the
company. There’s a whole story about how there are privately held companies
where the owners have very large stakes and the people controlling the company
have large stakes, and there’s a literature suggesting those companies are
just much better-run. When the owners more directly pick the managers and the
owners are concentrated then there’s less of an agency problem. And therefore,
you know, they make better choices. But in most public companies, the boards of
directors have very little stake in the company, at least as a percentage of
the company.
Agnes:
So would those private companies then be a better audience for this proposal?
Robin:
Well, they actually have less of a problem to solve.
Agnes:
I mean, the person has to pick the manager, right? And so they can decide
whether or not to fire that person.
Robin:
Right. So the general idea here is that we have many institutions in society
that are supposed to be making choices on certain criteria. Sometimes they
actually have good incentives to make those choices on those criteria, and
sometimes they don’t have good incentives. And this is a solution for the
cases where they don’t have good incentives, to introduce better incentives to
make better choices.
So this mechanism is called prediction markets in general, or decision markets for this kind of application. But you know, good applications are cases where the existing process seems plausibly broken. And therefore you might want to pay the
overhead of some sort of extra process in order to overcome the broken
elements of the existing system.
So you know, if I want to know what you
had for breakfast, I don’t set up a prediction market about it. Why? Because I
think if I just ask you, you’ll tell me, and there’s no need for all this
overhead. But when I might want a prediction market is when you might lie to
me about what you had for breakfast. And so if I suspect that you might lie and that you have reasons to do so, then I might want to give you
stronger incentives to tell me the truth.
Agnes:
So I felt that a big part of it was that you were trying to aggregate a lot of
information.
Robin:
Yes.
Agnes:
And that maybe one person thinking by themselves isn’t going to be able to
easily access all that information otherwise. And so I don’t see why the owner
of the company who privately owns this company, and then he just has to decide
whether or not to keep or fire his manager, wouldn’t want this informational
source.
Robin:
Well that would just depend on the quality of this person’s network and
associates and connections to them. So again, it might be that a private
owner, you know, has a lot of contacts, they can ask a lot of people, they could ask them to look into things, and they have good relationships to these
people, in which case they can ask them and then get the responses and
aggregate that information. And maybe put them all in a room and have a
conversation, right?
Potentially, they could also have a problem where
they don’t think these people they would ask would give them good advice, and
then they might be more willing to consider this. And so then it’s a matter of
whether or not their other system is broken. But I might say I have more
confidence that the usual board of directors for a public company has poor
incentives.
Agnes:
So I mean, if you take the owner now of this private company, right? So
suppose he does have this network with associates and connections, he’s going
to have like some people who are going to be closer to him than other people,
and his associates might be chosen in part due to, like, their elite status, right? And you know, his taking of their advice is
going to partly be like a signal to them of their friendship. And like, so I
don’t see how you’re going to avoid this problem, this basic problem that the
ways in which people share information interpersonally are woven in with their
establishing and maintaining relationships of trust and affiliation.
Robin:
Right. So the general idea here is that this is a promising institution
which seems to have very wide-ranging potential applications, because the
problem it’s trying to solve plausibly is quite widespread. If I want to
propose that people pursue this idea, I want to choose application areas:
Initial tests that have the strongest indicators that there’s a problem there
to be solved, and that there would be a straightforward way to test the system
to see if it solves that problem.
So that’s why I might be picking this
public markets case, as a clearer, stronger case for an initial application.
I’m not denying that it might find use elsewhere, but if we all just aren’t
sure how useful this is and we need to do some initial tests, let’s go for the
cases that seem clearest.
Agnes:
Okay, the counter argument would be: You’re trying to propose it to the people
who have the strongest incentives to resist it, versus the person who has the
smallest incentive to resist it.
Robin:
Right, but the strongest incentive to resist it is correlated with having a
problem to be solved!
Agnes:
[Laughs] Right, fair enough. Let me ask a different question: Suppose that we
could somehow, you know, have this process run our legal system?
Robin:
Yes. We might indeed.
Agnes:
Where like, you know, we’re going to try to decide the guilt or innocence of a
person on the basis of a prediction market. Now it’s not totally clear how,
but so do you have thoughts about how— go ahead.
Robin:
I actually have a concrete proposal. So one of the main problems with our
existing legal system is that it’s expensive. And this means that if somebody harms you and you think, Should I sue?, you ask yourself, How big a harm was it? And, Is it worth bothering to sue, given how much I could potentially win, given the size of the harm?
So because,
you know, our system is so expensive, there’s just no way you will even
consider suing somebody in court for the gains you would get there unless
there are at least several thousand dollars worth of damages that you’re going
to ask for. Now sometimes a company might, like, sue a shoplifter not because they expect to get enough gains in that case, but because they’re trying to create a reputation that they just always prosecute shoplifters. And you might
therefore more often sue people even when you expect to lose because you hope
to make that reputation.
But still we have the standard problem of, you
know, not suing over small harms. So here is a way we could allow a much wider
range of small harms to be sued for. So let’s say you’re in a parking lot and
you come out to your car and you see the side of your car has been scratched, and you look at the car next door, and it looks like their car door hit your car’s side and scratched it. And this looks like $200— $100 worth of damages. And
you say to yourself, That’s not worth suing for. I guess I’ll just learn my
lesson and park farther away from the store or something, right?
Under my
new proposed system, what you would do is you would take out your phone and go
to the appropriate government website, and you would file a claim stating that
you thought on this date, this car next to you with this license plate, with
this picture associated with it, had scratched your car, and you were claiming
$100 worth of damage. Okay? Lots of other people do a similar thing.
And
now we’re going to take a thousand cases like this, all of which are claiming
$100 damage. And we’re going to go through a lottery where we pick one of them
as the winner and the other 999 as the losers, and we’re going to scale up the stakes according to those odds. So the 999 losers, they get a response
quickly saying, “Your claim is now worthless. You may never sue for this, you
may never complain about it, you have made your complaint, and you’ve lost.”
The other one case out of a thousand—that person now has a claim worth
$100,000. And now they have an incentive to go to court to sue because if they
win this lawsuit at court, they will win $100,000.
Now before we actually
go to court, what we’re going to do is give a notification to the person on the other side that they have been sued for $100, and of course give them a chance to say they’re guilty and settle or something. But in addition, we’re going to make sure they deposit $100 somewhere. And the defendants in all thousand cases will then deposit their $100, you see, and that produces the $100,000 ready to be won. And so we know that if you
win the case, you will actually get your $100,000.
And in addition, you
each have the option to put a few more dollars into the pot to pay for a
lawyer. So if you each put $20 in the pot in addition to the $100 that they put in, the winning case now has $20,000 to pay for a lawyer to actually go to court and do this trial.
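A rough sketch of the lottery step just described, reusing the numbers from the example (1,000 claims of $100 each, optional $20 lawyer contributions); the function name and data layout are hypothetical:

```python
import random

# Sketch of the claim-aggregation lottery described above. The data layout
# and function name are hypothetical; the dollar figures mirror the example.

def run_claim_lottery(claims, lawyer_contribution=20.0):
    """Pick one claim by lottery and scale its stakes by the pool size.
    Every claim in the pool is assumed to be for the same damage amount."""
    n = len(claims)
    damages = claims[0]["damages"]
    winner = random.choice(claims)                # 1-in-n chance for each claim
    return {
        "winner": winner,
        "award_at_stake": n * damages,            # defendants' $100 deposits pooled
        "lawyer_fund": n * lawyer_contribution,   # plaintiffs' optional $20 top-ups
        "extinguished_claims": n - 1,             # the other claims are dropped
    }

# Hypothetical pool: 1,000 parking-lot scratches claimed at $100 each.
pool = [{"plaintiff": f"P{i}", "defendant": f"D{i}", "damages": 100.0}
        for i in range(1000)]
result = run_claim_lottery(pool)
print(result["award_at_stake"], result["lawyer_fund"], result["extinguished_claims"])
# -> 100000.0 20000.0 999
```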
So now you can see, we could all have an incentive to sue when the damage is $100, instead of only when it’s thousands of dollars. This would be a way we could make the law apply to more things. And interestingly, when I talk about
this in law and economics class to my students, typically they are not very
excited by this because people don’t actually want the law to apply to more
things. People kind of like the idea that for small harms you use other sorts
of informal social mechanisms to deal with, you know, the problems, and that
we only use the law for big things.
And I think people also like the idea
that because they don’t have much money, the law is only ever something they
could use to sue other people; that other people won’t sue them. So they like
the idea: Law is for suing businesses and rich people, and it’s there for you
if you want to go sue them, but nobody’s gonna sue you because you don’t have
any money. This sort of system puts you back as a potential target of being
sued, because you might scratch somebody’s car or harm them in some other way.
And a lot of people don’t like this idea that the law could be used to sue
them. The law is about suing bad guys, not them.
Agnes:
So I think that if I try to sort of get into the mindset of why people are
resisting these proposals, I mean, there are multiple reasons. But like in
your, you know, you have a long paper on prediction markets in which you
consider 25 objections. And the first of the objections is something like—and
I won’t phrase it perfectly—but it’s something like sort of social cohesion.
Like the— Our doing things the way that we do them is important for social
cohesion and for people to feel like they belong to a community. You might use
the word “trust” or something like that.
So somehow these methods seem
less trusting. And like with the case of “Let’s let small harms go,” right,
one intuition behind that—and also, no one would sue me, a citizen—is like, Look, we citizens sort of trust one another, and we belong to a
community and we like one another. And I’m not seeing every other citizen as a
potential evildoer, nor do they see me as an evildoer, right? These evildoers
are these other entities, like corporations and stuff, but we citizens trust
one another.
And you might think about, maybe, if you want to sort of get
inside their resistance, think about trying to apply some of these methods
within your family. You know, like to your spouse or your children, right? Say
I’m going to, you know, monitor how you’re doing the dishes, or, you know, we’re gonna select movies using this system, we’re not just gonna ask you what you’d like to see.
Robin:
Well I can give you a more dramatic example within families, which is you’re
considering marrying somebody and you ask a market like this, If I marry this
person, how long will the marriage last?
Agnes:
Right.
Robin:
So you could ask your friends and associates to weigh in on predicting the
length of your marriage conditional on if you marry this person. The fact that
you offered that question to your associates could be taken as a bad sign by
everyone about your confidence in your relationship. It could be taken as a
bad sign by your intended partner. It just might send the wrong sort of
signals that you were even willing to consider doing this question. And so
that might be a reason you might not want to do it.
And I think that is
related to your sense that, like, it would suggest mistrust or some sort of
distant relationship. Although on the other hand, like people do make a lot of
bad choices about who to marry. And often people in their social circles have
advice about that and kind of think it’s not going to work, and you don’t get
that information. So there is a real loss there.
But yes. The example of,
say, medicine is related to your trust thing. And, say, investment pools.
Today we mostly just like to trust our doctor, and we want to have a doctor
that we trust, and we tell the doctor that we trust them, and we don’t actually want to think about difficult medical questions—we want to just have
our doctor and trust them.
And we could counterfactually have some
systems where doctors had stronger incentives with respect to how good our health outcomes were, and I’ve written some things about that: “Buy Health, Not
Health Care”—that’s a slogan there! But I think one of the obstacles there is
that people feel like if you put your doctor under one of these incentive
schemes, you are now in some sense directly showing you trust them less.
You’re directly distrusting them and sort of taking away this trusting
relationship which might give you comfort, and you might somehow think they
will treat you better if you are in this closer trusting relationship.
And
I think it’s related to say, people invest— There’s a standard result that
index funds give higher rates of return than managed funds. And you might ask,
Well, why do people invest so much in managed funds? And partly you could say
it’s overconfidence, they just believe they’re going to be better. But you can
also say partly it’s wanting to have this relationship with this managed-fund
person. Then if it goes well you can say, Look, I have this relationship with
this good fund manager. And, you know, people like that relationship.
Agnes:
Right. So I want to say that, you know, with “How long will this marriage
last,” like you were inclined to use the language of, Oh, it’s a bad signal
that suggests distrust. I would say it’s slightly stronger than that: It is
distrust. It doesn’t just suggest it. But that case, the case where you’re making a prediction about the marriage, that’s even an easier case, so to speak, for your side than ones within the marriage where you’re insisting on it, like, your spouse wants to change careers—
Robin:
Right.
Agnes:
—and they’re like, you know, this is the— I’ve decided this is the one I
really like. Well okay, there’s how you feel about it, but I want to have this
prediction market predict whether you’re going to make more money or not in
this new career. Or you know, you want this— you want to change your wardrobe,
whatever. I think we should get, you know, get some facts here. And that I
think that wouldn’t just suggest distrust, right? It would be distrust. And so
like, you know, in our—
Robin:
Well, I mean I would say that, all right, you actually have distrust. That is,
the distrust is there—
Agnes:
Yes. Exactly.
Robin:
You are revealing it, but you’re not creating it through this thing. You are acknowledging and showing it.
Agnes:
Exactly. Great. Okay. So I think the real question here is about how—given
that these acts, like the prediction markets and stuff, constitute distrust,
right?—how much of that can we tolerate in a society? And one way to think
about this question is—I think I’ve mentioned this to you before—for me the
number one question for economists, especially in relation to the sort of
defense of markets, is, Why can’t markets defend themselves?
That is,
economists like to talk about how markets are so great, and you know, free
association between people brings benefits for everyone. Right? And that— I
mean the argument is powerful, but I’m like, why isn’t— why don’t they just
work then? That is, why do we need economists stepping up to defend them and
making these persuasive arguments?
Why aren’t all the regulators actually
just driven by market forces? Why aren’t market forces everywhere, even in the
incentives of regulators, right? There must be some— there’s this kind of
interesting fragility or vulnerability of markets where they seem to need
people championing them, right, rather than market incentives championing—
Robin:
Right. So the claim would be that there are contexts in which markets don’t win the rhetorical competition even if they would produce better
outcomes, and we’re trying to understand the source of the rhetorical
resources that the opponents have. So let’s take a concrete example of the CEO
firing, right? If you are on a board of directors and you suggest we adopt
this market for your firm, the other board members and the CEO
will see this as a sign of distrust. You are directly expressing distrust in
the existing process and in the CEO, right? And they could take offense at
that.
And your community of boards of directors and CEOs and your
immediate circle of associates will all see this as a threat to sort of the
assumption that you trust each other and you guys have good relationships, and
that you can correctly use informal mechanisms to figure out when you have a
bad CEO and fire him, right? This is all challenging that and questioning
it.
Now from the larger world point of view, most of the rest of us
aren’t terribly trusting of CEOs nor boards of directors. So in terms of
public opinion I would think it’s not a big problem, this— You’re not losing a
lot of trust that we have, because we don’t have that much trust in them. I
think most ordinary people would be fine with these sorts of markets, and the
loss of trust there would not hurt them because they didn’t trust anyway. It
would be the board of directors itself and the CEO that would have this
dynamic of the distrust loss.
Agnes:
Absolutely. That’s right in that case, right?
Robin:
And so that’s why there’s an opportunity for the rest of us to like push them
a bit, and say, We think you need a more trusting, more reliable institution
here, and we want you to try this even if you don’t like it so much.
Agnes:
Right. And so I can see why you’re in a better rhetorical position to
recommend this policy in relation to CEOs because most of the people that
you’re talking to aren’t CEOs or boards of directors, like people like me. And
we’re like, Yeah, let’s kind of screw over those boards of directors; we’re fine with that!
Robin:
Or holding their feet to the fire, like they, you know—
Agnes:
Right.
Robin:
Make sure they prove themselves.
Agnes:
Right. We already don’t trust them, and so we’re okay with introducing
distrust into this process. But I think that for that very reason this example
isn’t actually going to bring out very well what the sort of substantive
argument against you is, because we’re not considering the CEO’s perspective.
So let me just sort of try to do that— and the board of directors, right?
So, in a way it’s going to be better to consider that if we sort of imagine the company as, well, an institution, right? It’s almost like a little society—
Robin:
Yes indeed.
Agnes:
—and that little society has a ruler, right?
Robin:
Yes.
Agnes:
The CEO is the ruler of that society. And so then there’s the question, well,
what does it take to be a ruler? That is, how do rulers function, right? And
there we can think about, you know, we’ve all been inside situations where we
were ruled by people. We’re ruled by, you know, our president, or the
department chair, or whatever. And like the question is like how much trust is
needed for rulership to be possible?
Now the CEO isn’t ruling us, we’re
not in the company, right? So we don’t feel the need for this trust. It’s sort
of invisible to us. The people you want putting pressure on the CEOs are precisely the people who can’t see the need for this thing. But so what I want
to do is like bring out that thing. And I want to go back to the markets
question because I think this is quite a deep question: When you say, you
know, markets don’t win these rhetorical competitions, right? I mean, they
persistently lose rhetorical competitions. When someone persistently loses a
rhetorical competition, you have to think that maybe something deeper is going
on, right?
Like, you know, you could say, People who believe in the flat
earth persistently lose rhetorical competitions. They argue for their view,
and it’s like, yeah, there’s a reason for it, right? So I think there’s a
reason why markets persistently lose certain rhetorical competitions, and
that’s because the market is actually not a self-standing organism, so to
speak. Right? So if you wanted an analog: it’s less like a living thing, and more like a cell or something in a living thing, right?
And then the question is, What are the background
conditions that make the market possible, right? And the answer is like some
amount of trust, and we’re not sure how much. Right? And you could think of
the advance of markets as an exploration of how much trust we can live
without, right? It turns out we can live without more and more of it. And so
that resistance gets eroded, and eroded for the benefit of people. But the
fact of the need for that backdrop and the non-self-subsistence of the market
is still just a reality, and so the pushback has grounds.
Robin:
I’m not sure the amount of trust is the right way to think about this. So
let’s just look at an organization: In the absence of this innovation we could say that, you know, the best way to trust a CEO is to give
them the job for life, and just give them a salary and no incentives. Because
that’s showing the complete trust, right?
Agnes:
Right.
Robin:
But clearly we don’t go that far. We don’t give them a job for life, we do
have the threat of firing them. And that’s showing distrust.
Agnes:
Right.
Robin:
We give them incentives with stock options and things, where we say we don’t
just pay you a salary, we’re gonna give you stock options—that’s giving you a
sign that’s showing that we don’t trust you—
Agnes:
Exactly.
Robin:
—without the incentives. So we are already substantially in the direction of
having some signs of distrust—
Agnes:
Right.
Robin:
—which is typical for almost all business relationships. There are almost no
business relationships which have no consequences for bad behavior, or jobs for life, or anything like that. So ordinary business relationships have a
substantial element of distrust. So the question is, Is this too much more? Is
that problematic, right? And again, you know, you might think employees can
trust the CEO more when they think the CEO has good incentives. It’s, you
know, it might be sort of trust on faith versus trust for a good reason. And
you might be saying society needs trust on faith, and I might be saying, No!
Trust for good reason is better than trust on faith.
Agnes:
Right, so you might think, Look, the very existence of businesses where
somebody is in charge not for life is like a somewhat new phenomenon, right?
And the idea “the ruler rules for life” is not like something that’s never
been tried among human beings, right?
Robin:
Right.
Agnes:
And so, like, we’ve already moved— like you know, the very idea of
a business moves some distance towards less trust, more focus on outcomes,
right? But— and I’m not saying we can’t move more in that direction.
Robin:
Right.
Agnes:
What I’m saying is we can’t arbitrarily assume that we’re always willing to
move like an indefinite amount in that direction.
Robin:
No. So I was proposing this experiment where we put the Fortune 500 into the
markets, and that we see the companies that follow the advice versus the ones
that don’t. But we could extend this experiment to have three classes: We
could have one-third of the companies reserved to have no such markets. And
then we could ask, Are the prior levels of trust, of the sort they have without the markets, producing better outcomes for those firms?
So we can
test this question of, Is this introducing too much of some sort of nebulous
distrust that’s causing social harm? That seems to me the right approach here,
is to just acknowledge the possibility. But don’t let that stop you from
trying things out and testing. Let’s just do that test and see.
Agnes:
So I’m not here trying to make the argument that we shouldn’t do the test,
though I’m skeptical, because the people to whom you want to make the argument, and to whom you can most readily make it, in this case with the CEOs, are not the people who would kind of have to agree to conduct the experiment.
That is, the place of your rhetorical advantage seems to me slightly off. I
think you’d be better off trying to argue to the owners— to private owners…
Robin:
Actually, one of the elements of this proposal is that it’s a guerrilla
proposal that doesn’t need their permission. That I would propose that
somebody, say, give me or somebody a million dollars, and we just set up these
markets. And we don’t ask permission of these firms.
Agnes:
But it’s illegal, isn’t it?
Robin:
We are asking for legal permission, or we go somewhere where it is legal.
Agnes:
Okay. [Laughs]
Robin:
I’m proposing that we just do these markets, and we get this data, and we
don’t ask for permission. But in fact—
Agnes:
But you have to have permission from someone—
Robin:
Someone—
Agnes:
That is, legal permission. That’s a big deal. And you—
Robin:
Well, on the blockchain we might not need permission from any government.
Maybe we just set it up on the blockchain. But the point is, I just want to do
the experiment and get the data and try to overcome the presumption and
rhetorical arguments that this is a bad thing by just doing it.
Agnes:
Right. So what I wanted to just say was that, with respect to this trust factor that I’m bringing in, you haven’t appeased me by saying, Well, we can test— we can measure that too, any more than if you wanted to set up these prediction markets within your marriage as to
whether your wife should change careers. And you’re like, Well, we can also do
one about our marital trust!
Robin:
Right.
Agnes:
You know, we can measure that too and see whether it— because I’m actually
going to trust you more if it turns out that these markets say that you should
change careers in this way. I’ll really believe that it’s the right answer,
you know, your answer. Right? And I’ll also measure the trust. And like, she’s
gonna be like, No, that’s not what I mean by trust. What I mean is you trust
me, and we don’t do any of this stuff, right? And so the trust-resistance
to you is a resistance to doing this.
Robin:
So in general, economists often run into the situation where there’s something
happening in the economy, or that could happen, and that there’s some
potential or actual regulation of it. And we say, This is too much regulation, or it’s harmful regulation, and you should back off or
reduce that. And often what people say is, Well, the government is doing this
because there’s a problem, and I trust the government. And therefore—I don’t
trust businesses, but I trust the government—so I want the government here to
be doing something.
And then we say, You’re trusting the government too
much, and let’s show you data about how wrong that’s going. And then people
often— even if we show them concrete data about how their trust is misplaced,
they still kind of want to go, Well yeah, that was one exception, but in
general we trust, and then, you know, you just can’t get enough data on
everything to like win the case, right?
Agnes:
Yes.
Robin:
And so you know, fundamentally there’s this basic question of, Do we trust too
much? Or who do we trust reflexively that we shouldn’t be trusting? That is,
yes, maybe you should trust your wife. But should you trust your government
always when the government is, you know, say, pushing tech companies to censor
speech? Right? Should you trust them? “Well, they must be— have a good reason
for it.”
I mean, we have a long tradition of reasons for distrust of
opaque, arbitrary central power. And you might say those are good intuitions, and
that we have long since tried to come up with a compromise between just
trusting an arbitrary leader and having some accountability, and this is about
sort of better mechanisms of accountability. But the fundamental distrust of
large organizations and government is well established.
Agnes:
So I was trying to give you a like meta perspective on that, which is to say
that the fundamental fact about markets is that they are not self-standing
institutions, they are predicated on a layer of trust. And one way to think
about this is to think about just a sort of genealogical story—I’m just making
the story up—
Robin:
Right?
Agnes:
—you know, earlier humans—okay, even before there really was anything like
markets, they were kind of born into communities, and they kind of just
trusted each other like for no reason. Like they probably trusted each other
too much. They had to. Right? And like, you know, people, they trusted each
other, they also violated each other’s trust. Killed each other. Right? And
maybe what, you know, what we’re doing with the sort of increasing scope of
markets in society would be seeing how much of that unthinking trust we can do
without. And in a way it’s progress to do that.
Robin:
So your story sounds plausible, but it’s just historically wrong. Okay, so we have relatively solid data about this: We see cultures and societies
that were relatively clan-based and relatively isolated, and who didn’t trade
that much with distant foreigners. And in those societies people did not treat
foreigners very well.
Foreigners could not be trusted, and were not
trusted, because in fact they were not trustworthy. Unless you had a close
family relationship to somebody you could not trust them, because they would,
given the opportunity, treat you badly. And then as longer-scale trade
evolved, we developed trading norms and norms about how to treat distant
partners that made us treat them better, and then they became more trustworthy
as a way to facilitate long distance trade. It was trade and markets that
produced trust between strangers, not the other way around.
Agnes:
So I mean, one thing is, if a society is clan-based, then you’re born into a
community, right? And you just trust everyone in the community, right? So for
that—
Robin:
You trust people close to you in your clan, but people far away you do not
trust.
Agnes:
So I wasn’t trying to suppose that everybody trusted everybody else. What I’m
saying is that there is a kind of trust that is in some sense not purchased.
Like you just get it for free by being more intimate. And then there’s added—
Robin:
But what we did is we added more trust, we didn’t take it away. That is, when
we added more trust of strangers we didn’t lose the trust of family.
Agnes:
So if I look at, you know, the bits of history that I know, like, better,
which would be like, say, you know, let’s say the norms represented in the
Homeric epic poems, right? One norm that you see there is sort of
guest-friendships. So it— you know, strangers— like, families develop
connections, and like if I am a stranger in a foreign place, I might have like
a host that, as you were talking about, like that vouches for me, and you
know, we give each other presents. And those bonds then make it possible for
people to have like financial interactions, right? More substantive financial
interactions.
Robin:
To a limited degree, but more than without them.
Agnes:
Right. But I guess my like— I’m skeptical at the thought that people somehow
managed these market interactions in a situation of zero trust. And then the
trust developed out of that, because I think they’d just kill each other
rather than trading. So there had to be some trust, right?
Robin:
Sure.
Agnes:
To allow even the interaction. Like to allow just not immediately killing a
stranger.
Robin:
So presumably, initially, they had these trusted connections—
Agnes:
Yes.
Robin:
—and that facilitated trade, but then people were eager for even more trade.
And so the norms spread around these connections. You know, maybe
you had the person who came and visited you and you trusted them, but maybe
their brother you didn’t trust because they had never come and visited you.
And now maybe you trust their brother more.
And so the norms would spread
in terms of a wider range of people that you needed to treat fairly, and
spread around the trading partners, especially the ones you needed to treat
fairly. But the overall long-term trend was that norms of treating strangers fairly, you know, co-evolved with more trade. And so the market, you know, basically promoted trust. The market didn’t take away trust and
replace trust; the market promoted trust.
Agnes:
Right. I mean I think that like the more people that you’re in a market
relationship with, right, the more people you need to be in some kind of trust
relationship with. So that makes sense to me.
But once you have a set of
people that already see themselves as being in a kind of trust relationship
with one another—so this is like your criminal justice proposal, right? Sorry,
not criminal, the suing proposal, right, where people are like, I see my
fellow citizens as you know good guys, basically, right, and they’re not the
kind of people who would sue me—I don’t need to now establish a relationship
to them, I take it for granted that I already have one. Right? And you’re
asking me to now, you know, pull back on that a bit—don’t trust them as
much, don’t have that attitude. Have more of the attitude that they might sue
me, I might sue them.
Robin:
I actually don’t think people now believe that, you know, they can just trust
people not to scratch their car in a parking lot. I believe they are correctly
calibrated about how often that happens. The question to ask is, in this alternative, would people scratch your car less? And could you
therefore trust them more not to scratch your car?
Agnes:
Right. And I think you’re right that nobody thinks that other people are not
susceptible to scratching the car, and you’re right that you get fewer car scratches with your method. But there’s a difference— like when you sue someone
or something like that, and like you know, especially if someone sues you,
right? You think they see you as a bad person? And so what we—
Robin:
I think that’s too strong, right?
Agnes:
Well, you were saying, I don’t want to— I’m not the bad guy, right? And now
you’re reframing the thought about—
Robin:
Among relationships there is this good norm where in your circle of friends,
if somebody harms you, I think the usual norm is correctly that you should go
to them privately and tell them you think they’ve wronged you, and see if you
can’t work this out between the two of you without involving anybody else.
That’s what a trusting community is, right? A trusting community isn’t where
nobody ever hurts you; a trusting community is where if somebody does hurt
you, you expect that using the community will be the first line of defense,
the first thing you should do, and that’ll usually work.
Agnes:
I think that there’s a big difference here between harms and wrongs. So I
think that— suppose that, you know, we are susceptible to harming and being
harmed in small ways by our friends all the time, right? And I think we choose
to overlook a lot of it. That is, like—
Robin:
Right. But sometimes it rises to the level of a complaint that we will voice
to them. And that’s a perfectly legitimate part of close, trusting
relationships.
Agnes:
Yeah, I think that that’s right. But I think that like if, you know, if you
say— if you complain to someone, you are in effect presuming that
they have enough goodwill towards you, that they could sort of adjudicate this
with you—they’re not the enemy, right? And that’s sort of what it is to
complain to someone, is to not see them as the enemy.
Robin:
Right. But you do that implicitly with the threat that you can recruit, you can share it out with people in your social network. If they don’t make good to you
directly you may sort of complain to other people nearby, which may pressure
them to make good. And then you would do that before you went to some larger
circle, right?
Agnes:
Yes…
Robin:
So there are these nested circles of, you know, how formal a complaint to make, and how large a group to bring into the dispute. But the usual norms are: try the smallest, most direct circle first. And then if that doesn’t work, move outward. And the usual norm might be that invoking the legal system is a thing you do only after failing in a bunch of these closer
connections.
Agnes:
Right. And part of the reason why you start on the inside and move outward is because every one of those moves outward is a step down in trust. So if I have to sort of, you know, suggest to you, I’m going to get all my friends together and they’re going to see that I was right and you were wrong,
and I’m not just going to go to you one on one, right? That’s like, I don’t
trust you. And so you know, setting up a system that allows the legal system
to play a bigger role in our life, in a sense, is making one of those steps
more available and accessible.
Robin:
Well I mean, say on the one hand, you and I have a relationship but we have no
shared friends, right? We met in a coffee shop once and we don’t know anybody
in common. We just have a relationship, you and I. Versus another scenario
where we’re both embedded in a network of associates, right? In the first case
I don’t have the option to go to this larger community if you don’t make good
with me. And in the second case, I do.
In which situation would you say we trust each other more? I would say in the second case we trust each other more, exactly because we each know we have this threat, and that it will keep us in line. And we feel reassured, knowing we can trust this other person more because we are embedded in this larger network that will help
if this one other person doesn’t make good.
Agnes:
So like, that would suggest, for instance: let’s take a marital relationship. And let’s compare two marriages, one in which the spouses know that each of them is somehow in principle opposed to divorce or morally opposed to it or whatever, right? And then the other one where that’s an available option.
Robin:
That they could get divorced.
Agnes:
They could get divorced.
Robin:
Divorce is easier.
Agnes:
And so like in effect the divorce— The threat of divorce is present in the
second case and not in the first. And the person can invoke it, and it’s
available to be invoked if the person misbehaves. So your view is that the
couple trusts each other more in that second case where the threat of divorce
is present.
Robin:
No, it was in the former case. Well I mean, so it depends on what we’re using
to think about trust. So whether we’re trusting them to stay together or trusting them to
behave well in the relationship. Those are two different kinds of trust you
could have in mind and they’re somewhat opposed, right?
Agnes:
But do you think, in terms of trusting— Because the point is you’re saying the
situation in which we each have an ability to threaten one another is a
situation where we trust one another more, right? And I’m skeptical about that
claim. And so I’m skeptical that the married couple where they can threaten
each other with divorce, trust each other more than the couple where they
don’t threaten each other.
Robin:
So I hope this last set of examples at least makes clear: Trust is
multidimensional, okay? It’s not just this more or less trust, okay?
Agnes:
Okay.
Robin:
There are a lot of different kinds of trust, and they are more or less in many
different contexts. So that makes me reluctant to just say, Don’t add another
institution if you— because it’s going to reduce trust. It’s going to increase
some kinds of trust and reduce other kinds of trust. But, you know, let’s
still try.
Agnes:
And I’m not arguing— so like maybe you hear me as saying, This is why we
shouldn’t do this. And that’s not what I’m saying. Because I don’t think we’re
deciding right now whether or not to do this, so it wouldn’t make sense for me
to say that. I don’t have the power to bring it about. I’m not the people in
the blockchain who might do whatever, right? So that conversation doesn’t make
sense for me to have.
What I’m trying to do is help you access the
resistance to this idea, which I don’t think you’re fully inside of. It’s just like understanding the counterarguments to your view: You’re going to be better at presenting it if you understand the deep source of the resistance; if in a certain sense you can even develop a certain respect for the source of the resistance, then I think you become much better at overcoming it.
And so in particular it may be that there’s like
some kinds of trust, right, that are at stake here, that are fueling— that are
pushing back against this. And maybe that problem is kind of the standing problem that economists are running up against, where you could say people are inclined to trust too much, but it’s a kind of unthinking trust that to some measure needs to exist.
Robin:
So but I’m not so sure like emotionally understanding the trust issue is that
important. Because I think we all just immediately emotionally understand the
trust issue. I think the fundamental question is how to calibrate when the
trust is justified, and whether we’re over-trusting and why. So that’s where I
would bring in sort of evidence, widespread evidence that we are just trusting
too much, as sort of motivating people to say, Yes, maybe there’s a little
less trust here of some kinds, but we’re doing that too much anyway, and look
at the big gains.
So for example, in medicine, I would say again, our
main emotional relationship in medicine is we just— we’re scared to death to
think about death. And if that issue comes up we don’t want to think about it;
we don’t want to be a smart customer comparing options, we just want to trust
our doctor. And then that puts it out of our mind, and it means we don’t have
to think about it, and it means we can reassure ourselves that this person
that we are trusting has a relationship to us of trust such that they might
treat us better, because we are trusting them.
And that’s a very
compelling description of people’s relationship to medicine; it makes sense of
a lot of details. And I think, you know, if you reflect on yourself you’ll see
it’s similar. But I think we can show you evidence that in fact your trust is
substantially misplaced. That is, you know, for example, you know, things like
the RAND health insurance experiment, which showed that people who, on average, get more medicine because their insurance covers more of the cost just are not
healthier.
So there’s this enormous failing, averaging over all the medicine, whereby medicine on the margin, i.e., getting more medicine, is not actually
producing more health. Yet that’s the overwhelming perception, emotionally, of
customers who are in a position to choose more or less medicine. So that
suggests that we are just making this huge mistake.
And another example,
as I mentioned before: Index funds, right? People want to have a trusting
relationship with a fund manager. They want to pick out a manager and have
this personal connection to them and have this feeling of a trusting
relationship. And one that they can, in a sense, brag about socially even as
part of their social network. And yet they would be better off financially if
they just invested in the index fund.
And, you know, an anecdote: I gave
a talk at an index fund—I mean, a managed fund—once, and they basically said
the story, Look, we could have a lot more money under management, but the
problem is all these people who want to give us money, they would insist on
that we take a lot of phone calls. People don’t want to just give you money
and have you manage it, they want to then have a relationship where they get
to call you whenever they want and give you advice, and then you have to
answer the phone and talk to them. Because this is part of what they want:
They want this relationship and they want it to be trusting, i.e., you pretend
like you listen to their advice.
And so this is a common thing across a
wide range of areas. And I might say, like, lawyers: Lawyers we are way too
trusting of because we mainly pick lawyers on the basis of reputation in the
sense of what school they went to or what firm they work for. We don’t pick
them on the basis of a track record of success, and we don’t even collect
track records so that we could pick them on the basis of that, and we don’t
want to give them strong financial incentives.
And I would say we should
be more giving of financial incentives. We should have more track records. And that’s what we should be doing for doctors too. For doctors, we should give them more direct financial incentives to have good outcomes, and we should
have more clear track records of their success and failure with patients. Both
of which might seem to interfere with a trusting relationship, but we are
failing ourselves to large degrees by excess trust.
Agnes:
So I think that maybe one reason why econ— like, markets and economists in
general run into this persistent rhetorical wall, or you know, problem of
persuasion, is that a lot of relationships are more like families than you’re
inclined to think they are. So a lot of the things that you said about what
these people in this index fund…
Robin:
Managed fund.
Agnes:
Or whatever, managed fund—you know, the— Imagine if we said it about our
spouse: Like they want to be able to call you whenever they want, they care
about the relationship, they want you to listen to them, right? That’s just
like what you’re, you know, your best friend, your spouse, your children, they
have all these expectations of you. They don’t want you to measure them using
a track record of success, right?
So what we can do is we can take the
context in which all of that language is most at home, and then we can infer
that people have familial relationships to their doctors, their fund people,
etc. Right? And it may be an illusion, maybe economists are particularly susceptible to the illusion, that those relationships are much more different from family relationships than they really are; but in fact people appear to want to
be in families, and that may be like a fundamental fact about them.
Robin:
They make this wrong choice even when they’re representing other people. So
think of pension funds, which are supposed to be run for the benefit of employees who will retire and need money when they retire. I gave a talk at a conference of pension fund managers and learned that, basically, almost every company, you know, pays an employee to be the manager of its
pension funds. So they take a cut.
And then what they usually do is pick
a pension fund managing firm, which takes a cut, to then pick which investment funds to invest in, which take a cut, and which then invest in particular firms.
So there’s this huge cut that’s taken out of all the returns that go to
employees because they make this long chain of relationships, each of which
takes a cut.
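To make the chain-of-cuts arithmetic concrete, a toy calculation follows; the fee levels are invented for illustration and are not figures from the conversation:

```python
# Toy illustration of how a chain of intermediaries erodes employee returns.
# The fee percentages below are hypothetical, not figures from the discussion.

def net_return(gross_return: float, annual_fees: list[float]) -> float:
    """Return left for employees after each link in the chain takes its cut,
    modeled as a simple subtraction of annual fees from the gross return."""
    return gross_return - sum(annual_fees)

chain_fees = [0.002, 0.005, 0.010]   # in-house manager, consulting firm, funds
index_fees = [0.001]                 # a single low-cost index fund

print(net_return(0.07, chain_fees))  # 0.053 -> 5.3% reaches employees
print(net_return(0.07, index_fees))  # 0.069 -> 6.9% reaches employees
```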
So those particular people may enjoy their relationship of
feeling this bonding, but the employees are getting screwed. And if the
employees knew this, they shouldn’t be so respectful of the relationship
between these various people in this chain. They should say, You’re cheating
us, you’re taking our money, stop it! And they should just have the company
invest in an index fund right from the start, and hardly even have an employee
who manages them.
Agnes:
Right, but then there’s a question of like, Why did none of those shoulds
actually happen? And the answer is, the world doesn’t, in fact, function in
accordance with a certain set of principles that you might have expected.
Right? So if we go back to incentives, right, your thought is the board of directors has the wrong incentives. Right?
Robin:
Right.
Agnes:
And, you know, if you think about it, the— I mean this has been kind of a
theme in a bunch of podcasts we’ve done, that economists, like, they’re not
good at like changing people’s preferences or something like that. Right? You
can’t tell the board of directors, You should care more about this rather than
this. Right? But maybe we can rearrange the world…
Robin:
Yes.
Agnes:
…so that the preferences they do have guide them in a better direction. That’s
the thought, right?
Robin:
Absolutely.
Agnes:
And, but there’s a question like, you know, is there some— I mean, there are
incentives in argumentation and listening to argument as well, right? There
are incentives present in this very conversation. And like what if all the
incentives are wrong, right? That is, when you’re talking, you’re giving these
talks, right? The problem is that the people hearing your talks, they have the
wrong incentives.
Robin:
Right.
Agnes:
And like maybe the econ— there’s a kind of impotence at the end of the day,
right? The economists would have to just change the sort of fundamental
incentives and there’s no way for them to do that.
Robin:
I mean, this is a basic issue. And let’s just— I would reframe it as the
economist sees it, and you tell me if you think this is a bad framing: We
economists often want to rely on the strength of competition to produce good
outcomes, and in many situations we can identify how competition would not
necessarily produce good outcomes, and we call those market failures.
And
so we mostly want to promote competition in general, except when we identify
particular market failures in which we would see competition going wrong, such
as, say, monopolies or externalities or public goods, or some particular
things.
So we have an intellectual structure there, right? We can apply
this whole competition analysis to conversation, to the debate of ideas. And I
have specialized in that topic to some degree in my career, and I think when
we just think for a short time about conversation and the competition there,
we can identify a lot of market failures—a lot of ways in which those
institutions would go badly. They would not produce consensus on the truth,
say, they would produce other things.
And of course, if this is
the forum in which we need to try and persuade people of other choices, then,
you know, fixing that would be an especially high-leverage thing to do. And so
I’ve focused a lot on finding institutions which would improve those worlds. But
of course, you still have the meta problem: In order to get anybody to
introduce a better conversation institution…
Agnes:
Yes!
Robin:
…you will have to persuade somebody via conversation, perhaps under the
existing institutions, to consider an alternative conversation institution.
Right?
Agnes:
Right.
Robin:
And so—
Agnes:
Where they have the wrong incentives in that conversation.
Robin:
Exactly! Right? But the general hope is that one doesn’t have to
persuade everybody to do a better thing. All you have to do is persuade some
small subset of people to try a better thing and then we can collect data on
how well the better thing works in trials, and then we are in a much better
position to try to convince other people, even in their bad institution, to
accept our claims, because we will have solid data.
Agnes:
Okay, but you know, one theme of your work, like say, The Elephant in the
Brain, is like, Don’t think you’re so different from other people, right? So
don’t think your special subset of people is really going to be any different
from people in general, who all have the same incentives. And why doesn’t the
conclusion of all of this argumentation lead you to the thought: You have to,
at the end of the day, on some questions, in some ways, be able to change
people’s preferences. Some— for some people.
Robin:
So we have a large literature on innovation. This is the topic of innovation,
right? And we can just ask, how does innovation typically happen? So you have
existing practices, and somebody somewhere conceives of an alternative
practice, right? And then how does it typically work out that eventually
everybody, most everybody is adopting this alternative practice, right? So it
should be familiar to you. But to repeat, the usual scenario is somebody who
conceives of an alternative practice gets somebody near them to try out some
small version of it…
Agnes:
Let me pause just right there at that step. Is the way— If we look at the
actual historical cases of it, do you think we’d find that the way they get
that person to do it is like by presenting a really, really good argument
where they consider a lot of like, contrary possibilities, and what it— etc.?
Is that how they do it?
Robin:
So there are a number of ways in which people inspire associates to try
innovations, right? And one of them is greed. That is, in business contexts
you might inspire an investor or business partners to try out a business
innovation or product innovation because you hope to make a lot of money.
Agnes:
Mm-hm.
Robin:
Right? And that’s definitely part of the process by which innovation happens:
somebody somewhere thinks they could make a lot of money with it. That
requires some sort of property right for the innovator, which is where, say,
patents and copyrights and other such things can arguably help, in order to
encourage someone to be greedy there. Because if you just create a practice
that other people can easily copy, then you’ve put a lot of work into creating
an innovation that you don’t actually own and can’t really take that much
advantage of, right?
But there
are other ways, right? So in our society, I think one of the things that
modern society has done correctly somehow is just to give higher social status
and praise and honor and glory to innovators. And so many people are eager to
be part of an innovation process in the hope of glory, that they could be
celebrated later as part of the origin of an innovation.
And that’s
true for many kinds of innovation that you can’t really own, including, say,
social-activist innovation. Many people are eager to be part of some
social-activist community that’s pushing for some change, so that they can be
celebrated, you know, and get the honor and glory of having been part
of that process. Right? And so that’s another way in which we inspire people
to innovate.
You can also try to inspire people to innovate by the hope
that, say, their community will be celebrated for the innovation, or will gain
from it. So for example you might be a Chinese citizen and be especially
inspired to be part of some nuclear innovation effort, not just because you
might make money on it, and not just because you might be celebrated, but
because Chinese civilization will advance on the basis of it and the world
will celebrate China for producing this innovation. You might
have that attachment to a larger community and see that that community could
be credited for this. And this is actually a common source of inspiration for
innovation.
Agnes:
Right. All that makes sense to me. So suppose you were thinking about things
not from the more conventional point of view of, Let me give you a change that
I can defend against all comers as being, you know, positive; but from an
engineering point of view, where, like, here’s something that could work, and
if it did, it would be a big benefit, right?
Robin:
Right. And here’s how you would do a small scale trial.
Agnes:
Right. But given how you’ve presented it: let’s call the first thing, the more
standard thing, the positive modification or something, right, which is
different from an innovation. Then the methodology for pushing the innovation
should be different from the methodology for pushing the positive
modification.
The methodology for the positive modification would be,
come up with a really persuasive argument; you can rely on the preferences and
incentives that people already have. But the methodology for pushing the
innovation is going to have to be some of the things you described: greed,
credit, inspiration. And so like why do you use so much argument? Like it
seems like you’re using the wrong tool.
Robin:
Well, I’m just trying to use all the tools I have as much as I can. So for
example, take using prediction markets as a way to make decisions: One approach
is to say, Who is affiliated emotionally with this sort of mechanism? And you
might say, well, libertarians, right? They like markets and they see this as a
market, so they see this as more markets in the world. And initially, at least
from a distance, this works. That is, libertarians are among the people you
can most easily initially inspire to be interested in a proposal like this.
However, once they start to look at the details, they notice that a mechanism
like this, if asked whether to have more or less libertarian policy, does not
necessarily give more libertarian policy. The speculators could very well
believe that more regulation in some area will better deliver the outcomes
being asked about, and therefore this mechanism could approve that regulation.
It’s like how the more-libertarian CEO doesn’t necessarily get approved by this
mechanism.
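
For concreteness, here is a minimal Python sketch of the decision rule such
conditional markets imply; the function name and the prices are hypothetical,
not taken from the conversation.

    # Minimal sketch (hypothetical names and prices): follow the conditional
    # prices, whichever policy they happen to favor.

    def decide(price_if_adopted: float, price_if_rejected: float,
               min_gap: float = 0.0) -> str:
        """Return 'adopt' when speculators price the outcome higher
        conditional on adoption than on rejection, by at least min_gap."""
        return "adopt" if price_if_adopted - price_if_rejected > min_gap else "reject"

    # Speculators think stricter regulation improves the asked-about outcome:
    print(decide(price_if_adopted=104.0, price_if_rejected=101.5))  # -> adopt
    # Speculators think the more-libertarian CEO would lower the stock price:
    print(decide(price_if_adopted=98.0, price_if_rejected=103.0))   # -> reject

The point of the sketch is only that the rule tracks the prices; it carries no
guarantee of any particular ideological direction.
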
Agnes:
Right.
Robin:
Right? And so then they are less interested in supporting this mechanism,
because they prefer, like, just guaranteed things. Like, if they support open
borders and they see open borders as libertarian, then there’s no question
there. They’re supporting the libertarian option, and they tend to want to just
have the guaranteed libertarian option, not an institution that, if
libertarian policies were better, would produce libertarian policies. They
want the institution that just gives them the libertarian policies.
Agnes:
Okay, I think we should stop there.
Robin:
All right!