Alisabeth Ayars (University of British Columbia), "An Explanation of the Essential Publicity of Practical Reasons"
Forthcoming, Oxford Studies in Metaethics
I.
Suppose you can do something that will be great for you, but that would seriously hurt someone else. Perhaps you could get a hefty insurance benefit by hiring a hitman to kill your uncle. Suppose you don’t care about your uncle, and that it’s guaranteed you won’t get caught. Do you have any reason not to do it–a reason deriving simply from the harm the act will cause to others?
The answer may seem obvious: of course you have a reason. Common sense tells us that other people matter, whether we happen to care about them or not. Anyone who thinks otherwise is a monster. And they are not just making a moral mistake, but a rational one: they are ignoring reasons they have.
But on reflection, it’s not clear what justifies our confidence in this answer. Suppose you really want the money. No one will ever know what you did. You’re guaranteed never to be punished. Suppose also you won’t feel guilty. If trampling on the rights and interests of others will make you richer and more powerful and have no negative consequences for you, why shouldn't you do it?
This is an ancient question. In the story of the Ring of Gyges, from Plato’s Republic, Gyges finds a ring that makes him invisible, allowing him to act without consequence. Plato asks: Does Gyges have any reason to be virtuous? (There is a movie–Hollow Man, directed by Paul Verhoeven–that mirrors the Ring of Gyges thought experiment: a scientist becomes invisible and ends up going on a killing spree.)
Plato says that Gyges should be virtuous because virtue and self-interest coincide. The best life for us is also the virtuous one. But this is unsatisfying for two reasons. First, it's hard to believe. (How could virtue be good for Gyges, when it makes him worse off in every respect he cares about?) Second, it gives the “wrong kind of reason” to care about others. Plato’s argument appeals to self-interest. But we ought to care about others’ interests intrinsically, not just because it’s good for us.
If people who trample on the interests of others are doing exactly what they should, from a rational point of view, this is disturbing. It means that there is an essential divide between people, in the following sense. For my interests to rationally constrain your conduct, they must hook up with your interests or desires in some way. If treating me well is good for you, then you have a reason to treat me well. But if there’s nothing in it for you, and you don’t happen to already care about me, my interests place no constraints on your conduct.
This amounts to a kind of normative isolation. Your reasons matter to you, but from the point of view of everyone else in the universe, they might as well not be there. In Christine Korsgaard’s terminology, our reasons are “private”, having normative force for us alone. This contrasts with the view that our reasons are “public”, having essential normative force for others as well. In my view, if reasons are private, this is an intrinsically macabre feature of normative reality–something that ought in itself to unsettle us.
One way to bring this out is to note that if reasons are purely private, then we cannot justify punishing others on the grounds that they did something they shouldn’t have done, since after all, they may have had no reason to act differently. This makes punishment seem uncomfortably close to mere domination. It also eliminates the possibility of rational persuasion when our interests are threatened. If you’re torturing me, I can beg and plead for you to stop, and resort to violence if I can. But I can’t give you a reason to stop if you have no interest or desire that would be served by stopping. It’s not clear I can even be indignant, since indignation contains the thought, “You should stop”, which is false if you’re acting rationally.
This is a deep philosophical problem. We want it to be true that our interests provide genuine constraints on what other people can do to us, that we have the right to be indignant and to punish when others violate these constraints. All of this assumes that other people have reason to respect our interests. But as we have seen, it's possible to doubt this common sense idea. How should we respond?
II.
Here's one philosophical approach: we might just insist that our interests provide other people with reasons. The world is populated with reasons. Many of these reasons are primitive, in the sense that there’s no further explanation for why they exist. We have reason to pursue our own pleasure. Why? We just do; there’s nothing more we can say. And similarly, we might say: we have reason to care about the interests of others. Why? We just do. Other people matter. That’s all we can say; there is no further explanation as to why they matter.
But this answer is unsatisfying. The problem is that the practical relevance of others’ interests is something that calls for explanation, in a way that the practical relevance of our own interests doesn't. If something is in my interests, it’s good for me, and hence worth it for me to pursue. But why should I care about your good? Since you and I are separate people, what’s good for you is not necessarily good for me. So it’s not obvious why it should concern me; and when this worry is on our minds, it is unhelpful to thump the table and say: It just should, and that’s that.
What sort of answer would satisfy us? Suppose we had an argument for the claim that other people’s interests give us reasons, an argument that could move someone who didn’t already care about others’ interests to care about them. This would be satisfying. It would assure us that caring about others’ interests is rational, and that the self-interested narcissist who doesn’t care is making a mistake. Ideally, the argument would rely on uncontroversial premises. But even an argument with relatively controversial premises would be valuable, since the conclusion is so important.
To fix ideas, let’s go through an example. The following argument ultimately fails, because the conclusion doesn’t follow from the premises. But it is a useful illustration of what it could mean to argue for the conclusion that others’ interests have normative weight. (The conclusion of this argument is actually stronger–that others’ interests matter just as much as ours.)
You have a reason to care about your own interests.
There is no relevant difference between your interests and the interests of others, from an “objective” point of view.
Therefore, you have as much reason to care about others’ interests as you do your own.
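Before looking at the premises, it may help to display the gap formally. Here is a toy rendering in Lean (entirely my own gloss, with invented labels): read as bare propositions, nothing licenses the step from the premises to the conclusion, and the one-line countermodel makes both premises true while the conclusion fails.

```lean
-- Toy labels (mine): P1 = "you have reason to care about your own
-- interests", P2 = "no relevant difference between your interests and
-- others' from the objective point of view", C = "you have as much
-- reason to care about others' interests as your own".
-- The bare schema P1, P2 |- C is invalid: take both premises True
-- and the conclusion False.
example : ¬ (∀ (P1 P2 C : Prop), P1 → P2 → C) :=
  fun h => h True True False trivial trivial
```

Whatever bridging premise one adds to close this gap is where the philosophical action lies, as we will see.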
The first premise is uncontroversial. The second premise is also true. It invokes the notion of an impersonal “God’s eye” point of view. From the point of view of the universe, no one is more important than anyone else. If there is such a thing as “impersonal goodness”—the sort of goodness that shows up when we adopt this impersonal standpoint—the satisfaction of my interests must be as impersonally good as the satisfaction of yours.
One problem is that even if that’s true, it does not follow that you have as much reason to care about others as you do yourself. For we might also have agent-relative reasons to care about our own interests. Agent-relative reasons are reasons that particular agents have which are not derived from the impersonal values we can bring about. As Jay Wallace notes, our reasons to help our friends look to be agent-relative. You do not help your friend because it is impersonally good that friends be helped; you help them because they matter to you. If such agent-relative reasons exist, then you may have much greater reason to promote your own interests than mine, even if our objective equality gives you some reason to help me.
Of course, it is not necessary to show that we have equal reason to care about others. Demonstrating that we have some reason to care would be valuable. And the presence of agent-neutral reasons to promote others’ interests would be enough for that.
But why should we believe in these agent-neutral reasons in the first place? To simply claim they exist is a version of the “insistence strategy” mentioned earlier: we are simply insisting that promoting the interests of others realizes some objective value.
Once we admit the existence of agent-relative reasons, it becomes very hard to argue that we must care about other people. For if agent-relative reasons exist, the following is at least a conceptual possibility: you have your agent-relative reasons, I have mine, and each of us should do whatever best promotes our own interests. This is a version of Rational Egoism. If Rational Egoism is coherent, the prospect of demonstrating that others’ reasons essentially give us reasons seems hopeless.
III.
In my paper, “An Explanation of the Essential Publicity of Practical Reasons”, I approach the problem from a different angle. Instead of trying to argue directly that your interests automatically provide everyone else with reasons, my argument runs through a premise concerning what it is to judge that someone has a reason to do something. I argue from (1) a view of what it is to judge that someone has a reason, to (2) the conclusion that Rational Egoism is incoherent. More specifically, I argue that when the Egoist judges that you have a reason to promote your own interests (as she does), she commits herself to judging that she has a reason to respect your interests. If she makes the first judgment and rejects the second, it is as if she contradicts herself.
If Rational Egoism is incoherent, we don’t have to rely on the “insistence strategy” mentioned earlier. Instead of simply asserting that others’ interests matter, we can say that anyone who denies this is rationally criticizable, in virtue of holding an incoherent set of attitudes.
The simplest example of an incoherent set of attitudes is an inconsistent set of beliefs. Suppose you encounter someone who says, “I went to the store yesterday, but I didn’t go to the store.” You’d be puzzled. Imagine you press him further:
“You mean you went to one store, but not to another store?”
“No,” he says. “I went to one store–Target–but I also didn’t go to Target”.
You would conclude that either this person doesn’t know what he’s saying, or if he does, that his beliefs are incoherent, and hence that he is exhibiting a particularly severe form of irrationality.
Belief is not the only attitude that is subject to coherence norms. Combinations of intentions can be incoherent, as can combinations of desires. Suppose I intend to go to the store at noon and know that to get to the store, I need to take the bus, but don’t intend to take the bus. This is incoherent. To see this, imagine we have the following exchange, after I assert that I’m going to the store but that I don’t plan to take the bus:
“So you plan to get there some other way?”
“No; the bus is the only way to get to the store, and I’m going to the store. However, I don’t plan to take the bus”.
Again, you’d conclude that either I don’t understand the meaning of certain words, or that I’m irrational.
When norms of coherence are in play, we can speak of attitudes committing us to other attitudes. My intention to go to the store commits me to intending to take the bus, in the following sense: It is irrational—indeed, incoherent—to plan to go to the store while failing to plan to take the bus if I believe the bus is the only means to get there.
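For the formally minded, this notion of commitment can be put in miniature. The following Lean sketch is a toy model of my own (the labels are invented for the purpose): it renders the coherence norm as a ban on the offending triple of attitudes, and shows that, given the intention and the belief, coherence rules out failing to intend the means.

```lean
-- Toy model (my labels): three attitude-propositions and a coherence
-- norm banning the combination <intend the end, believe the bus is the
-- only means, fail to intend the bus>.
example (IntendStore BelieveBusOnlyMeans IntendBus : Prop)
    (h1 : IntendStore)              -- I intend to go to the store
    (h2 : BelieveBusOnlyMeans)      -- I believe the bus is the only way
    (coh : ¬ (IntendStore ∧ BelieveBusOnlyMeans ∧ ¬ IntendBus)) :
    ¬¬ IntendBus :=                 -- coherence rules out not intending
  fun hNot => coh ⟨h1, h2, hNot⟩
```

Notice that what coherence delivers is only the ruling-out of the missing intention (hence the double negation), not the intention itself; I could also restore coherence by abandoning the end or revising the belief.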
My strategy for showing that Rational Egoism is incoherent goes by way of demonstrating that judgments about others’ reasons rationally commit us to certain judgments about our own reasons. Specifically, I contend that
the judgment that someone has a reason to promote an end
rationally commits the judger to
a judgment that they themselves have the same reason to not interfere with that person’s promotion of the end.
For example, my judgment that you have reason to avoid pain rationally commits me to judging that I have a reason to not prevent you from avoiding pain, simply in virtue of the coherence norms applicable to these judgments. If the argument succeeds, it means we are all committed to reasons being “public” in Korsgaard’s sense, since countenancing a reason for you commits me to countenancing a reason for me.
In The Possibility of Altruism, Thomas Nagel attempts a similar project: to show that we are all rationally committed to some form of altruism. However, Nagel focuses on reasons of promotion. According to Nagel, if you have a reason to promote an end, then anyone has the same reason to promote that end.
My focus is slightly different. I argue that when I judge that someone has a reason to promote an end, I’m committed to judging that anyone has the same reason to not interfere with their promotion of that end. Some philosophers, like Wallace, find this version of the publicity thesis more plausible. I may have no particular reason to promote a stranger’s ordinary ends–to help you with your stamp collection, say–but I do have reason not to interfere with your stamp collecting if you have reason to promote it.
This weaker thesis still falsifies Rational Egoism. Rational Egoism says that others’ reasons bear on what we should do only insofar as it’s in our self-interest to take them into account. If others’ reasons always give us reasons of non-interference, even when it is in our interest to interfere, then this claim is false.
Of course, the critical question is how judgments about others’ reasons could rationally commit us to countenancing reasons of non-interference. Rational Egoism–the view that everyone should do what’s best for themselves–may sound false; but it doesn’t sound incoherent in the sense in which the incoherent attitudes described above sound incoherent.
IV.
The central idea of my paper is that the incoherence of Egoism follows if we adopt an independently plausible non-cognitivist view of normative judgment.
Normative judgments are judgments that employ value-laden concepts like “right”, “good”, and “reason”: judgments that seem to go beyond describing how things are, implying something about how they ought to be.
But normative judgment is famously mysterious. The declarative language we use to express these judgments suggests that they are beliefs about a special sort of fact. But could there really be normative facts that make these (alleged) beliefs true? Such facts–causally inert, inherently action-guiding entities–would be “queer”, as J. L. Mackie put it.
Many philosophers, for reasons like these, favor non-cognitivism. Non-cognitivism is the view that normative judgments are not ordinary beliefs but are rather constituted, at least in part, by motivational states such as desires or intentions.
In the paper, I don’t argue for non-cognitivism. What I show is that if a certain plausible version of non-cognitivism is true, then judgments about others’ reasons commit us to countenancing reasons of non-interference.
What kind of non-cognitivism? I won’t get into the details here, but the theory is that judgments about what we should do–the judgments that conclude practical deliberation–consist in (what I call) decisions, and that judgments about reasons consist in states of “weighing considerations in favor” of decisions. I defend this view at length in a different paper, “Deciding for Others: An Expressivist Theory of Normative Judgment”.
The argument for my main claim—that my judgment that you have a reason to do something commits me to judging that I have reason not to interfere—is a bit too complex to lay out here. But I’ll provide a simplified version, which shows that my judgment that you should do something commits me to not preventing you from doing it. The (simplified) argument is this:
To judge that someone should do something is to endorse that they do it.
To endorse that someone does something is to “sign off” on their doing it, where “signing off” is a primitive non-cognitive attitude.
Signing off on someone doing something commits you to not preventing them from doing it.
Therefore, when you judge that someone should do something, you are committed to not preventing them from doing it.
(In the paper, I use the phrase “deciding” in place of “signing off,” but since the former already has a non-technical meaning, I’ll adopt the latter here for clarity.) The argument is valid; the question is whether the premises are true.
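Since the argument is a simple chain of implications, its validity can even be checked mechanically. Here is a minimal sketch in Lean, treating each attitude ascription as a bare proposition (the labels are my shorthand, and the formalization abstracts from everything philosophically contentious):

```lean
-- The simplified argument as a chain of implications (my labels).
example (JudgeShould Endorse SignOff CommittedNotPrevent : Prop)
    (p1 : JudgeShould → Endorse)              -- premise 1
    (p2 : Endorse → SignOff)                  -- premise 2
    (p3 : SignOff → CommittedNotPrevent)      -- premise 3
    : JudgeShould → CommittedNotPrevent :=    -- conclusion
  fun h => p3 (p2 (p1 h))
```

The proof is just premise-chaining, which is why all the action lies in the premises themselves.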
The first premise should be uncontroversial, assuming we interpret “endorse” broadly enough. When we judge that someone should do something, we think they ought to do it. This is a way of being in favor of their doing it, which is to say, it is a way of endorsing that course of action.
The second premise is more controversial. It says that to endorse that someone does something is to “sign off” on their doing it, where “signing off” is a primitive non-cognitive attitude. What could this possibly mean?
Suppose we’re deciding what to order at a restaurant, and you’ve asked for my advice. I say, “You should get the burrito”. In saying this, it seems, I sign off on your ordering the burrito. If you sign off on an action when you are deliberating about what to do, you will, if you’re rational, form an intention to do what you signed off on. And similarly, if I sign off on your action when I am deliberating for you, my decision functions well if it leads you to form a corresponding intention–in this case, an intention to get the burrito–just as the point of a project manager’s signing off on an employee’s proposal is to cause its implementation.
On this view, when I judge “you should X”, I’m not forming a belief about what you have most reason to do (despite what the surface grammar may suggest). I’m signing off on your doing X–expressing a non-cognitive state whose function is to lead you to form an intention to do X.
The third premise is that signing off on someone’s doing X commits you to not preventing them from doing it. Of all the premises in the argument, this is the most controversial. Someone who agrees with me that third-personal judgments are decision-like states with others’ actions as their object may deny that my decisions about what you are to do commit me to leaving you alone.
But this premise is also intuitive. Consider a project manager who signs off on an employee’s doing something. She says: “You should implement the proposal”. (Assume she is entirely sincere.) The project manager, in making this judgment, is giving her endorsement of the employee’s proposal. If she then went on to prevent him from implementing it, the employee would be puzzled. He would say: “I thought you said I should implement it”. The project manager who endorses and then prevents seems to be in “disagreement with herself”.
This means that judgments about what others should do are not beliefs about agent-relative reasons, beliefs that would carry no commitments about our own reasons. They consist in states of “signing off”, which commit us to not preventing what we signed off on. This is a significant result!
To review: Judgments about what others should do constitute a kind of endorsement. Endorsement amounts to a primitive non-cognitive attitude of signing off. But signing off has a special property: signing off on someone’s action commits us to not interfering with their doing it.
V.
The more complex version of the argument has more premises, but the essential structure is the same. It shows that if judgments about reasons consist in a non-cognitive state of “weighing in favor”, my judgment that you have a reason to do something commits me to the judgment that I have a reason to not prevent you from doing it.
If my argument works, the result is profound. It shows that whenever I make a judgment about your reasons, I am committed, simply by norms of structural coherence, to seeing myself as having reasons to stay out of your way. Hence, this commitment is independent of my own interests or antecedent sentiments of sympathy or benevolence.
Recall the original example: you can do something that will greatly benefit you, but that would seriously hurt someone else, such as hiring a hitman to kill your uncle. We asked: Do you have any reason not to do it–a reason deriving simply from the harm the act will cause?
If the view I defend is correct, the answer is “yes”. Your uncle has reasons to carry on with his life and to do all sorts of things that will be good for him. You recognize these reasons. So (given my argument) you must recognize a reason not to prevent him from doing what he has reason to do. But that is just to say: you must recognize a reason not to kill your uncle that derives entirely from the harm this would do to him.
Of course, many questions arise. Is the publicity thesis really true? How plausible is the non-cognitivist view? Are the coherence norms that I claim apply to these attitudes genuine? Objections can be made to nearly every step in the reasoning. I’m still thinking through many of them.
What I hope people will appreciate, however, is the ambition of the project. The paper isn't motivated just by an interest in non-cognitivism and its technical implications. It's motivated by the deep philosophical problem described at the start—the problem of explaining why we are not normative loners, each with his or her own private store of reasons, but rather sources of reasons for everyone who is capable of taking reasons into account.
Essential References:
Korsgaard, Christine M. (1996). The Sources of Normativity. Edited by Onora O’Neill. New York: Cambridge University Press.
Nagel, Thomas (1970). The Possibility of Altruism. Oxford: Clarendon Press.
Wallace, R. Jay (2009). “The Publicity of Reasons”. Philosophical Perspectives 23 (1): 471–497.