On “tell” and “count” - the words historically actually mean the same thing. The English words “tell”, “tale”, and “tally” are all cognate with the German words “Zahl” (number) and “zahlen” (to pay) and “zählen” (to count) and “erzählen” (to tell). The English word “count” is cognate to the French word “conte” (story). And of course in English you can recount a tale just as you can count a tally.
My mind is BLOWN.
In addition: to score is to mark, to scratch, to cut. The score is what and when you (or one) decide to make that mark, and this becomes the account you hope to tell when reckoning up the tally you've told yourself, hoping to make an argument where others have failed.
Positive responsiveness seems to me to be the key. One worry I have about giving up positive responsiveness is that it leaves it unclear why I should save Bob or Alice or Carol if I also have the option of saving no one.
Great point, and it's weird that Taurek says so little about saving somebody rather than nobody.
To give the relevant thesis a name:
STRONG PARETO: Doing A is better than doing B if A's better for someone and at least as good for everyone else.
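To make the quantifiers explicit (my formalization, not anything from the thread), writing $A \succ_i B$ for "A is better than B for person $i$" and $A \succeq_i B$ for "at least as good":

```latex
% Strong Pareto, quantifiers spelled out (my notation):
\text{Strong Pareto:} \quad
  \big(\exists i:\ A \succ_i B\big) \;\wedge\; \big(\forall j:\ A \succeq_j B\big)
  \;\Longrightarrow\; A \succ B
```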
There's actually a bit of a debate in the literature about whether Taurek accepts Strong Pareto. Some say he rejects it, because he holds a "libertarian" view that you don't have to save anybody (using your own stuff). But I think of Strong Pareto as a complete no-brainer, and thankfully there's some decent evidence that Taurek can (and did) accept it.
In addition to some suggestive textual evidence, there's also some testimonial evidence in favor of this interpretation. Kamm reports (in an endnote in Morality, Mortality 1) that Taurek once told her he accepts Strong Pareto. This may give you some sense of how starved we are for clues about what Taurek thought!
PS - as far as I know, the most elaborate discussion of this issue is in footnote 2 of my Ergo paper, "The Many, the Few, and the Nature of Value." Here's the note + a link:
Taurek’s paper does not discuss Pareto, but there are several reasons for attributing the principle to him. First, he is reported to have endorsed Pareto in conversation (Kamm 1993: Chapter 5, n. 12), and his defenders are happy to go along with the reports (see, e.g., Lübbe 2008: 69). Second, Pareto enjoys bipartisan support, being accepted both by Taurek’s nemeses (e.g., Kavka 1979: 292) and allies (e.g., Lübbe 2008). Finally, as I argue in §4, there is a way to derive Pareto from a deeper part of Taurek’s ethics: his concern for people as individuals. (If I am indifferent between saving A & B, on the one hand, and saving only A on the other, that betrays a lack of concern for the welfare of a particular individual: B.) My sincere thanks to an anonymous referee for pressing me to say more about Taurek and Pareto.
https://journals.publishing.umich.edu/ergo/article/id/2260/
...and, of course, Strong Pareto doesn't rule out individual vetoes, so it can give us a reason for saving Alice (over saving nobody) while failing to give us a reason to save Alice + Bob over Carol.
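A toy sketch of that point (my own encoding, purely illustrative: welfare profiles over (Alice, Bob, Carol), with 1 for "saved" and 0 for "not saved"):

```python
# Hypothetical welfare profiles for (Alice, Bob, Carol): 1 = saved, 0 = not.
save_nobody    = (0, 0, 0)
save_alice     = (1, 0, 0)
save_alice_bob = (1, 1, 0)
save_carol     = (0, 0, 1)

def strong_pareto_better(a, b):
    """A is better than B if A is at least as good for everyone
    and strictly better for someone."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Strong Pareto gives a reason to save Alice rather than nobody...
assert strong_pareto_better(save_alice, save_nobody)
# ...but is silent between saving Alice + Bob and saving Carol,
# since each option is worse for somebody:
assert not strong_pareto_better(save_alice_bob, save_carol)
assert not strong_pareto_better(save_carol, save_alice_bob)
```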
Daniel, what makes you think that Strong Pareto is a no-brainer? As someone who is interested in Strong Pareto style principles in a number of different contexts, it seems like something we should want arguments for. Do we have any? I think maybe Brian Hedden has been trying to offer some, but he also points out that there are conspicuously few people who offer arguments in its favor!
It’s hard to argue for!
In “Parity and Pareto,” Brian does a valiant job arguing for Strong Pareto for dimensions of value (rather than individual welfares), but I’m not yet sure how convincing the argument is. In “Dimensions of Value,” Brian and I say that Strong Pareto is true by definition. If being better with respect to D doesn’t make you better overall, in what sense is D even a dimension of value? (Perhaps it’s a “dimension” in some thin sense—a characteristic of a thing that bears on its value, just as duration is a characteristic of a pleasure and size is a characteristic of a population.) And as you know we also try to swat away a few counterexamples by showing how you can be clever in how you define your dimensions. But none of this quite amounts to an argument. It just sort of clears the way for Strong Pareto for dimensions of value.
I will say that I’m not convinced by Strong Pareto for *individual welfare* (rather than dimensions of value in general). Possibly, A is better for someone than B and at least as good for everyone else, but A might involve some other kind of badness, like a rights violation, in which case A is actually worse than B.
But what do you think? I know you’ve got some views on multidimensionality, and not just in value theory. (Link for those who haven’t read Justin’s paper.)
https://philarchive.org/rec/DAMMAN
In my relevant papers on this sort of thing (https://philpapers.org/rec/EASXNM, https://philpapers.org/rec/EASDTW), I actually only endorse Weak Pareto, because in infinite contexts, Strong Pareto turns out to contradict some of the principles that I know how to work with. But I do want to find ways to weaken those other principles slightly so that I can have Strong Pareto (because otherwise I get the crazy conclusion that it doesn't matter whether I cause a genocide, or a generation of world peace, because it only affects finitely many people).
But I do also think everything comes down to individual welfare.
Thank you!! I’ve only just started reading your stuff in this area, so I appreciate the references.
PS, have you seen the new Nebel paper?
https://philpapers.org/rec/NEBIEA
Oh, not yet, but it looks like I should! I was reading his Ethics Without Numbers recently and realized that although the title suggested to me he was thinking my way, there are ways in which he's still much more of a realist about values and numbers than I am.
I guess Strong Pareto doesn’t seem as obvious to me as it does to many people, although I agree it is most plausible in the case of value. But consider the following kind of counterexample, which I think is exemplified by a number of multidimensional concepts (including, e.g., consciousness).
Suppose that in order for A to have a positive degree of some quantity overall, it needs positive degrees along two dimensions. For instance, maybe having some degree of consciousness requires both awareness and valence. Without positive degrees along those two underlying dimensions, there will be no positive degree of consciousness at all. In this case, Strong Pareto fails, although Weak Pareto still holds.
The adjective “valuable” sometimes behaves in this way. Suppose that in order for an artifact to be valuable, it has to be both sufficiently rare and sufficiently beautiful. Being very rare on its own will never make the artifact valuable, nor will being very beautiful on its own. It seems like rarity and beauty may be dimensions of the artifact’s overall value, but without a sufficient degree of both, the artifact will never be valuable overall. But this seems to indicate that there will be a point on the scale of overall value where Strong Pareto fails.
More generally and perhaps simply, suppose we have two “on-off” or 1-0 dimensions, and our aggregation rule is just multiplication/conjunction. That would also violate Strong Pareto.
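A minimal sketch of that conjunctive case (my illustration; the `overall` function and the 0/1 dimensions are hypothetical):

```python
# Aggregate two on-off (0/1) dimensions by multiplication/conjunction.
def overall(d1, d2):
    return d1 * d2

# (1, 0) is strictly better than (0, 0) along the first dimension and at
# least as good along the second, so Strong Pareto would demand a strict
# improvement overall. But both aggregate to 0:
assert overall(1, 0) == overall(0, 0)  # Strong Pareto fails

# Weak Pareto (strictly better along *every* dimension) still holds here:
assert overall(1, 1) > overall(0, 0)
```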
Are there good reasons to think that there can’t be dimensions of that sort? Or these kinds of conjunctive aggregation rules?
I myself think that “dimension” likely doesn’t admit of an analysis, and there are different concepts of “dimension” hanging around (see Kenny Easwaran’s post on pragmatic conceptual analysis).
I should point out that I’m mostly suggesting such counterexamples for concepts other than “value.” But I do think that this means that SP can’t be true by definition, unless its application is restricted.
But I also admit to being very uncertain about all of this!!
This is the most interesting counterexample to Strong Pareto I’ve ever heard!
I like this. There's something very... uncomfortable about how our moral dilemmas seem to hinge on trading lives. Like, my gut tells me that as much as possible we should just never put a value on a human life. Even to say that two lives are worth more than one.
I wonder if sometimes we seek a theory mainly to get rid of our responsibility. "It wasn't my decision, it was just maths." Like villains in films saying it's nothing personal.
It’s easy to think “5 > 1, therefore save the many.” But like you, I feel the pull towards thinking that lives aren’t so easily swapped out. At the very minimum, there’s got to be some painful moral remainder!
I think a defence of positive responsiveness can be grounded in equality. If we endorse a veto rule, we're endorsing the position that the welfare of an individual who happens to oppose the majority decision becomes a dominant concern. That amounts to treating the welfare of those opposed to majority decisions as more important, in virtue of their opposition, than the welfare of those who aren't. But I don't think an individual's welfare becomes more important in these cases, so I don't think we're justified in giving it that extra weight.
I think a second defence based in equality could use the ex ante Pareto principle as a way of deciding what the decision rule should be in particular cases. This requires making a positive case that ex ante Pareto is a uniquely good way of deciding on decision rules if one accepts equal consideration of interests, and that seems pretty doable to me.
But notice that you’re *assuming as a premise* that the majority, if counted equally, would count for more than the minority. That seems to me like precisely what’s at issue!
To put it another way, why isn’t Anonymity enough to guarantee equality? If your answer assumes that more count for more, then you’re not deriving number counting from equality. You’re assuming the numbers count, and then you’re asking what equality requires *against that background*.