On “tell” and “count” - the words historically actually mean the same thing. The English words “tell”, “tale”, and “tally” are all cognate with the German words “Zahl” (number) and “zahlen” (to pay) and “zählen” (to count) and “erzählen” (to tell). The English word “count” is cognate to the French word “conte” (story). And of course in English you can recount a tale just as you can count a tally.
My mind is BLOWN.
In addition: to score is to mark, to scratch, to cut. The score is also what and when you (or one) decide to make that mark, and this becomes the account you hope to tell when reckoning up the tally you've told yourself, hoping to make an argument where others have failed.
Positive responsiveness seems to me to be the key. One worry I have about giving up positive responsiveness is that it leaves it unclear why I should save Bob or Alice or Carol if I also have the option of saving no one.
Great point, and it's weird that Taurek says so little about saving somebody rather than nobody.
To give the relevant thesis a name:
STRONG PARETO: Doing A is better than doing B if A's better for someone and at least as good for everyone else.
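For readers who like it symbolic, here is one standard social-choice-style formalization (my gloss, not Taurek's or the paper's own notation), writing $\succ$ for "better than overall", $\succ_i$ for "better for person $i$", and $\succsim_i$ for "at least as good for person $i$":

$$A \succ B \quad \text{whenever} \quad \exists i \,(A \succ_i B) \;\text{ and }\; \forall j \,(A \succsim_j B).$$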
There's actually a bit of a debate in the literature about whether Taurek accepts Strong Pareto. Some say he rejects it, because he holds a "libertarian" view that you don't have to save anybody (using your own stuff). But I think of Strong Pareto as a complete no-brainer, and thankfully there's some decent evidence that Taurek can (and did) accept it.
In addition to some suggestive textual evidence, there's also some testimonial evidence in favor of this interpretation. Kamm reports (in an endnote in Morality, Mortality 1) that Taurek once told her he accepts Strong Pareto. This may give you some sense of how starved we are for clues about what Taurek thought!
PS - as far as I know, the most elaborate discussion of this issue is in footnote 2 of my Ergo paper, "The Many, the Few, and the Nature of Value." Here's the note + a link:
Taurek’s paper does not discuss Pareto, but there are several reasons for attributing the principle to him. First, he is reported to have endorsed Pareto in conversation (Kamm 1993: Chapter 5, n. 12), and his defenders are happy to go along with the reports (see, e.g., Lübbe 2008: 69). Second, Pareto enjoys bipartisan support, being accepted both by Taurek’s nemeses (e.g., Kavka 1979: 292) and allies (e.g., Lübbe 2008). Finally, as I argue in §4, there is a way to derive Pareto from a deeper part of Taurek’s ethics: his concern for people as individuals. (If I am indifferent between saving A & B, on the one hand, and saving only A on the other, that betrays a lack of concern for the welfare of a particular individual: B.) My sincere thanks to an anonymous referee for pressing me to say more about Taurek and Pareto.
https://journals.publishing.umich.edu/ergo/article/id/2260/
...and, of course, Strong Pareto doesn't rule out individual vetoes, so it can give us a reason for saving Alice (over saving nobody) while failing to give us a reason to save Alice + Bob over Carol.
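A toy illustration of that last point (my own example, nothing from the paper): if the overall "better than" relation $\succ$ holds only where Strong Pareto forces it to, then

$$\text{save Alice} \;\succ\; \text{save no one} \quad (\text{Alice strictly better off; Bob and Carol no worse off}),$$

while

$$\text{save Alice and Bob} \;\not\succ\; \text{save Carol} \quad\text{and}\quad \text{save Carol} \;\not\succ\; \text{save Alice and Bob},$$

since Carol is worse off under the first option and Alice and Bob are worse off under the second. So the principle gives a reason to save someone rather than no one without settling the many-versus-few case.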
I like this. There's something very... uncomfortable about how our moral dilemmas seem to hinge on trading lives. Like, my gut tells me that as much as possible we should just never put a value on a human life. Even to say that two lives are worth more than one.
I wonder if sometimes we seek a theory mainly to get rid of our responsibility. "It wasn't my decision, it was just maths." Like villains in films saying it's nothing personal.
It’s easy to think “5 > 1, therefore save the many.” But like you, I feel the pull towards thinking that lives aren’t so easily swapped out. At the very minimum, there’s got to be some painful moral remainder!