
Friday, May 25, 2018

What We Demand Of Each Other

In the last post I was thinking about Hipparchia's paradox. Hipparchia was a cynic philosopher who lived in Athens in the 4th and 3rd centuries BCE, and she posed the puzzle of why it wasn't OK for her to hit a guy called Theodorus, even though it would have been OK for him to hit himself and morality is supposed to be universalizable. I did try discussing the puzzle a bit, but what I mostly wanted was for moral philosophers to take the puzzle more seriously, to work out how their moral theories can accommodate it, and to start calling it 'Hipparchia's paradox'. I'm not really the kind of person I was hoping would think about it more, but I've been thinking about it a bit more anyway.

I mentioned that we can try resolving the paradox by saying that people were allowed to waive consideration of negative consequences to themselves in the moral evaluation of their own actions. And I worried that if these moral waivers are a thing, then there might be other kinds of moral waivers, and our final theory might end up looking unrecognizable as consequentialism. Some people will be fine with that, of course, but I've long been a bit of a fellow-traveller of (impartial, agent-neutral) consequentialism, and consequentialists (especially impartial agent-neutral ones) are probably the people Hipparchia's paradox is most of a puzzle for.

So, I've been thinking some more about these waivers, and what I'm thinking is that the reason they feel kind of scary is that they're an example of voluntarism[1]. Voluntarism is the idea that what's right and wrong is fixed in some special way by someone's will. What counts as the will and what counts as the relevantly special way is a bit up for grabs, and some of the disputes over voluntarism will be verbal. But it's not all verbal, and a certain kind of moral philosopher should be scared of voluntarism. A classic version of voluntarism is divine command theory, which is sometimes called theological voluntarism. Divine command theorists say this sort of thing:

  • When something is wrong, it's because it goes against God's will.
  • When something is wrong, it's because God has decreed that it's wrong.

Divine command theory isn't all that popular among moral philosophers nowadays, although it had a pretty good run with them in the middle ages, and it's still alive and well in the moral thinking of some religious people. Its detractors often view it as getting things backwards. Things aren't wrong because they go against God's will; God wants us not to do them because they're wrong. Similarly, when God says something's wrong, that's because it is wrong, not the other way round. This problem is called the Euthyphro problem, after the dialogue Plato wrote about it. The Euthyphro problem isn't just about divine command theory though; it applies in some way to all versions of voluntarism. People don't make things wrong by wanting them not to be done; they want them not to be done because they're wrong, or at least they should. The worry is that anyone adopting a version of voluntarism is taking the wrong side in the Euthyphro problem. That sounds like bad news for the waiver response to Hipparchia's paradox.

Nonetheless, I think it might be worth giving voluntarism another look, at least in the form of these waivers. There are two reasons. First, Hipparchia's paradox does provide a direct argument for waivers. Second, there's a big difference between God being the boss of us and us being the boss of us, or even better, the people our actions have an impact on being the boss of us. Arguments against divine voluntarism may well not carry over to this more worldly form of voluntarism. So now here's the next question: if waivers are a thing, which waivers are a thing? In the previous post I made a list of questions about possible waivers, and I'll repeat that list here, with a bit of commentary explaining why I thought they were worth asking.

  • Can I waive consideration of consequences to myself in the moral evaluation of someone else's actions?

The idea there is that if I volunteer to take one for the team, then it isn't wrong for the team to go along with that. Suppose you want to go to a party which will be pretty good, and I want to go to a different party which will be very good, but one of us has to stay home and wait for the plumber to come and fix the toilet. (We'll assume we'd both enjoy staying home equally.) What I'm suggesting is that if I volunteer to stay home, you don't wrong me in going along with this, even though I would probably enjoy my party more than you would enjoy yours. Now, you might disagree with this assessment of the situation. But the point is quite similar to Hipparchia's paradox: just as Theodorus is allowed to hit himself, people are allowed to sacrifice their own interests for others, even if the sacrifice is greater than the benefit. And even if they can't make the sacrifice without the co-operation of the beneficiaries, the beneficiaries don't do anything wrong in co-operating. If the person making the sacrifice says it's OK, then it's OK. (I'm not saying this is right, but this is the thinking behind the question.)

  • Can I waive consideration of some but not all negative consequences to myself?

I'm not sure I expressed this one as clearly as I could have, but here's what I'm thinking. Maybe it's OK for me to waive consideration of minor things, but not major things. Or maybe it's OK for me to waive consideration of forgone pleasures, but not of positive harms. I won't go into the details of what sorts of things I might not be morally allowed to do to myself, or other people might be wrong to do to me even with my permission, but there's a reasonably venerable tradition of thinking that there is such a distinction to be made. But if there is, you have to wonder what its basis might be.

  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?

Nobody is an island, and often if something bad happens to Theodorus, he's not the only person who suffers. If Theodorus hits himself, this might upset his friends, and maybe it's wrong because of that. I think there's a fair bit of pressure from common-sense morality to say that Theodorus hitting himself is nobody's business but his own, and if it bothers his friends then he's entitled to waive that fact from the moral evaluation of his action. There are probably limits to what common-sense morality permits along these lines, and maybe I'm getting common-sense morality wrong. But even if I'm not getting it wrong, I'm not really sure how this dynamic is supposed to work. One possibility is that waiving the harm Theodorus does you by hitting himself is partly constitutive of the very relationship in virtue of which Theodorus hitting himself harms you. While I do think this idea has some superficial appeal, I fear its appeal may be only superficial. But perhaps there's the germ of something workable in there.

  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?

This is an issue about universalizability and fairness. How arbitrary am I allowed to be in dishing out permissions? One possibility is that we have a lot of latitude about what permissions we can give, but a lot less latitude about what permissions we should give. But I expect we also probably have a fair bit of latitude with the ones we should give, because these permissions are bound up with personal relationships, and we don't have personal relationships with everyone. In particular, waivers in personal relationships might often be part of a mutually beneficial reciprocal arrangement. Being morally in the wrong is bad for you, and personal relationships are difficult, and provided you're both trying hard it might be better not to be morally in the wrong every time you mess up. These waivers probably shouldn't have to be blanket waivers: a certain amount of mutual pre-emptive forgiveness doesn't make it impossible for you to wrong each other.

  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?

Part of the issue here is the nobody-is-an-island problem I discussed a couple of questions ago. But the issue also arises in the case of children, and other people who have someone else responsible for their welfare to an extent. It may also arise with God. I think it's quite possible that there just aren't any exceptions of this kind. You're allowed to take one for the team, but you're not allowed to have your children take one for the team. But here's an example I've been thinking about a bit. Suppose that you and I are doing a high-stakes pub quiz together, and we win a family trip to Disneyland. A reasonably fair thing to do would be for us to auction half of the trip between us and have the higher bidder pay the lower bidder for the lower bidder's half. But suppose I just tell you to go ahead and enjoy yourself. My family are losing out here as well as me, but somehow it still feels like I've done something nice, rather than robbing my family of the equivalent of half a trip to Disneyland. I think I'd probably end up coming down on the side of saying I'm wrong to give you my half of the trip, although perhaps the matter is complicated by the fact that my family weren't on the team, so it's my prize not theirs. But letting you have the trip does still put my family out. I'm really not sure what I think about this. But I think it's likely people do make this kind of collective sacrifice from time to time, and that they feel like they've done a good thing and not a bad thing.

I think that a moral theory that incorporated these kinds of waivers in a big way might have some mileage in it. There are plenty of worries about it, of course. I'll talk about two.

First, how freely does someone have to be giving these permissions? People make decisions with imperfect information and imperfect rationality, and they also make them under conditions of oppression. It's a common criticism of libertarian capitalism that letting people make whatever contracts they want will lead to a lot of inequality of outcome resulting from unequal bargaining positions. Most countries don't want the economically disempowered bargaining away their kidneys, and maybe we don't want people bargaining away the fact that harming them is wrong. I think some libertarian rhetoric makes it sound as if they think that contracts actually do have this wrongness-nullifying effect, but it's possible they don't say this, and if they do then I'm really not optimistic about them being right. You might be able to imagine idealized situations where the waivers look plausible, but the reality of it might look pretty hideous in some cases. And when you're doing ethics, hideousness detracts from plausibility.

My second worry is that constructing a theory of moral waivers might be joining what I think of as the excuses industry. Impartial consequentialism is notoriously demanding, especially in our interconnected world. But I don't think that should be surprising really: we don't expect it to be easy doing the right thing all the time. A long time ago I wrote about how the supposedly counterintuitive results of impartial consequentialism seemed to me to appeal to either selfishness, squeamishness, or a bit of both. I still feel the pull of that line of thought, and although I'm not really an impartial consequentialist myself, I am as I say a fellow traveller. Some people try to construct theories that don't have these demanding results, but I don't really want to be in the business of constructing theories that are basically elaborate excuses that allow us to live high while other people die. I hope that's not all I'd be doing, and I don't think it's all that other opponents of impartial consequentialism are doing, but I do think it's a trap you have to be careful not to fall into.

With those worries out in the open, I'll sketch the basic outline of the theory I've got in mind. You start with a background of some kind of impartial consequentialism, and then overlay the waivers. Morality might legitimate us making some very heavy demands on each other, but we don't have to actually make these demands. I guess the way it works is these waivers will create a category of supererogatory actions - actions which are good but not obligatory - which impartial consequentialism sometimes struggles to accommodate. If someone's waived a harm it's still better not to cause the harm, but it's not obligatory. I'm imagining the theory as being most distinctive in its treatment of morality within personal relationships. I mentioned earlier that some reciprocal waiving might be a common or even constitutive feature of some relationships. Perhaps it could be extended to involve relationships between people who don't know each other as well or at all, but who are members of the same community. If I was going to think seriously about that then I'd need to learn more about communitarian ethical theories. I'm really not very familiar with how they work, but from what I've heard they sound pretty relevant.
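To make the overlay picture a bit more concrete, here's a toy formalization. Everything in it is my own invention for illustration (the scoring rule, the names, the numbers): waived harms drop out of the permissibility calculation, but they still count when asking whether an action was better than it had to be, which is what generates the supererogatory category.

```python
# A toy rendering of the "impartial background with waivers overlaid"
# picture. The scoring rule and all the numbers are invented, just to
# exhibit the structure: waived harms drop out of the permissibility
# calculation, but still count towards whether an action was better
# than the waivers required it to be.

def classify(options, waivers):
    """options: action name -> {person: utility for that person};
    waivers: the people who have waived negative consequences
    to themselves."""
    raw = {n: sum(c.values()) for n, c in options.items()}
    adjusted = {n: sum(v for p, v in c.items()
                       if not (p in waivers and v < 0))
                for n, c in options.items()}
    best = max(adjusted.values())
    permitted = {n for n in options if adjusted[n] == best}
    floor = min(raw[n] for n in permitted)
    return {n: ('wrong' if n not in permitted else
                'supererogatory' if raw[n] > floor else
                'permissible')
            for n in options}

# Theodorus's choice, where both he and his friend have waived:
options = {
    'refrain':     {'theodorus': 0,  'friend': 0},
    'hit himself': {'theodorus': -5, 'friend': -1},
}
# With both waivers in place, hitting himself comes out permissible,
# and refraining, being better for everyone, comes out supererogatory
# rather than obligatory.
```

Dropping the friend from the waiver set makes the friend's harm survive into the adjusted score, so hitting himself flips to wrong, which is the nobody-is-an-island dynamic from earlier.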

The post a few days ago closed with this argument against consequentialism:

  • Hipparchia's paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.

It's not great, really, and I said so at the time. But let's think about how all this waivers stuff started with Hipparchia's paradox. You could just look at the paradox and say "waivers are in, impartial consequentialism is out", and then merrily start constructing theories with waivers all over the place. I think that would be a mistake. An alternative, which I don't think would be the same kind of mistake, is to look at other candidates for waivers that are somehow similar to the original case. The best case I've got in mind is when a group of people see themselves as somehow on the same side, and so individual team-members' failures aren't moral failures, even though they could have done better and the other members of the team would have been better off. The team has a sufficient unity of purpose that the members view the team as analogous to an individual. Team members don't press moral charges against team members just as Theodorus doesn't press moral charges against himself.

One last thing about waivers is that you might share a lot of the intuitions about the examples but try to incorporate them within a straight impartial consequentialist theory. Maybe people being able to take advantage of the waivers without feeling guilty about it turns out to have the best consequences overall. There's a long tradition of moral philosophers doing this sort of thing. The main trick is to distinguish what it is for an action to be right or wrong from the information we use to decide whether an action is right or wrong. In John Stuart Mill's Utilitarianism he makes this move several times, and in fellow utilitarian RM Hare's book Moral Thinking: Its Levels, Method, and Point those were the sorts of levels he was talking about. (At least if I remember them right.)

I used to be quite impressed with this move. Now I'm not so sure. The reason I've long been a fellow-traveller of impartial consequentialism without properly signing up is that I'm also a bit of an error theorist. I read JL Mackie's Ethics at an impressionable age, and while I'm a lot more sceptical about philosophical conclusions than I used to be, it's still got some pull for me. But maybe the reason it's got all this pull is because I'm thinking about moral facts in terms of what Henry Sidgwick (I think) called 'the point of view of the universe'. On that topic I read something once about the conflict between deontological and consequentialist ethics that stayed with me. The point was that deontologists shouldn't argue that sometimes there are actions we should do even though things will be worse if we do them. To concede that is to concede too much to the consequentialist: it makes things too easy for them if that's the position they have to attack. The consequentialist needs to earn the claim that there's some meaningful way in which consequences, or the whole world, can be good or bad simpliciter rather than just good or bad from the point of view of a person, or a particular system of rules, or perhaps something else. It's true: the consequentialist needs this claim and the deontologist doesn't. It's non-trivial and the deontologist should make the consequentialist earn it. And I don't think they have earned it. I can't remember where I read this, unfortunately. I'd thought it was in a Philippa Foot paper, but I re-read the two papers I thought it might be in (Foot 1972 and 1995), and while re-reading them was rewarding I couldn't find it in either. I still think it's probably her. If you can tell me where it's from, please do so in the comments. [UPDATE 24/6/18: She makes the point in Foot 1985: 'Utilitarianism and the virtues'.]

Anyway, maybe error theory wouldn't have the same pull for me if I got away from the idea of the point of view of the universe and instead thought about morality as being fundamentally about human relationships, collective decision-making and what have you. The levels move takes the manifest image of morality and explains it in terms of something more systematic at a lower level. But this systematic lower level is where the point of view of the universe is, and that's what threatens to turn me into an error theorist. The manifest image is where the human relationships and collective decision-making are, and maybe those aren't so weird. The dilemma arises because the lower level stuff about how good the universe is can seem more plausibly normative, while the higher level stuff about relationships is more plausibly real.

Of course, as things stand with the theory I've got in mind there's still an impartial consequentialist background with the waivers laid on top. The impartial consequentialist background is as weird as ever, and you can't have a moral theory that's all waivers. But maybe this could be a transitional step on the way to me having a more accurate conception of what moral facts are facts about, and perhaps eventually losing interest in moral error theory altogether. That might be nice.

Notes

[1] I'm a little unsure about the terminology here. It's pretty established to call divine command theory 'theological voluntarism', and I'm fairly sure I've seen 'voluntarism' used more generally to include non-theological versions like the waiver theory I'm talking about here. But 'voluntarism' also seems to be used to refer to theories according to which the moral properties of an action depend on the will with which the action was performed. (This idea is important in Kant's ethics.) The two ideas could overlap, but it's not obvious that they have to. So if you've got strong views about what 'voluntarism' means and think I'm using it wrong, then I apologize. And when you're discussing this blogpost with your friends, you should be careful how you use the word. But I don't know another word for the thing I'm talking about, and I think I've heard people calling it 'voluntarism', so that's the call I've made.

References

  • Foot, P. 1972: 'Morality as a system of hypothetical imperatives', Philosophical Review 81 (3):305-316
  • Foot, P. 1985: 'Utilitarianism and the virtues', Mind 94 (374):196-209
  • Foot, P. 1995: 'Does moral subjectivism rest on a mistake?', Oxford Journal of Legal Studies 15 (1):1-14
  • Hare, R. M. 1981: Moral Thinking: Its Levels, Method, and Point (Oxford University Press)
  • Mackie, J. L. 1977: Ethics: Inventing Right and Wrong (Penguin Books)
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224, http://www.gutenberg.org/files/11224/11224-h/11224-h.htm
  • Plato c.399-5 BCE/2008: Euthyphro, trans. Benjamin Jowett, Project Gutenberg ebook #1642, http://www.gutenberg.org/files/1642/1642-h/1642-h.htm

Monday, May 21, 2018

Hipparchia's Paradox

The most famous cynic philosopher was Diogenes of Sinope, who lived in an old wine jar and told Alexander the Great to get out of his light. But he wasn’t the only cynic; there was a whole bunch of them. The second or third most famous cynic was Hipparchia. (The third or second was Crates, Hipparchia’s husband.) Hipparchia doesn’t seem to have written much if anything, as tended to be the way with the cynics, but history has recorded at least one of her arguments, via an anecdote about an exchange she had with some jackass called Theodorus at a party one time. Here’s how Diogenes Laertius (not to be confused with Diogenes the jar-dweller) tells it:
Theodorus, the notorious atheist, was also present [at Lysimachus’s party], and she posed the following sophism to him. ‘Anything Theodorus is allowed, Hipparchia should be allowed to do also. Now if Theodorus hits himself he commits no crime. Neither does Hipparchia do wrong, then, in hitting Theodorus.’ At a loss to refute the argument, Theodorus tried separating her from the source of her brashness, the Cynic double cloak. Hipparchia, however, showed no signs of a woman’s alarm or timidity. Later he quoted at her lines from The Bacchae of Euripides: ‘Is this she who abandoned the web and women’s work?’ ‘Yes,’ Hipparchia promptly came back, ‘it is I. But don’t suppose for a moment that I regret the time I spend improving my mind instead of squatting by a loom.’ [Lives of the Ancient Philosophers 6: 96-8; pp45-6 in Dobbin]
I’ve quoted the context as well as just the argument, the alternative being to quote it out of context. I think it’s pretty clear that Hipparchia is the winner of this story, although it’s possible the reality of the situation was pretty unpleasant for everyone concerned. But having acknowledged the context, I’d like to think a bit about the argument in isolation. Here’s the argument laid out neatly:
  • Anything Theodorus is allowed, Hipparchia should be allowed to do also.
  • If Theodorus hits himself he commits no crime.
  • So neither does Hipparchia do wrong in hitting Theodorus.
The first premise is about universalizability: morality is supposed to apply equally to everyone. It’s a bit less clear what the theoretical basis of the second premise is. It seems like a part of most people’s common sense morality that if someone wants to hit themselves then that’s their own business, and while it might be inadvisable, it isn’t immoral. Common sense morality changes from place to place, but I guess this is part of it that my society has in common with Hipparchia’s. You could explain the truth of the second premise in various ways, some of which will mean qualifying or restricting it, and I think that how exactly we explain it will affect how the paradox gets resolved. The conclusion is meant to be absurd, showing that something is wrong with either the premises or the inference.
I think the most obvious way to try to resolve the paradox is to interpret the permission in the second premise as being explained by a general permission for people to hit themselves, rather than a general permission to hit Theodorus. The action that Theodorus is allowed to do is hitting oneself, not hitting Theodorus. Hipparchia is allowed to do the action hitting oneself too, so universalizability is saved.
There’s a problem with this, though: Theodorus is also allowed to do hitting Theodorus. He’d better be, because if an action is immoral under some description, then it’s immoral. This means there is something he’s allowed to do and Hipparchia isn’t, and so universalizability isn’t saved. Universalizability isn’t the idea that some of morality applies equally to everyone; it’s the idea that all of morality applies equally to everyone. Now, I don’t mean to be disingenuous. I’m not saying that Hipparchia’s paradox shows that universalizability is bunk; I’m just saying there’s more work to do. I don’t think there can be much doubt that it somehow matters that the description of the action as hitting oneself applies to Theodorus’s action and not Hipparchia’s. It just doesn’t resolve the paradox completely, and it’s perhaps more of a restatement of the paradox than anything else. Sometimes a restatement of a paradox is more or less all you need, but in this case I don’t think the restatement is enough.
Here’s another line of attack. Maybe on any given occasion it really is only OK for Theodorus to hit Theodorus if it’s OK for Hipparchia to hit Theodorus. The difference is that occasions when he hits himself will be those rare occasions when he wants to be hit, whereas occasions when she hits him are likely to be occasions when he doesn’t want to be hit. (And also he won’t hit himself harder than he wants to be hit.) This kind of reasoning is behind some anti-paternalist thinking in political philosophy. The classic anti-paternalist work is On Liberty, which was published under John Stuart Mill’s name but was probably coauthored with Harriet Taylor Mill, if you take its dedication literally. (It’s possible the Mills were the greatest philosophical power couple since Hipparchia and Crates. I can’t think of a greater one in the roughly 2150 years between them, although perhaps you can, and perhaps there’s an obvious one I’m missing. [UPDATE: A friend pointed out I forgot Abelard and Heloise.]) They argued that the state shouldn’t be interfering with you if you’re not doing anyone else any harm. Here they are:
The object of this Essay is to assert one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion. That principle is, that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. [On Liberty: p17]
People disagree over how far you can reconcile this with the consequentialism you find in Utilitarianism, but if you’re trying to reconcile them it usually goes roughly as follows. People will do things that have good consequences for themselves, so if their actions don’t have bad consequences for anyone else then they don’t have bad consequences for anyone. Given consequentialism, that means the actions aren’t bad. That means the state shouldn’t be interfering with them. It’s a bit of a Swiss cheese of an argument, and I think it remains so even if you’re properly doing justice to it, but I also think they were on to something important.
A classic example of paternalism is seatbelt laws. Idealizing a bit, the set-up is this: by not wearing a seatbelt you’re not putting anyone at risk but yourself. But by having laws demanding people wear seatbelts, you can save lives. Let’s consider a couple of things a libertarian might have to say about this:
  • “If I value my life so much and my convenience so little that the small chance that wearing a seatbelt will save my life is worth the inconvenience of wearing one, then I will wear a seatbelt.”
  • “The only person who stands to get hurt here is me, and I’m fine with it. Mind your own business.”
The first is a simple consequentialist argument: we don’t have to worry about people not wearing seatbelts in situations where the expected consequences are negative. (It also takes the relative value of someone’s life and convenience to be the relative value they themselves assign to them, but maybe that’s not so silly at least in the case of most adults.) The second libertarian response is harder to categorize. It can still be made out as consequentialist in a way, but it says that people are allowed to waive consideration of negative consequences to themselves. The first objection, where it applies, flows straightforwardly from a simple consequentialism that says the right thing to do is the thing with the best consequences. The second applies more generally, but it says that sometimes it’s OK to do the thing that doesn’t have the best consequences. If we’re allowing people to waive consideration of consequences to themselves in the moral evaluation of their own actions, this raises questions about what other kinds of waivers are allowed:
  • Can I waive consideration of consequences to myself in the moral evaluation of someone else’s actions?
  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?
  • Can I waive consideration of some but not all negative consequences to myself?
  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?
  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?
None of these seem to me like they have obvious answers, with the possible exception of the last one, even if we grant that people can waive consideration of harm to themselves in the moral evaluation of their own actions. I expect some readers will think some of the answers are fairly obvious (and that the last one is obviously obvious), or will at least have views on some of the questions, perhaps based on the literatures which presumably exist on each of them. To be clear, I’m not saying that a consequentialism with a self-sacrifice caveat can’t be made coherent. You could say that an action is permissible iff it either maximizes expected utility or has an expected utility for other people at least as high as the expected utility for other people of some permissible action. That seems to get the right results. The problem I have is that if waivers are a thing, then there are other waivers we might want to include in our theory as well, and after a while our theory might end up not looking much like consequentialism at all.
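For what it's worth, the self-sacrifice caveat can be turned into a little computation. This is just my own sketch with invented numbers: each action gets a pair of expected utilities, one for the agent and one for everyone else, and the permissible set is built up from the utility-maximizing actions by closing under the caveat.

```python
def permissible_actions(actions):
    """actions maps an action name to a pair
    (expected utility for the agent, expected utility for others).
    An action is permissible iff it maximizes total expected utility,
    or its expected utility for other people is at least as high as
    that of some permissible action."""
    total = {n: s + o for n, (s, o) in actions.items()}
    best = max(total.values())
    permitted = {n for n in actions if total[n] == best}
    # Close the set under the self-sacrifice clause.
    changed = True
    while changed:
        changed = False
        threshold = min(actions[n][1] for n in permitted)
        for n, (s, o) in actions.items():
            if n not in permitted and o >= threshold:
                permitted.add(n)
                changed = True
    return permitted

# Hipparchia's situation, with invented numbers:
options = {
    'leave Theodorus alone': (0, 0),
    'hit myself':            (-5, 0),   # a harm only I bear
    'hit Theodorus':         (0, -5),   # a harm someone else bears
}
# Leaving him alone and hitting myself both come out permissible;
# hitting Theodorus does not.
```

That does seem to get the right results in the original case: self-directed harm is permitted, other-directed harm isn't.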
One way to avoid these questions is to deny that people can waive consideration of themselves in the first place. But then Hipparchia’s paradox comes back, at least a little. The problem with this simple consequentialist response to the paradox is that people don’t always do what’s best for them. Unless we supplement the response somehow, it will mean that whenever Theodorus hits himself and it isn’t what’s best for him, he is doing something wrong after all. (At least when he had enough information to work out that it probably wouldn’t be best for him.) Is this what we want to say?
I can sort of see how some people might want to bite this bullet. If you’re an agent-neutral consequentialist, then you think that the only information relevant to whether an action is wrong or not is how good its consequences are. Who did the action isn’t relevant. So this kind of consequentialist should say that Theodorus hitting himself really is immoral whenever it’s inadvisable. If someone gets on their high horse with you about how you’re not doing what’s best for yourself, they actually do have the moral high ground. Perhaps this is right. But it’s weird.
I don’t really feel like I’ve got very far with this. But my main aim was to present the argument as something worth thinking about, because I do think it’s worth thinking about. I’ll close by presenting another argument, which is also a Swiss cheese of an argument, but which I’m also worried might be on to something.
  • Hipparchia’s paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.
References
  • Dobbin, R. 2012: Anecdotes of the Cynics, selected and translated by Robert Dobbin (Penguin Random House)
  • Mill, J. S. 1859/2011: On Liberty, Project Gutenberg ebook #34901, http://www.gutenberg.org/files/34901/34901-h/34901-h.htm
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224, http://www.gutenberg.org/files/11224/11224-h/11224-h.htm