Wednesday, February 15, 2017

We Are Not Agreed

A few days ago University Ventures authored a piece in response to a post from the New America Foundation comparing Republicans who defend for-profit colleges to climate change deniers. The unattributed University Ventures article argues "this piece re-fights yesterday’s war... the many challenges and opportunities facing higher education lend themselves to bipartisan consensus – perhaps more than any other area of public policy."

Bipartisanship is of course a U.S. phenomenon. But it is worth noting that there are many things U.S. lawmakers agree upon that are opposed in corners around the world. I find myself frequently occupying those corners, and today is no exception. So, setting aside the for-profit colleges debate for another day, I'd like to take the time to point out where I disagree with what is taken to be the emerging consensus.

The text in italics is their contention; what follows is my response.

1. Completion is the most powerful lever

The author makes the very reasonable point that "drop rates approaching 50% at many four-year institutions and 80% at many two-year colleges" represent a failure of traditional post-secondary institutions, and responds that "there’s no 'free college' silver bullet to the complex completion challenge."

But completion is not the powerful lever that drives everything else; it is the pointless anchor that weighs everything down. It is becoming increasingly untenable to stop everything in your life to complete four or eight years of studies, especially when the mechanisms for delivering that education are increasingly inefficient and expensive.

Indeed, completion would be irrelevant as a lever were it not for the mechanism of granting recognition only at the end of the four- or eight-year program. To be sure, students value those degrees and diplomas. They have no choice; there's no other way to earn recognition for their learning.

As recognition becomes more distributed, however, we will see other more fundamental levers emerge: the requirement that learning be relevant, that it help us solve problems, that it support networking and collaboration, and that it respect our personal interests and abilities.

2. Bachelor’s degree “addiction” is hurting students

The author argues that "it takes a Candide-like idealist to continue to insist that a bachelor's degree is the optimal or only path to establishing the core cognitive and non-cognitive executive function skills that lead to successful white collar careers."

The disagreement here is not about whether we should question the relevance of the Bachelor's degree. It is actually rather more subtle than that.

First, this second point can be seen as code for "we need to restrict the number of people admitted into Bachelor's programs," with the idea that alternative schools - trade schools, business skills schools, etc. - would emerge to pick up the slack. We see this in the allusion to "successful white collar careers", which already assumes the separation between advanced education and trades. The idea here is that white collar workers are the new tradespeople. But returning universities to their original position of offering education to the elite is not what I would consider a progressive step forward.

Second, this point continues to carry the presumption that the point of education is to lead you step by step toward a future career. We see this in phrasing like "optimal or only path". The presumption that education amounts to preparation should be challenged. This may be one function of education, but it is not the only one, nor even the most important one. There's no end to the stories about students being 'prepared' for a future that no longer exists. Education should be addressed toward capability, not preparation.

3. Colleges need to do much more to help graduates get great jobs

The author's point here is that colleges and universities "must ensure students are equipped with the technical skills employers increasingly require for entry-level positions." The idea of colleges and universities being preparation for employment, whether for one's first job or eventual career, is anathema to many. From my perspective it's not a matter of faith but of common sense: chasing after "what employers want" is a mug's game you can never win, and is increasingly irrelevant in a world where you make your own employment.

First of all, if employers want certain outcomes from our education system, then why don't they pay for that themselves, rather than requiring governments and students to pay for it?

Second, if employers want certain outcomes from the system, why don't they hire on that basis, rather than on (among other things) college pedigree, connections and friends, biases and stereotypes, proximity, and willingness to work for lower wages?

Third, employers lobby for certain outcomes from the education system - computing science grads, nursing students, engineers, etc. - in order to drive down labour costs. Why should any of us support a mechanism that actually reduces our negotiating position in the marketplace? 

What about that New America survey showing that the reason students enroll is "to improve employment opportunities (91%); to make more money (90%); and to get a good job (89%)"? When you read the survey, you find it is "an online survey of 1,011 U.S. residents ages 16-40, who were largely prospective college students." So this reflects the sales pitch, but does it reflect the reality?

Colleges and universities - indeed, all of education - should help students become self-sufficient. That's what the elite programs do. That's what they should all do.

4. Employers bear much of the blame

We can certainly agree with the author that "Opaque Applicant Tracking Systems and imprecise job descriptions have turned getting in front of a human hiring manager into a 'rigged' game." And "campus-based recruitment at a select number of schools" merely reinforces this perception. Employers (and banks, and venture capitalists) aren't looking for qualifications in new employees; they know that the right person can always adapt to the needs of the position, especially entry-level positions. They are looking for the right pedigree.

That's why the proposed 'agreed upon' solution won't work, and indeed distracts from the core issues. Here's what the author suggests: "utilizing new People Analytics technologies to identify competencies that are predictive of success, incorporating these skills into job descriptions, and proactively searching among passive job seekers and current students will become a competitive advantage for farsighted employers." Nonsense.

If it accomplishes anything at all, identifying competencies will fit only short-term positions for specific tasks. As a mechanism for long-term employment and career-readiness, competencies will prove to be an unmitigated failure. Employers will care about only a few very general core competencies: can they speak and write reasonably well (and without an accent), do they know the jargon of the field, can they work with other people (and especially our team), do they dress well and bathe themselves, have they done this kind of thing before, are they connected?

Should it be this way? Of course not. I too would like to see "a shift from degree- and pedigree-based hiring to competency-based hiring... while also increasing workforce diversity." But changing the way we educate people won't accomplish this result. Much broader social changes are needed, not just in the U.S. (where they are engaged in a political struggle over this point) but also around the world.

5. Accountability shouldn’t start and end with for-profit colleges

The author argues that "if we can agree on desired and measurable outcomes... while for-profit schools may need to be held to a higher standard given the potential for abuse, there’s zero logic in letting traditional colleges off the hook entirely." This is based on the dubious premise that traditional colleges ever were "off the hook", which is demonstrably false. In the U.S., there are numerous federal, regional and occupational accreditation bodies. In Canada, colleges and universities are accredited by provincial governments. Other countries have similar requirements.

What the author's argument glosses over is that the existing set of regulatory bodies hasn't been nearly enough to contain corruption in the private sector. The profit motive in education (as in health, as in justice, as in government...) creates an incentive for dishonesty that doesn't exist in an environment where dishonesty doesn't provide financial rewards.

Nor is accountability itself any guarantee of appropriate behaviour. The U.S. is one of the most regulated economies in the world, yet conflicts of interest convert much of that regulation into tools to protect existing markets and to cater to specific lobbies and entrenched interests. I just referenced an article the other day showing how pizza has been classified as a vegetable in U.S. school lunches.

Education is better viewed as a profession with core ethics - akin to medicine, law, accounting, etc. - than as an industry depending on legislation and accountability to constrain fraudulent behaviour. That means that the core objective of education has to be something other than the pursuit of profit, otherwise the only ethic is (as it is in the financial services industry) the bottom line.

6. Outcomes should be about “distance traveled”

This is the author's "pizza is a vegetable" moment. "When we measure outcomes, we need to ensure we’re not focusing on metrics that correlate entirely with inputs, but rather on 'value added' by the institution to students."

On the surface the intuition is sound: "providing extra points to institutions with a demonstrated track record of enrolling low SES students and producing strong education and employment results."

On the one hand, this simply replaces one form of institutional cheating with another one. Instead of denying admission to low SES students, the focus turns to 'force-marching' them along predefined paths (think: 'special education for poor people'). And because the only measure is 'distance traveled', it remains acceptable to produce graduates who are unqualified in terms of competencies and skills, and in addition bereft of self-management or self-sufficiency skills.

On the other hand, the representation of education as a 'path' is itself fundamentally misguided. I've talked about the weakness of the path metaphor in the past. And I've talked about the key requirement that educators prepare people not to be followers, but to be explorers.

7. Technology is key to improving learning

The gist of the author's argument here is that technology can make the delivery of instruction more efficient. There is a nod to the idea of better outcomes, but the emphasis is on more productive delivery of existing outcomes (and on reducing or limiting faculty salaries).

We see this in the reference to the Baumol effect, "a rise of salaries in jobs that have experienced no increase of labor productivity," which is part of the jargon of the productivity movement. As Wikipedia (correctly) explains it, "Baumol's cost disease is often used to describe consequences of the lack of growth in productivity in the quaternary sector of the economy and public services, such as public hospitals and state colleges. Since many public administration activities are heavily labor-intensive, there is little growth in productivity over time because productivity gains come essentially from a better capital technology."

So the point that is 'agreed upon' here is that, in education, human labour can (finally!) be replaced with technology to improve productivity and achieve outcomes more efficiently (where, as we've seen above, outcomes will be measured in 'distance traveled' toward 'competencies' which result in 'employment outcomes').

This may be how education is viewed from the outside, from a corporate, financial and perhaps political perspective, but few people actually employed in education view it this way. Oh sure, we'd like to see our graduates get jobs and succeed economically. But we like to see this as the result of the student's efforts, not as something we merely provided for them. We see it as the capability, growth and self-sufficiency we've provided, rather than as the terminus of our own efforts in the field.

So technology plays a very different role in education than it plays for people talking about education. Technology increases the capacities of educators and helps them focus on the hands-on tasks (which they've never really had time for before) while automating the things that can be automated. Technology could help address many of the needless expenses associated with education (like content and content delivery, records management, unwanted and unneeded courses, etc.) and help us focus on the real and present needs of students.

The objective here isn't 'efficiency', though it's easy to see why outsiders cast it in this light. It's precision - being able to target our work where it will do the most good for the greatest number of people. Precision isn't simply a matter of hitting a target more often than not (that's efficiency). It's hitting the right target, at the right time, in the right way.

8. Assessments are needed to save the liberal arts

The author's argument here is that students (especially poorer students) have been increasingly turning to "pre-professional degrees" like business, healthcare, education, and technology while turning away from the liberal arts, and that unless schools can actually document the outcomes of liberal arts programs, they "will be increasingly a plaything for rich kids (who’ll use connections to get good first jobs, so it doesn’t matter what they study)."

My own education qualified as liberal arts. I majored in philosophy but took strong concentrations in the sciences, history and geography, and religious studies. As I've often said, there was a sign on the wall in the University of Calgary Philosophy Department warning students not to expect employment as a consequence of a philosophy degree. Despite taking out the maximum in student loans (totaling $25K in 1980s dollars) I didn't care.

Why not? The 1980s were recession years in Canada, and having spent time in industry before my university education I could see first-hand the fallacy of believing that a specific university program would get me a job. It didn't really matter which degree I had; they were looking only for the persistence and tenacity (and wealth and upbringing) that having any four-year degree demonstrated.

And also, I lived in Canada, and we don't starve in the streets just because we're unemployed. I knew that, and I knew that no matter what happened (as I often said at the time) "they can't take my education away from me". Not that they didn't try - the universities withheld transcripts and collection agencies destroyed my credit. But they couldn't take the knowledge back out of my head - all they did was create a healthy scepticism and distrust of institutions.

Societies that truly want to 'save the liberal arts' will derisk the pursuit of them. It's not a question of documenting outcomes - the benefits of studying grammar, logic, communications, mathematics, the arts and astronomy are actually pretty self-evident. No really successful person has succeeded without them (even Steve Jobs talks about how important the study of calligraphy was to him). When students take pre-professional degrees, they are saying, in effect, "maybe later, it's too risky now".

9. Follow the money

The author writes, "colleges and universities get paid no matter what." As with some of their previous premises, this is demonstrably false. Colleges close all the time - in the U.S. the ten-year average is five per year. Look at the struggles faced by the University of Phoenix over the last year or so. Look at the decades of declining state funding for institutions in the U.S. The same story is told of other institutions in other countries. It is simply false that "colleges and universities get paid no matter what".

The author uses this premise to argue that 'we are agreed' that "the federal government has two choices: it can condition funding on outcomes (à la Gainful Employment) or require schools to put 'skin in the game'" in the form of "risk capital" for each and every student. Forcing institutions to bet on students' future financial prospects would certainly change institutional behaviour. But not for the better. It would convert 'education' into 'venture capitalism'.

I won't get into the problems with this approach in detail. It suffices for the purposes of this post to point out that there is scarcely unanimity behind the proposition that education is fundamentally an economic activity that should be financed the way we finance business and industry. But this sort of perspective should not be surprising coming from 'university ventures'. After all, there's money to be made in 'student IPOs'.

10. Colleges are worth saving (especially the one you attended!)

The author's point is exactly the opposite of the bullet point: "we don’t have enough resources to save every college (or, for that matter, to discharge every student loan)." The point is essentially that not every college can be saved and not every student can be funded. We should "avoid the myopia" that sees our own college as something that "represents the apex of civilization."

It's true. Colleges rise, colleges fall. Civilizations rise, civilizations fall. Even Plato's Academy shut down after a successful 300-year run (or 800-year run, depending on who you talk to).

But there's a difference between observing that colleges and civilizations fail and arguing that we should just stop supporting them. What we should be doing is preserving the good that these institutions provide society, rather than giving up on the enterprise wholesale. A company can be happy to sell its legacy to the highest bidder. A society should not. Yes, there are "natural limits" to almost everything, but this does not constitute an argument for being the agent that applies them.

It's not a question of whether or not colleges and universities are "worth saving". To view the question in such terms is to treat them merely as economic entities and to assess them against their financial value. But they are just vessels.

What we have, in societies around the world, is a millennia-old legacy of educational institutions as stewards and purveyors of our collective wisdom - not as an engine of employment or economic development, but as the reason employment and economic development exist.

In a sense, the role played by the educational system in society is the same as the role played by an education in an individual. It may result in income and employment, but that is not the purpose behind it. It is to help us not only adapt to the winds of chance and fortune but to rise above them, to create our own place in the world as free and fully realized beings, to flourish in every sense of the word.

That's not something a venture capitalist will invest in. But it's something each one of us lives for, each and every day.

Friday, February 03, 2017

An Ethics Primer

Many readers will find this section unnecessary, but for many others the range and variety of extant ethical theories may be new. It is my objective here to show that a significant number of questions and assumptions in the dialogue around ethics are open for discussion. Ethics is by no means a complete or closed discipline; it is a living study that has been shaped and formed by thinkers from the ancient world through to the modern era.

Virtue and Character

Ethics is in the first instance the study of virtue in a person, in a person’s actions, or in a society. But what is a virtue? The SEP says, “A virtue is an excellent trait of character. It is a disposition, well entrenched in its possessor—something that, as we say, goes all the way down, unlike a habit such as being a tea-drinker—to notice, expect, value, feel, desire, choose, act, and react in certain characteristic ways.” (SEP, 2017)

While we typically characterize virtue by means of various traits - honesty, frugality, piety, humility, caring, courage, generosity, moderation - the concept of virtue is not defined by those traits. It might be derived from some sense of ideals or perfection, as Plato might say, or it might be derived from the Greek notion of arete (ἀρετή) - “be all that you can be”.

The achievement of virtue is essentially tied up with the development of character. As Aristotle says, the achievement of virtue might be a lifetime task. Virtue is the opposite of what might be termed the “weakness of the will” - our succumbing to the temptation to indulge, to become intemperate, dishonest, or violent. (Aristotle, 1959)

Simply developing one’s own character, though, might seem selfish to some. It’s self-indulgent, at the very least. And one might question whether the cultivation of virtue constitutes a basis for ethical action. We need a sense of normative virtue ethics, such that the virtues not only describe good character, but prescribe right actions. (Hursthouse, 1998)

We see this perspective reflected in modern ethics by writers such as Michel Foucault (1985). In The Use of Pleasure he talks of morality as “self-formation as an ‘ethical subject,’ a process in which the individual delimits that part of himself that will form the object of his moral practice, defines his position relative to the precept he will follow, and decides on a certain mode of being that will serve as his moral goal.”

Ethical Rules

To prescribe right behaviour, one might appeal to a set of rules describing the virtues. A classic example of this is the Ten Commandments, which require adherents to be honest, to not covet, to not kill, and the like. (Bible: Exodus 20)

With rules one encounters almost immediately what has come to be known as ‘the conflict problem’. In a case where the application of different rules produces different conclusions, which rule takes priority? Additionally, we encounter what might be called ‘the exception problem’ - the rule may say, for example, that you must not kill - but what if this is the only possible result of defending oneself?

But more significantly, morality doesn’t seem to simply be a matter of following the rules. “If right action were determined by rules that any clever adolescent could apply correctly, how could this be so? Why are there not moral whiz-kids, the way there are mathematical (or quasi-mathematical) whiz-kids?” (Hursthouse, 1998)

Categorical Imperative

For Kant, morality poses the question of what would constitute a duty to act. This is found in the bases of Kantian morality: autonomy and freedom. It is only through autonomy and freedom that we have the possibility of making moral choices. As we would say today, “ought implies can”. The morality of making a choice entails the possibility of making a choice. (Kant, 1956)

So morality applies to any rational being, and the nature of morality can be known through reason (indeed, it is this very fact that makes morality possible at all). There are several elements to Kantian ethics; one of the most significant is the categorical imperative.

In a nutshell, this is the principle that we must act in a way that we would imagine the action being a universal law. This is not the principle your mother appeals to when she says “what if everybody did that?” Rather, it’s the idea that you would will people to act in such a way because such actions are inherently good. (Kant, 1998)

What sort of actions could be universalized in such a way? Many typical actions, those based merely in our own pleasures, where we use other people as a means to an end, would not qualify. The only consistent universal principle of morality imposes on us the duty to treat people as ends in themselves, rather than as a means to an end.


Utilitarianism

Utilitarianism is sometimes known as ‘the happiness principle’. The simplest statement of utilitarianism is that something is morally good according to whether it produces pleasure and avoids pain. In a society, a morally good action is that which produces the greatest good for the greatest number. (Mill, 1957) Utilitarianism is therefore an important statement of ethical consequentialism, that is, the idea that the effects of one’s actions are relevant to ethical appraisal. It is worth noting that utilitarianism is concerned with the goodness of an act, as opposed to the Kantian concept of duty to act.

With utilitarianism come several immediate objections. (Smart & Williams, 1973) For one, there is the concern that utilitarianism caters to our lowest desires; for example, in hedonism we find the ethic of personal pleasure. Another is the question of how consequences may be measured (the unit of measurement sometimes derisively called a ‘hedon’). Indeed, we might not be able to know, or to calculate in time, the ‘unintended consequences’ of an action.

Many of these objections are answered by John Stuart Mill. The cultivation of taste, he writes, leads one to enjoy the ‘higher pleasures’. Better to be a discontented man than a contented sheep. As well, we need not evaluate each act individually. We may distinguish between ‘act utilitarianism’, which looks at the consequences of individual acts, and ‘rule utilitarianism’, which looks at the consequences of types of behaviour generally.

But a final critique of utilitarianism is that it is cold and unfeeling. Do the needs of the many genuinely outweigh the needs of the few? If seven billion people could be made to feel slightly better by the life-long torture of one person, is this act morally permissible? Intuitively this seems wrong, though a utilitarian calculation might say otherwise.


Egoism

Another form of consequentialism, egoism is the philosophy that one is required only to act in one's own self-interest. This is the philosophy often associated with Ayn Rand under the heading of ‘objectivism’ (Rand, 1970), and though Rand’s arguments in its favour are incoherent, reasoned argumentation for egoism is not rare.

Egoism can be expressed in different ways. “Psychological egoism asserts that it is impossible for anyone to do anything other than seek his own good. Ethical egoism tells us that a person ought to promote his own interests.” (Mcconnell, 1978) Both of these suggest that whatever the status of ethical theory, it is not really possible for a person to adopt any ethics other than personal self-interest.

Egoism forms the foundation of modern economics. As Adam Smith writes, "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages" (Smith, 1937, I.ii.2).

Social Contract

While we usually associate consequentialist theories with the pursuit of pleasure and avoidance of pain, consequentialist theories can identify other goods, for example, justice, fairness, and equality. However these are even more difficult to define and measure than pleasure and pain. An alternative mechanism is required; historically this has been the social contract.

The social contract first appears in any significant way in modern philosophy, in particular in the work of Hobbes, Locke and Rousseau.

Hobbes argues that we willingly cede power to the monarch in order to escape the state of nature in which no rules exist and where, as he says, there are "No arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death: and the life of man, solitary, poor, nasty, brutish and short." (Hobbes, 1986)

John Locke depicts the contract as a mechanism to defend the rights of citizens against the sovereign, and in particular, to protect their right of property, which they acquire by removing goods from the state of nature and adding their own labour to them. Failing this, writes Locke, the recourse is either legitimate revolution to overthrow the sovereign, or emigration to unoccupied land. (Locke, 1821)

“Man is born free,” writes Rousseau at the beginning of the Social Contract, “yet everywhere he is in chains.” Rousseau depicts a ‘state of nature’ quite opposite to Hobbes, where people lived in peace and plenty, and the net effect of society was to constrain this freedom and enslave people to serve the individual will of the master. The objective of the social contract is to ascertain ‘the general will’ expressed by the unanimity of citizens. (Rousseau, 1950)

A significant and influential modern version of social contract theory emerges with John Rawls’s A Theory of Justice. Rather than postulate an ethically dubious ‘state of nature’, Rawls proposes that we imagine what sort of contract we would negotiate with each other if we were not aware of where we would be in society. What results, he argues, is a theory of “justice as fairness” (which doesn’t sound remarkably different from Plato’s version, “to everyone his due”). (Rawls, 1971)


Meta-Ethics

The study of meta-ethics is the study of what grounds an ethical argument. To some degree this discussion is already present in the range of ethical theories described above (and many writers place the discussion of meta-ethics prior to the list of ethical theories). I have chosen to place it here because, after reflection on the different theories, it is relevant to ask about the bases or grounds for one approach or another.

For example, as we consider these different theories, we see that even what counts as ethical can vary from one viewpoint to another. Some see it as a form of excellence in individuals, others see it as defined in terms of duties and responsibilities, still others characterize ethics in terms of good and bad or right and wrong, while others see ethics expressed in terms of value and worth.

Does Might Make Right?

Suppose Gyges has a ring, says Glaucon in Plato’s Republic, where this ring makes him invisible and hence essentially free of retribution for any act. He can take whatever he wants, lie with anyone he wants, even murder anyone he wants, and there will be no retaliation. Why then would he act in a moral manner at all, no matter how we define morality? (Plato, 2000)

Friedrich Nietzsche makes a compelling modern case for this argument. He argues that if a man becomes ‘Superman’ (Übermensch), then whatever he does is by that fact moral. (Nietzsche, 1900) We see echoes of this today in the proclamations of Donald Trump when he observes that the President can’t be in a conflict of interest. (Voskuhl & Melby, 2016)

Conversely, if a person must behave ethically because of the power of an authority (whether it is the will of God or the dictates of a King) and is unable to do otherwise, on what grounds would we call behaving in this manner moral at all? If I am falling, and will kill someone when I land on him, I am powerless to stop or to change direction. Am I still responsible for the man’s death?

The relation between power and morality is a complex one. If morality is based on subservience to power, this takes away the element of choice, which seems essential to morality. But if the element of power is removed, what then makes an act moral or immoral?


Natural Law

There is a long tradition in ethics, often depicted as a variation of rationalism, to the effect that right and wrong are defined by natural law. This can be expressed in different ways. For example, there is the argument that human rights are based in natural law, as evidenced in the U.S. Declaration of Independence: “We hold these truths to be self-evident: That all men are created equal; that they are endowed by their Creator with certain unalienable rights…” (Stoner, 2017)

There is also an interpretation of naturalism and natural law to the effect that we should behave according to our nature, or (variously) according to our best nature. Thomas Aquinas, for example, places the creation of our nature in the hands of God, which therefore makes behaving according to that nature a moral duty. (Magee, 1996) Flavours of naturalism can also be found in Taoist and Confucian thought. (Nelson, 2009)

But can we deduce moral facts from nature, or even from human nature? David Hume argued famously that one cannot deduce an ‘ought’ from an ‘is’. If it’s the nature of something to do something, there is no right or wrong about it. (Hume, 2003) G.E. Moore called such an inference “The Naturalistic Fallacy.” Specifically, the fallacy is “the assumption that because some quality or combination of qualities invariably and necessarily accompanies the quality of goodness, or is invariably and necessarily accompanied by it, or both, this quality or combination of qualities is identical with goodness.” (Moore, 1903)

There is, after all, no means of determining which natural properties are identical with (or opposite to) goodness. If flight is not natural, is flight a sin? If violence is natural, is violence ethically acceptable?

Moral Sentiment

Perhaps moral judgement isn’t based on rationality and reason at all. Perhaps it is based on how we feel. This argument was most famously advanced by David Hume against rationalist accounts of morality. For one thing, reason alone cannot persuade us to act - “reason is, and ought only to be, the slave of the passions,” he writes. (Hume, 1739, II.3.3) “Truth is disputable; not taste: What exists in the nature of things is the standard of our judgment; what each man feels within himself is the standard of sentiment.” (Hume, 1751, 1.5)

The nature of ethical dilemmas arises from the subjective experiences of moral disagreement we have in ordinary life, writes C.L. Stevenson. (1937) These can be differences of belief or disagreements of attitude. In the case of the latter, people agree on the state of affairs in question but interpret it very differently. Take ‘desire’, for example: we might agree objectively that something is ‘capable’ of being desired, or ‘worthy’ of being desired, but then there is the entirely separate matter of whether an individual actually does desire it.

Consequently, argues Stevenson, moral persuasion may often be non-rational. “It depends on the sheer, direct emotional impact of words—on emotive meaning, rhetorical cadence, apt metaphor, stentorian, stimulating, or pleading tones of voice, dramatic gestures, care in establishing rapport with the hearer or audience, and so on.” Consider, for example, how the impact of some of today’s most significant moral statements is obtained - the repetition of words in King’s “I have a dream” speech, for example. (Boisvert, 2011)


Hypothetical Imperatives

Kant argues that morality is based on the categorical imperative, the duty that arises out of universal moral precepts. But what if morality exists only in relation to some purpose, goal or outcome? Then moral precepts become hypothetical imperatives. In her paper of the same name, Philippa Foot asks what we are to say to the man who does not care about the ends we would ascribe to the moral man - justice, liberty, and so on. (Foot, 1972) If he does care about them, it is because he values them as ends, not because he (in an absolute sense) ought to care.

Care ethics is a type of morality that can be understood as a hypothetical imperative. Drawn from feminist theory, which stresses nurturing and relationships, “care ethics affirms the importance of caring motivation, emotion and the body in moral deliberation, as well as reasoning from particulars.” (IEP, 2017) What’s significant about care ethics is that it addresses not only actions and their outcomes, but also attitudes and motivations. (Held, 2006)

A final question concerning relativism is whether it is feasible. While some argue there can be no compromise on ethical principles, relativists will generally hold that different perspectives can (to a certain degree) be compatible with each other. For example, in a society some people may subscribe to care ethics, but it does not follow that all people must entertain the same attitudes and motivations. In economics we have the concept of “incentive compatibility”, which expresses a similar idea: people may have different interests, provided those interests are consistent with the principles of exchange adopted by the group. (Myerson, 2009)
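The Myerson reference points to mechanism design, where incentive compatibility has a precise meaning: a mechanism is incentive compatible when no participant can gain by misreporting their preferences. As an illustration only (this example is mine, not drawn from Myerson's lecture), here is a minimal Python sketch using the second-price auction, the textbook incentive-compatible mechanism, in which each bidder does best by simply reporting their true value:

```python
# Sketch: incentive compatibility illustrated by a second-price (Vickrey)
# auction. Each bidder has a private value; the highest bidder wins and
# pays the second-highest bid. Bidding one's true value is a dominant
# strategy: no unilateral deviation improves a bidder's utility.

def utility(values, bids, i):
    """Utility of bidder i: value minus price paid if i wins, else 0."""
    winner = max(range(len(bids)), key=lambda j: bids[j])
    if winner != i:
        return 0.0
    price = max(b for j, b in enumerate(bids) if j != winner)
    return values[i] - price

def truthful_is_dominant(values, deviations):
    """Check that, for each bidder, truthful bidding does at least as well
    as every candidate deviation, holding the other (truthful) bids fixed."""
    truthful = list(values)
    for i in range(len(values)):
        base = utility(values, truthful, i)
        for d in deviations:
            bids = list(truthful)
            bids[i] = d  # bidder i misreports
            if utility(values, bids, i) > base + 1e-9:
                return False
    return True

values = [10.0, 7.0, 3.0]
print(truthful_is_dominant(values, [0.0, 2.0, 5.0, 8.0, 12.0, 20.0]))  # True
```

The parallel with the text is that the bidders' interests differ, yet the rules of exchange make those differing interests compatible: honesty serves each participant's own ends.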

Magee, Joseph. Thomistic Philosophy - the Philosophy of Saint Thomas Aquinas. Web. 17 Jan. 2017.
Aristotle, and W. D. Ross. The Nicomachean Ethics. London: Oxford UP, 1959. Print.
"Exodus 20." BibleGateway. Web. 17 Jan. 2017.
Boisvert, Daniel R. "Charles Leslie Stevenson." Stanford Encyclopedia of Philosophy. Stanford University, 15 Apr. 2011. Web. 18 Jan. 2017.
"The Declaration of Independence." Natural Law, Natural Rights, and American Constitutionalism. Web. 17 Jan. 2017.
"The Ethics of Tenuous Faculty." AAUP. Web. 20 Jan. 2017.
Foot, Philippa. "Morality as a System of Hypothetical Imperatives." The Philosophical Review 81.3 (1972): 305. Print.
Foucault, Michel. The Use of Pleasure. New York: Pantheon, 1985. Print.
Held, Virginia. The Ethics of Care: Personal, Political, and Global. Oxford: Oxford UP, 2006. Print.
Hobbes, Thomas, and C. B. Macpherson. Leviathan. Harmondsworth, Eng.: Penguin, 1986. Print.
Hume, David. "An Enquiry Concerning the Principles of Morals." The Clarendon Edition of the Works of David Hume: An Enquiry concerning the Principles of Morals (1751): 1-2. Print.
Hume, David. "A Treatise of Human Nature." David Hume: A Treatise of Human Nature (Second Edition) (1739). Print.
Hume, David, and Tom L. Beauchamp. An Enquiry concerning the Principles of Morals: A Critical Edition. Oxford: Clarendon, 2003. Print.
Hursthouse, Rosalind. "Normative Virtue Ethics." How Should One Live? (1998): 19-36. Print.
Hursthouse, Rosalind. "Virtue Ethics." Stanford Encyclopedia of Philosophy. Stanford University, 18 July 2003. Web. 17 Jan. 2017.
Kant, Immanuel. Critique of Practical Reason. New York: Liberal Arts, 1956. Print.
Kant, Immanuel, and Mary J. Gregor. Groundwork of the Metaphysics of Morals. Cambridge, U.K.: Cambridge UP, 1998. Print.
Locke, John. Two Treatises on Government. London: Printed for R. Butler, 1821. Print.
McBride, Kelly. "The New Ethics of Journalism: About This Blog." Poynter. Poynter, 25 Nov. 2014. Web. 19 Jan. 2017.
McBride, Kelly, and Tom Rosenstiel. The New Ethics of Journalism: Principles for the 21st Century. Los Angeles: SAGE, 2014. Print.
McConnell, Terrance C. "The Argument from Psychological Egoism to Ethical Egoism." Australasian Journal of Philosophy 56.1 (1978): 41-47. Print.
Mill, John Stuart, and Oskar Piest. Utilitarianism. Indianapolis: Bobbs-Merrill, 1957. Print.
Moore, G. E. Principia Ethica. Cambridge: At the UP, 1903. Print.
Myerson, Roger B. "Fundamental Theory of Institutions: A Lecture in Honor of Leo Hurwicz." Review of Economic Design 13.1-2 (2009): 59-75. Print.
Nietzsche, Friedrich Wilhelm, and Thomas Common. Thus Spake Zarathustra. New York: Modern Library, 1900. Print.
Plato, G. R. F. Ferrari, and Tom Griffith. The Republic. Cambridge: Cambridge UP, 2000. Print.
Rand, Ayn, and Nathaniel Branden. The Virtue of Selfishness: A New Concept of Egoism. New York: Signet/New American Library, 1970. Print.
Rousseau, Jean-Jacques, and G. D. H. Cole. The Social Contract: And Discourses. New York: E.P. Dutton and Co., 1950. Print.
Smart, J. J. C., and Bernard Williams. Utilitarianism: For and Against. Cambridge: Cambridge UP, 1973. Print.
"Statement on Professional Ethics." AAUP. Web. 20 Jan. 2017.
Stevenson, Charles Leslie. "The Emotive Meaning of Ethical Terms." Mind XLVI.181 (1937): 14-31. Print.
Voskuhl, John, and Caleb Melby. "Trump Says 'Can't Have a Conflict of Interest' as President." Bloomberg, 22 Nov. 2016. Web. 17 Jan. 2017.