Philosophy 130 – Ethics (Spring 2003)
Theories of Conduct journals and summaries
[Students and professors, please read.]
Ch. 3: Relativism
What are moral values relative to? Culture, nation, group, individual?
I don’t know exactly, but here is why I don’t think “right and wrong” is relative to culture (which is hard to define, see definition of culture in chapter 1 journals).
Each individual (even if they share values in common with other individuals) is born with unique biological predispositions, and is exposed to unique life experiences (including stages of life) which influence their cognitive awareness and what they value. Therefore, each individual is unique, even within their own culture. One individual’s values may be considered abnormal by another individual with different biological predispositions and/or life experiences.
Consider the “last two people left alive” hypothetical. Suppose they come from drastically different cultures, and they want to “reason out” which of their opposing views is the correct one (as many married folk attempt to do, hopefully before getting married, so they know they will get along). Ethical relativism assumes reason is useless here: they may as well agree to disagree, since what is normal for someone is whatever is normal for the culture they grew up in (by that logic, revolutions would be abnormal and therefore immoral).
Now suppose an individual spends from birth to age three in an Eastern-collectivist culture, age three to seven in a Western-individualist culture, age seven to thirteen in a cannibalistic culture, and age thirteen to eighteen in a militaristic culture. Hmm… An individual’s values may change throughout their lifespan (even if they grow up in only one culture), so that they may look back and see the values they held in, say, preadolescence as abnormal (incompatible with their current stage in life), or maybe even closer to normal than their current values (maybe they want to get in touch with their inner child, hehe).
Does Relativism deny a main goal of multiculturalism?
p. 97: “ethical relativism’s basic premise (is) that moral codes can’t be transplanted from one culture to another.” “We have no right to criticize other cultures, period.” p. 99: “We also cannot praise and learn from that culture.” “Ethical relativism . . . precludes learning from other cultures, because there can be no ‘good’ or ‘bad’ that is common to all cultures.” (According to relativism, you cannot reason that because a behavior is good for one culture for certain reasons, the same behavior should be good in other cultures for those same reasons; a moral code can only apply to one culture.) p. 111: “Can ethical relativism function, therefore, in a country as diverse as ours, where we often find opposing values?” Especially if half the country holds a value that the other half opposes, or if the real majority goes unheard for whatever reason (money talks, ahem). In other words, if there is no majority (problem 2, p. 100) to set what is normal and thus moral for that culture/country, morals would be subjective to each person, if we still haven’t decided to use logic.
Are values culturally-specific?
I think values (not necessarily “right and wrong”) are individual-specific, but I do see the influence of other individuals in the development of an individual’s values. “No man is an island,” as they say.
Do behaviors exemplify the value?
Not necessarily. There are many reasons behavior might not line up with attitude. p. 100 gave the example of infidelity being considered immoral yet being a common practice, which is a case of conflicting values, with sexual-/love-relationship satisfaction (perhaps over-simplified) being valued over fidelity. Situational constraints, or a feeling of not being able to perform the behavior, may prevent attitudes from being expressed in overt behavior. My social psych class covered the theory of reasoned action, the theory of planned behavior, and the attitude-to-behavior process model, all three of which take into account “subjective social norms”: whether others will approve or disapprove of the behavior.
Are there core values?
Can the question be phrased: do humans seem to have many values in common? I would have to say “yes,” but what they have in common can change; values can change.
Do we need full understanding to judge something?
I would have to say “yes”. We have no business judging something we don’t understand, and even if we tried, a lack of understanding would not yield an accurate judgment. I think if we want to understand other cultures (if they are too different from our own that empathy isn’t doing the job), we should live in them for a while.
Do we even have full understanding of ourselves?
As long as we are still debating whether biological predispositions or socialization have the greater influence on various behaviors, our lack of full self-understanding will be evident. If you want to understand a society, which is just a collection of individuals, you have to understand the individuals. And if those individuals don’t even understand themselves, what purpose does attempting to judge them serve?
Summing up Chapter 3: Ethical Relativism
One can feel that there is a universal morality to which all people are accountable, as in absolutism or hard universalism (mandated by God, not up for discussion). One can hold that there is no right or wrong, as in nihilism. One can consider skepticism, the thought that everyone cannot collectively know of a universal morality that applies to all individuals. Or one can favor moral subjectivism, in which moral values are relative to the individual according to their biological predispositions and life experiences, as opposed to ethical relativism, in which “right and wrong” is relative to the behavioral norms of the majority of one’s culture. (The observation that different cultures have different moral codes is known as cultural relativism.)
Moral subjectivism (individual) and ethical relativism (culture) benevolently intend tolerance of others: we should not criticize what others value. However, both lack conflict-resolving capabilities when values conflict. In subjectivism, the conflict is between individuals (do you stop someone from beating their child, if they find nothing wrong with it?). In relativism, it is between cultures (can the U.S. defend itself against an attacking culture in a way that, in the spirit of ethical relativism, tolerantly does not criticize that culture’s values? (problem 6)). It is impossible to come to an agreement on which is the moral course of action without someone going against their own values; no one can reason that they alone are right or that the Other is wrong. This means logic has no place in relativism or subjectivism, because each assumes one is right or wrong based on who one is. Along with other theories that hold there is no universal moral code, they also assume that because there is disagreement between cultures, there is no reasonably right or wrong answer, which is also illogical. The text also covers the problem of induction. I covered the problem of “majority rule” (problem 2), what makes a majority (problem 4), and relativism’s assumption that we cannot criticize or learn from other cultures (problem 1) when answering the question on multiculturalism, and I covered the problem of what really defines a culture (problem 5) in my chapter 1 journals. I discuss problem 3 after this summary.
Do the rules of logic even apply to values? If there is a rational element to our emotions (hinted at in chapter 1), and if our emotions play a large part in forming our values, then maybe we can reason (using logic) that there is a set of core values shared by all humans (descriptive soft universalism) that can, and should, be applied as a moral code for all humans, in every culture (normative soft universalism)? Soft universalism is not absolutism because it seeks a common ground of universal moral truths that we act on in different ways and which prism out into differing opinions and mores.
A word on problem 3: “Professed or Actual Morality?” Ruth Benedict, defender of ethical relativism, assumed morality to be majority behavior — “moral” is “normal” — how the majority actually behave. So, in the example the text gave, when the majority professes a morality (fidelity) that is contradicted by the majority’s actual behavior (infidelity) — ethical relativism dismisses what they feel they ought to do, and assumes what they actually do to be the value that should be respected by other cultures. Now go back and reread my answer to the question, “Do behaviors exemplify the value?” I also suggest that when the morality contradicts the actual behavior, perhaps it reflects that the morality of the culture is in a period of transition, and that soon, the values they profess will be more in line with their actual behavior. But social change doesn’t fit well in ethical relativism — all that matters is what is normal ‘now’.
Ch. 4: Egoism
What is the most selfish act you can imagine?
Hmm… exploiting and murdering someone helpless in order to gratify a perversion.
Why is it the most selfish one?
It is so (destructively) selfish that it hurts others for no good reason, just self-gratification. One would have to either completely detach from the other person’s feelings (and the feelings of the person’s family, friends, and everyone affected) in order to go through with it, or actually find their pain personally pleasurable. It is completely centered on self (twisted/destructive desires so encouraged in one’s thought-life that they take priority over other people’s directly-witnessed and directly-caused pain); the other person is just an object.
If you found the act morally objectionable, what specifically was objectionable about it?
The pain it directly and purposefully caused, and that was seen as acceptable or even beneficial in the offender’s mind.
Is self-preservation a moral imperative or just a fact?
I think self-preservation is part of being human, and so is group (extended self) -preservation.
From an evolutionary standpoint, is “survival of the fittest” selfish or selfless?
Selfish — it includes others (not pure selfishness, granted) — but in order to help self survive.
Are there situations in which self-preservation is not the highest value?
Instead of the word “highest” I would use “basic” — and I would think — no — it will always be basic — but it will not always seem to be our greatest concern, like when our lives are not at all in danger. See Maslow’s hierarchy.
Is it selfish to prefer saving one’s own life to that of others?
Yes, it is more selfish than sacrificing one’s own life. But it isn’t pure altruism to sacrifice your life to save the lives of others, unless you are going against your personal (self) values (don’t know any moral person who would do that). Giving to those who need money is not selfless if it is in line with your values, but giving to those who have more than you is a selfless act if it contradicts your values. But value-contradiction is immoral, is it not?
Can an ethical egoist be a good friend?
It depends on your definition of a “good” friend. A purely altruistic friend? No, and who would expect it (besides an “individual egoist,” p. 144), not to mention its impossibility? Is a professed ethical egoist a friend who pretends to be purely altruistic? No, and honesty is a good quality in a friend.
Summing up chapter 4
There are few terms to discuss, so we’ll get them out of the way, and then dissect the criticism of egoism discussed in the chapter. I am going to define the terms the way I think of them, not with the slant of the author. First, an egoist believes we are all basically selfish, whereas an egotist has a very high opinion of him/herself. Psychological egoism holds that humans are basically selfish, and that it is humanly impossible to set one’s self completely aside even while helping Others (we wouldn’t even want to help others if we couldn’t empathize with them, and we couldn’t empathize with them if we didn’t refer back to ourselves and how we would feel in their shoes). Ethical egoism, by contrast, says it is good and moral to be selfish, that it is not bad or immoral or “rotten at heart” (p. 136) to be selfish. Psychological altruism, a symmetric opposite of egoism, would hold that humans basically put Others before self, whereas ethical altruism says it is good and moral to put Others before self. The average person thinks of “selfish” as having negative connotations, as in “devoted unduly to self” or “disregarding the interests of others,” which is what Ayn Rand would label “brutish.” However, that brand of selfishness would be closer to the “white light” end of the spectrum of selfishness (see my comment on the ‘fallacy of the suppressed correlative’ below), and does not describe the acknowledgment of neutral self-preservation defended in egoism. However, the author doesn’t seem to agree with me on that (she never once states it clearly, and she seems to criticize egoism from the perspective that it defends the “brute”; I will go into some examples below), so I could be wrong.
Rosenstand says psychological egoism as a theory is ‘bad science and bad thinking’ because it is not falsifiable: it does not allow for the possibility of counterexamples. Evolution, by contrast, is falsifiable because it is based on empirical research that can be verified objectively, and it allows for counterexamples because it is open to revision if new and different evidence should surface. She doesn’t quote any egoists to support her claim that egoists don’t allow for the theory to be wrong, and I can quote evolutionists who state the theory is “fact.” I have also gone over a whole bunch of research concerning “the self” and “why we help others” in my social psych class; there is plenty of empirical evidence supporting egoism. She also mentions “begging the question”: she says egoism “assumes all acts are selfish and therefore interprets all acts as selfish.” However, my social psych text mentions studies concerning group egoism (looks like empathy-altruism, but isn’t; also covered in the ethics text), empathy avoidance, selective altruism, the negative-state relief model (as in Lincoln’s peace of mind on p. 140-141 of the ethics text, but also just improving a bad mood), the empathic joy hypothesis, and the genetic determinism model (also covered in the ethics text). My social psych text also covers the formation of self-concept, and how our entire universe is organized relative to self (the self-reference effect); we can’t help it. Lastly, the social psych text covers our “social self,” our interpersonal relationships and the groups we belong to, based on a biological “need for affiliation,” which explains group egoism. There is much more to mention, but I can’t go on and on (although I do tend to…). Studies out the wa-hoo. So maybe Rosenstand is not aware of these studies? One thing I can say is that my social psych text seems to have the same slant against egoism as my ethics text: they both define it as exclusively concerned with one’s self, rather than others.
But I think they both neglect to acknowledge that egoism just states a basic aspect of human nature, defends it as being natural rather than immoral, and illustrates that moral aspect by explaining that we only help others when it is in line with our personal values. Egoism isn’t trying to justify brutish, antisocial behavior. Correct me if I am wrong.
A comment about the “fallacy of the suppressed correlative” on page 142: If the correlative of light is dark, and the correlative of selfish is unselfish or selfless, then another way to say dark is “unlight” or “lightless.” If a total lack of light would mean dark (nothing, no correlative), then a total lack of selfishness would mean no self. Dead. Shades of grey graduating from black darkness slowly into white light would parallel shades of selfishness/selflessness, with self-preserving selfishness in the middle (neutral grey, also where the “last man on earth” situation would fit, where there are no others to consider except in memory). The evil selfishness which puts others’ pain below personal pleasure (intended or completed) sits on the left (white), turning slowly, through graduating shades of grey, into “pure altruism,” or “dead,” on the right (black). (From chat) “I think the shades of grey all have some light in them, as do all acts have some ‘self’ in them until we die, darkness.”
white light --------- neutral grey ---------- black darkness
evil selfish -------- neutral selfish ------- no selfish
(harmful to others) - (self-preservation) --- (no self: dead)
However, the author of the text sees things this way: if a behavior is not utterly selfish, then it is “less selfish” than “evil selfish,” which means it is “unselfish,” which means it is “selfless,” and could be thought of as “reciprocal altruism” or even “altruistic behavior.” But just as grey can look black relative to something white, grey is grey; selfish is selfish; and pure altruism is not reality, if “ought implies can” (p. 132). By the same token, there is no limit to how destructively selfish a person can be. That doesn’t mean basic self-value or the instinct of self-preservation is evil.
On page 131 there is a discussion of “selfish” versus “self-interested” as being concerned with “what we want” versus “what we ought” — the complicated line between ought and want doesn’t negate the fact that “self” (not others) is the basis for the decision.
The Golden Rule: Nina says the egoist interprets the golden rule as “what goes around comes around, so treating people well will come back to you.” But I think an egoist actually interprets it as: you can’t treat others well if you don’t even value your self. Rosenstand also claims that an egoist would say a person who gives their life to save another’s is throwing their life away, but I think that would only apply if the life-giver were contradicting his/her personal values (values of self).
Ch. 5: Utilitarianism
Does a vote by consensus relieve us of the consequences of our own actions?
No. That’s the weird thing about democracy — “of, for and by the people” — but how many people are voting? And maybe we might have a gut feeling that a democratically agreed-upon law is immoral — do we go with our gut or follow the law — and if we follow the law — do we blame the democracy for the consequences of our actions? Heck no, in my opinion. Maybe I misunderstood the question, though.
What if the group agrees on an action you are morally opposed to?
What if your friends jumped off a cliff–would you jump, too?
Are consequences more important than intentions?
Intended consequences are indeed important — but not many are able, nor should they be expected, to predict the future with certainty.
How do you know what the final consequences will be?
Ya don’t, until the end (and when is that?). You can only guess, maybe based on past similar experiences.
Who does the calculating?
The one(s) making the decision, I guess. So, yeah, the test can be rigged by being biased toward their own values.
Summing up chapter 5
Utilitarianism (credited to Bentham) is based on the principle of utility, or the greatest happiness principle: “When choosing a course of action, always pick the one that will maximize happiness and minimize unhappiness for the greatest number of people” (p. 175). So utilitarianism is a consequentialist theory, but more specifically it has the common good as its ultimate end.
In hedonistic utilitarianism (hedonism being pleasure-seeking), happiness or pleasure has intrinsic value (it is valuable in and of itself), and anything that helps achieve it has instrumental value (a tool utilized to achieve happiness or pleasure). The text mentions the hedonistic paradox: “If you look for pleasure, chances are you won’t find it. Pleasure comes to you when you are in the middle of something else and rarely when you are looking for it,” (p. 178). To illustrate (ha ha! jk)…
Using hedonistic (hedonic) calculus, one can mathematically weigh the pros (pleasures) and cons (pains) of a course of action, which include the intensity, duration, certainty/uncertainty, propinquity/remoteness, fecundity, purity, and extent of the pleasure and pain resulting from the course of action. The pleasure and pain of anyone affected by the action, to include self (ahem, group egoism?), and not limited to human beings, is included in the calculus (what constitutes pain or pleasure is up to each person to decide, making ‘greatest happiness’ an egalitarian principle).
Two problems with the calculus are that 1) biased humans assign the numerical values to the pros and cons, in essence rigging the test, and, 2) one would have to be omniscient, empathic, and able to predict the future with certainty in order to get a complete list of pros and cons.
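To see problem 1 concretely, here is a toy sketch of the calculus as a simple weighted sum. To be clear, the function and every number in it are my own invention for illustration, not anything from Bentham or the text; the point is only that whoever assigns the scores controls the verdict.

```python
# A toy sketch of Bentham's hedonic calculus as a weighted sum.
# The seven dimensions come from the chapter; all scores below are
# made up, which is exactly problem 1: the biased human doing the
# scoring can rig the test.

DIMENSIONS = ["intensity", "duration", "certainty",
              "propinquity", "fecundity", "purity", "extent"]

def hedonic_score(pleasures, pains):
    """Sum the dimension scores for all pleasures, subtract those for
    all pains. A positive total means the act 'maximizes happiness'
    on this tally."""
    total = 0
    for effect in pleasures:
        total += sum(effect[d] for d in DIMENSIONS)
    for effect in pains:
        total -= sum(effect[d] for d in DIMENSIONS)
    return total

# Hypothetical act: one person suffers so that three people benefit.
benefit = {d: 2 for d in DIMENSIONS}      # mild pleasure per person
suffering = {d: 3 for d in DIMENSIONS}    # sharper pain for the one

print(hedonic_score([benefit] * 3, [suffering]))   # 42 - 21 = 21 > 0

# A biased calculator can simply shave the minority's pain scores:
biased_suffering = {d: 1 for d in DIMENSIONS}
print(hedonic_score([benefit] * 3, [biased_suffering]))  # 42 - 7 = 35
```

Notice the act “passes” either way here, and passes by more once the scorer discounts the minority’s pain, which also previews the controversial “suffering few” result discussed next. Problem 2 shows up too: a complete run would need rows for every affected party, known with certainty in advance.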
A controversial aspect of the test (without applying rule utilitarianism, discussed below) is that “if only a few are suffering, and many benefit from their suffering, it is the morally right course of action,” (p. 181), utilizing the instrumentally valuable suffering few as “mere tools in someone else’s agenda,” (p. 187). Is it better for the many to perish in order to keep from using the few as “tools” — or for many to survive at the cost of the lives of a few, with which the passengers of Flight 93 seemed to agree? And what if the situation is not a time of crisis, not life-or-death? Assuming (giving the benefit of the doubt, here) the hedonic calculus to be an exhaustive list of pros and cons which have been assigned numerical values without bias — an act utilitarian (classical) would say harmful (to minority) ‘means’ justify beneficial (to majority) ‘ends’. However, a rule utilitarian (modern) would say, “Don’t do something (in this case which would cause harm to a minority) if you can’t imagine it as a rule for everybody (in this case which would cause harm to a majority), because a rule not suited for everyone can have no good overall consequences,” (p. 201). (Different from Kant’s categorical imperative and deontology, which comes up in the next chapter.)
Utilitarianism only holds one (mentally competent adult) accountable for conduct that directly (not indirectly) harms others. Otherwise, as per the harm principle, society has no business interfering in an individual’s life, thus solving the problem of the tyranny of the majority. This view of personal liberties is referred to as classical liberalism, and when ensured by the government, is referred to as modern liberalism, and when teamed up with a laissez-faire (hands-off, non-interference) government, is referred to as a conservative economic philosophy (as expressed by the Libertarian Party).
The text also covers Mill’s political vision of making available to everyone equal opportunity to enjoy the “higher pleasures” through education. But when it came to determining what those “higher pleasures” were (and ought to be) by simply observing what the majority desired, he ran into the naturalistic fallacy: what people actually do is not necessarily what they ought to do (you can’t go from “is” to “ought”). Relying on what the majority actually does, besides running the risk of promoting the tyranny of the majority, encounters the same problems and logical fallacies as the “majority rule” concept does in relativism.
Ch. 6: Deontology
Is it possible to always stop and think before acting?
Depends how much time you have to make the decision. A split-second moral (as opposed to immoral or morally neutral) decision is easier to make if you are a habitual good-willer, if it is in your (nurtured) nature to will good (fun with words); practice makes perfect, or at least ‘seasoned’ or ‘experienced.’ It could be argued that you didn’t think it through but reacted automatically because of your habitual good nature, and thus that the decision in that case was morally neutral: duty in line with gut-inclination (as if by instinct, but not, because it is learned/developed). Or maybe it actually was from the gut (instinctual), and thought or habit had nothing to do with it? If the act is unacceptable in society, how do you prove you never intended it? How is it proven immoral, if it was a natural response you had no time to prevent (that has to be a rare thing, I would think)? But that’s a tangent.
If your good intentions backfire, are you responsible?
No. Same reason given in response to that implication in utilitarianism — can’t predict the future, much less will it. If the intentions are good, and the consequences go haywire and completely opposite of what you wanted, that is beyond your control, and thus not your responsibility. Best you can do is give a best guess and go with it, hope it turns out correct, the way you intended.
What if someone else’s good intentions backfire and you are hurt? Do you forgive and forget?
Yes, just as you would like the same forgiveness if your good intentions backfired and got someone hurt. Hopefully be able to appreciate the person’s intentions, and separate them (the person) in your mind from the actual consequences.
Is there ever a time when it is acceptable to use a person as a means to an end?
According to Kant, no, not even to save a life (I think; at least that’s an implication). However, here is my take on killing a murderer in the act of attempting murder. Every Kantian universalized maxim contains an unwritten “intend to”: “Do not kill” is actually “Do not intend to kill.” When you (hypothetically) kill a murderer in the act to stop them from killing (assuming you have progressed through the levels of force and none of them worked, so there is no option left but ending someone’s life, which has absolute value), that is not necessarily using a person (the murderer) as a means to an end (stopping murder), unless you wanted them to die by your hand. On the contrary, his/her death could just have happened to be the unintended consequence of your intention to stop his/her intended consequence of murder.
P.S. If you think bugs can suffer, and that they should not be treated as a means to an end (Kant wouldn’t agree), then you are a murderer every time you drive on the freeway. High-speed vehicles are all immoral to use, because they kill bugs. So what matters more in a moral universe: being able to suffer/die, being able to reason (as Kant believed), or just social order in general? Something to chew on.
How do you respond to the aphorism “the road to hell is paved with good intentions”?
I don’t buy it, unless it means that “actions speak louder than words” or “evil happens when good men do nothing to prevent it” or whatever. I draw a parallel between that saying and “Faith without works is dead,” — meaning that you can say you have faith ’til you are blue in the face, but if there is no evidence of true faith in your life, then what you say has no meaning or truth to it. Could combine the two phrases and say, “The road to hell is paved with empty professions of faith.” If you actually have faith, the profession of it is not empty, as evidenced by the example of your life. That isn’t to say (as some might) you must have faith (in Jesus) to live an exemplary life. And it isn’t to say (as some might) that if you don’t have faith, or if you don’t live an exemplary life, that you are going to hell.
Summing up chapter 6
Kant reasoned (rationalized?) that it is not the actual consequences, but the intended consequences, and more specifically, the good will behind the intention, which makes an action morally good. Having good will means having respect for the universal maxim, or the rule that applies to everyone, behind our action, despite any inclination to act in opposition to our moral duty — duty being to the categorical imperative, (or the universal maxim), which is absolute; inclination being to the hypothetical imperative, (or the conditional maxim), which makes situational exceptions and runs the risk of being self-contradictory (illogically undermines the original intention) when applied to everyone.
This is Kant’s categorical imperative in a nutshell: “Always act so that you can will that your maxim can become a universal law.” When considering the morality of an act, you must ask, 1) What is the maxim (rule) of the act you are contemplating, and 2) Would that act (regardless who commits it) still be possible, or would there be horrible consequences, if everyone (universal) followed that maxim? If you base your actions dutifully on respect for the universal maxims they uphold, regardless what your inclinations might be, then your actions are moral. If you base your actions on what the consequences/outcome might be, or how it might make you feel, without taking the categorical imperative into account, then your actions have no moral worth (though they might be immoral), because you were not trying to do “the right thing” (you had other motives on your mind) according to Kant.
Kant’s categorical imperative also focuses on respect for persons (specifically: rational, morally autonomous beings, able to reason out universal moral rules for oneself and others, who can assign value and therefore have absolute value: priceless, irreplaceable, with dignity, as opposed to all beings in general who can suffer): “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.” This means that we must not treat a person (including ourselves) as a tool (with instrumental value), but rather as a being which is intrinsically valuable in and of him/herself. We saw in chapter 5 that rule utilitarianism took care of the problem in classical utilitarianism of treating people as tools, but it still put more emphasis on the consequences of an act in order to determine its moral value, rather than on the good will, or respect for the moral law, behind the act. Kant would say that, even if the consequences turned out bad for everyone and made people utterly miserable, if your intention was to do the right thing, your act was morally praiseworthy.
Response to Criticism of the Categorical Imperative:
1. Mill would say the categorical imperative does actually imply concern for consequences, since in effect it is asking, “What are the consequences of everyone doing what you want to do?” Yes, it takes into account our best guess of what the logical implications might be if everyone did what we intend to do–consequences are important in forming a logical universal maxim. However, it doesn’t wait and see if those consequences come true, and it doesn’t call an act immoral if the consequences turn out bad. When determining the morality of an act, intention comes before consequence for Kant, whereas for Mill, outcome is more important than intention; intended consequences are only part of the equation. Where Mill might say “The end justifies the means,” Kant (well–besides his prejudices) might side with Rev. King, who said, “(E)nds are not cut off from the means, because the means represent the ideal in the making, and the end in process, and ultimately you can’t reach good ends through evil means, because the means represent the seed, and the end represents the tree.” Could replace the phrase “evil means” with “evil intentions”.
2. A good criticism of the categorical imperative is that it does not settle an ethical dilemma where two duties are in conflict.
3. A criticism of the cat. imp. is that it has a loophole — making the maxim so specific that, after universalizing it, it can only apply to yourself. But I don’t think this is a true loophole, but, rather, a slippage from absolute (categorical) back into situational (hypothetical).
4. What is rational? “(T)he real world provides examples of people who most of us believe acted irrationally while in their own minds following a sure rational path toward a goal. [. . .] So, if the rationality of one’s decisions depends on one’s personal interpretation of the situation, how can the categorical imperative be a guarantee that we will all reach the same conclusion if only we use logic? [. . .] Kant seems to assume that we all have the same general goals, which serve as a guarantee of the rationality of our actions. Change the goals, though, and the ideal of a reasonable course of action takes on a new meaning” (228). This takes us back to relativism and subjectivism.
5. No Exceptions? The author seems to think universalization needs exceptions to the rule, like, “Do not lie, except to save a life.” The example given is whether to reveal where our friend is hiding or to lie to the killer to save our friend’s life. The two duties conflict: do not lie, and save a life. Keep in mind that if we reveal where our friend is hiding just so we can tell the truth, then we are treating her as a means to an end (pointed out on page 239 as an irreconcilable difference). I think that when deciding whether to tell the truth about where your friend is hiding from a killer, or to save her life, there is a third option that makes both possible, without creating an exception: create two maxims that do not conflict: “No one should ever provide facts to someone who will use those facts in a harmful way,” and “Never provide false information or deceive by communicating that you do not have the requested facts (do not lie).” All you have to do is be upfront and honest and say, “I’m not going to lie: I know exactly where she is, but I am not going to tell you, because I know what you want to do is wrong, and I’m not going to help you.” Does this put you at risk of the killer torturing the truth out of you, and is it worth the risk? Does lying and directing the killer to a far-off location seem more to your liking? Then maybe we are getting back into situational ethics; maybe there is no possible “absolute.” Or maybe we could stick to the categorical imperative; it just isn’t easy or comfy all the time.
In addition, it seems to me that consequences are exceptions built into the foundations of a maxim; even a universal maxim has a situational context. For example: “Do not (intend to) kill IF it will end the person’s life,” or even “Do not (intend to) kill IF it will not save the person’s life.” So when it comes to killing in self- (or Other-) defense, why not just say it this way: “Do not kill someone with the intention of ending their life, but only with the intention of saving a potential murder victim’s life (assuming you have progressed through the levels of force, they have failed, and the only way to stop the murderer is to exert so much physical force that it results in the murderer’s unintended death).” Two maxims that do not conflict: “Do not intend to end a person’s life,” and “Defend life from being intentionally ended; do everything you can to save that life.”
A Word about the Categorical Imperative and Psychological Egoism
The author makes some references to Self, which I do not feel are flaws of the theory, but rather acknowledgments of the validity of psychological egoism.
On page 222: “(D)etermine whether we could imagine others doing to us what we intend to do to them. In other words, Kant proposes a variant of the Golden Rule.” An egoist variant, it sounds like. But maybe that is a misunderstanding; read on…
On page 224: “Could I still get away with it if everyone did it? The answer is no, you would undermine your own intention…” I think this is probably just a misunderstanding of what Kant meant about the illogical self-contradiction between the original intention and the universalized maxim; self, or “getting away with it,” doesn’t really enter the equation.
On page 225: “(It) draws on the same fundamental realization that I called a spark of moral genius in the Golden Rule: It sees self and others as fundamentally similar–not in the details of our lives but in the fact that we are human beings and should be treated fairly by each other.” Fair treatment of others is considered reasonable on the basis that others are like self; that’s group egoism.
The second formulation of the cat. imp. on page 234: “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.” That is group egoism at the scale of humanity. If extended to all who suffer, it is still group egoism; read on…
Humans have absolute value because we can assign value (232); we can see ourselves as valuable and defend ourselves from being treated as tools. This endorses egoism, because it respects self-value and self-defense (and social order). Those who cannot place value on themselves cannot defend themselves or argue against being utilized, and therefore are open to exploitation and to being considered as ‘things’. Kant believes one is a person, as opposed to a thing, if one is a rational being; he does not address the question of humans who cannot reason (those with Alzheimer’s, in a coma, etc.). He talks about “partial rights,” as with infants who “belong” to their parents (debatable). He is against animal cruelty because, “It dulls (a human’s) shared feeling of (the animal’s) pain and so weakens and gradually uproots a natural predisposition that is very serviceable to morality in one’s relations with other men” (237). In other words, beings who are able to suffer should not be treated cruelly because, in their pain, they are like self; that’s group egoism. It is also androcentrism.