Tuesday, June 13, 2006

Doubter of the Fish Doubts Rational Self-Interest in Split-Second Emergency Decisions

My friend Olly and I have been tossing the proverbial hot potato of morality back and forth for a few posts now, and he recently tossed it back to me. This post will serve as my attempt to throw the hot potato back at him.

Olly and I get closer and closer to being on the same moral page with each consecutive response to each other. Even if Olly ultimately rejects my fact-based individualist moral system, it still feels good to be making so much progress with each round of our discussion. It is a fresh change from the brick walls that seem to pop up so frequently in so many moral conversations.

Olly begins his response with some kind words:

Aaron Kinney posted an excellent response in our ongoing conversation about morality, self-interest, etc over at Kill the Afterlife (post can be found here). Let this post serve as my volley back, so to speak.

As Aaron fleshes out his argument for me, I'm finding a few things out. First off, while I do still have my sticking points, I'm impressed by the completeness of Aaron's system. Even if in the end I come to disagree with Aaron, I won't say that he hasn't thought it through!

I really appreciate Olly's remarks. I would like to note that, at least to me, the thoroughness of my "thinking it through" is actually a feature (or rather, a strength) of my moral system. What I mean is that my moral system easily lends itself to thoroughness in application. Why? Because it combines the axiom of self-interest with the objectivity of a fact-based reality, and applies these ideas consistently through the principle of universality. The axiom of self-interest, the objectivity of fact-based reality, and the principle of universality are all thorough by their nature, and therefore offer better applicability than any other moral system, especially the relative ones. Once you familiarize yourself with the fact-based individualist moral system and get a firm grasp on it, you will find that it applies with ease to any moral scenario, with satisfactory results.

Now, let's move on to the meat and potatoes of Olly's response!

First off, in response to my request for clarification on the principles Aaron claims are absolutes, Aaron brings up Francois Tremblay's essay The Moral Razor as an explanation for the principle of universality. The exact quote from Francois is

"A moral principle or system, or a political principle or system, is invalid if it is asymmetrical in application (to locations, times or persons)."

On this, I will agree with Aaron (and of course Francis) completely. If morality/immorality is defined as being interactive between different actors, then the reciprocation of that interaction is what makes the actions themselves moral. But here is where, again, I see a problem with Aaron's argument. Aaron follows up the principle of universality with a derivation of an axiom (which I had asked him to do for me), the axiom of self-interest. Basically, as I understand it, Aaron's argument is that if the principle of universality says that a moral principle that is true for one person is true for all persons, then if self-interest is true for one person, then it is by logical definition true for all persons as well. The quote from Aaron:

"Thanks to the principle of universality we can say that if self-interest is valid for one individual, then it is valid for all individuals."

I don't think that you can derive a specific axiom, such as the one that Aaron posits, from the principle of universality.

Olly is a sharp guy. He caught me red-handed in a mistake. I did indeed imply that the axiom of self-interest is derived from the principle of universality. This is of course incorrect. I wrote my sentence wrong and got the ideas flipped in my head. Olly is correct. The axiom of self-interest is not derived from the principle of universality. Rather, the axiom of self-interest and the principle of universality work in concert with the facts of reality to determine what actions are moral and what actions are immoral. This needed clarification, and I'm glad that Olly caught this.

Olly then has more to say about the principle of universality:

The principle is talking about morality in general, and when applied in that sense it works. But applying it to try to derive specific morals becomes a major problem. What Aaron has done is break the principle of universality down to the following logical sentence form: if x is valid for one individual, then x is valid for all individuals. The problem is that the variable x can be anything and the axiom still works. A Fish Bibbler could just as easily claim: If Christian morality is valid for one person, then it is valid for all individuals. The only argument then is attacking the validity of Christian morality itself (which Aaron does very well on a daily basis), but you can't use the principle as a standalone argument.

What the principle of universality (aka The Moral Razor) says is not in relation to values, but in relation to moral rules or moral principles. The principle of universality cannot be used to say, "Mustangs are good for me, so Mustangs are good for everyone," but it can be used to say, "Coercion is wrong when initiated against me, so coercion is wrong when initiated against any individual." The difference is hard to see at first glance, but is of vital importance. If you remember one thing about the principle of universality, remember this: The principle of universality is not applied to individual values, but to the moral framework in which those values exist. The principle of universality is the universal application of a moral framework equally to all individuals, not a universal application of the same values.

It often helps me to use this analogy: Values are all different, just like all rocks are of different shape and mass. But the moral framework that people express their values in is the same for everyone, just like the same laws of physics apply equally to all rocks, regardless of the rock's shape or mass.

After Olly agrees with me that even Kamikaze pilots were acting in accordance with their perceived self-interest, Olly raises an objection first mentioned by the esteemed Sean Prophet about the difficulty of making a rational moral decision in split-second emergencies:

...I'd like to refer to a comment by Sean Prophet from the Black Sun Journal, in response to an earlier post in the ongoing conversation with Aaron:

Sean Prophet: "In practice, when a couple is attacked by an armed aggressor, things happen so quickly that it would be nearly impossible to make a rational calculation."

I think that Sean hits the nail on the head with this one, and I would say that science supports him. I would argue that in that split second decision to jump in front of a bullet, it's Fight or Flight that kicks in, not any conscious decision that I make. If that's the case, then indeed I'm not making a rational decision about self-interest, but rather making an instinctual move to save my wife. But why?

So here's where I'm going to concede a point to Aaron, before I clarify that I still think he's wrong in some ways. Aaron has me mostly ready to buy his self-interest theory. But it doesn't mean that I buy every aspect of it. I would like to ask Aaron for one more clarification first: does the recognition of self-interest have to be a rational one? Let me explain:

So, my answer to the above 'why'? I think it's an extension of Richard Dawkins' theories about the self-interest involved in saving copies of our genetic makeup. If I was saving the life of an offspring, Dawkins argues, it's because it's in my self-interest to keep a copy of my genetic code alive. But in the case of my wife, since we don't share genetic code (or indeed even kids with shared genetic code yet), Dawkins' theory falls down.

So to extend the theory, I would argue for a kind of genetics-by-proxy argument. Tribalism is heavily ingrained in human beings, over centuries and millennia of evolution. I would argue that the instinct here is not just towards my own genetic code, but towards those to whom I have extended that familial obligation through emotional bonds. This is, in some ways, neo-tribalism for the 21st Century. While the tribe itself has been mostly erased from modern culture, the instincts towards tribalism (loyalty to loved ones, protection of mutual interests, etc) remain.

In emergency situations, things get complicated, and as Sean Prophet said, there is no time to make a rational decision. Instincts come into play in these situations. However, genetic code-sharing is not the only motivator in an emergency situation like this. I would argue that any kind of bond between two people, whether familial, friendly, loving, or even professional/financial, if strong enough (in other words, if incorporated enough into one's perceptual lens or worldview), will cause a person to put himself at considerable risk for the sake of the other, since the interest of the victim directly relates to or affects the interest of the rescuer/intervener.

Let's say that I see a stranger in a suit getting robbed downtown. I would likely not intervene directly, because it isn't worth risking my life for this guy. I would probably just call the cops and stay a safe distance away. However, let's say that this same man in a suit getting robbed is a potential buyer for my self-started business (if I had a self-started business). This man's interests are then much more closely related to my own, and it would be worth more risk to take a personal stake in this man's safety, because if this man gets robbed, hurt, or killed, then there goes my chance at selling my business! I might offer my own wallet to the robber, or even step in between the robber and the potential buyer as a way of protecting my interests, and by proxy, the man in the suit's interests. I'm not saying that I would necessarily take a bullet for the guy (it probably depends on the purchase price of the business and how my personal financial situation is), but I am saying that, in emergency situations, my own personal risk-taking will directly increase in proportion to my personal stake in the victim's well-being. This personal stake can be financial, emotional, genetic, or any other kind of connection.

Now the more important question: In an emergency situation, is it necessary to have time to logically think things through before making the appropriate action? No. Why? Because the human mind tends to subconsciously incorporate knowledge automatically during split-second decision-making. If, during moments of logical thought, I am aware of my love for my best friend, I need not be consciously aware of this love when I jump in between my best friend and a bullet. This is because my subconscious mind automatically incorporates this knowledge into my decision-making during that split second. I will automatically and instinctively act to save my best friend without taking the time to logically weigh my options and reflect on how much I value my friend.

Now of course the human mind isn't perfect. After taking the bullet for my best friend, I may lie bleeding on the ground regretting my split-second decision. The grass is often greener on the other side, after all. However, my argument of subconscious knowledge/value incorporation into split-second decision-making still stands. In that extremely brief moment, my mind automatically chose what seemed to be the best option: To protect my values (my best friend).

If you look carefully, you can see split-second decision-making in many daily activities. For example, I know that proper driving technique in America involves driving on the right-hand side of the road. I also know that turning the wheel of my car controls the car. I also know that I value my car, and my personal safety, and that my actions in controlling my car directly affect the well-being of my car and myself. So if I am driving down the road, and I suddenly see headlights coming right for me (yes, this has happened to me before), I will instantly and instinctively know 1) that the oncoming driver is doing something he isn't supposed to be doing, which is driving on the wrong side of the street, 2) that I don't want to hit this dumbass, and 3) that I better spin my steering wheel to alter my course and protect myself and my precious, precious Mustang. I will perform these actions subconsciously. There is no time for logical thought, and typically only after the whole emergency scenario has passed will I have time to think about the chain of events, my automatic reactions, and how well it all turned out.

From avoiding accidents to playing video games to conducting oneself in social situations, people automatically incorporate conscious knowledge at a subconscious level for the purposes of making instinctive split-second decisions in accordance with their values, all without having to "think" about it. Even sports heroes, rock stars, and firefighters will tell you that their best performances (when saving lives, making the goal, or pulling off a sick guitar riff) were when they weren't consciously pondering each action or decision made, but instead just reacting automatically to the situation, with their mind in an almost shut-off or trance-like state, where only after the actions were performed did the person consciously think about what happened.

This is why practice, or value reinforcement, is so important to a good performance. In fact, I think that a skillful sports performance and a split-second life-saving act are very similar. The sports hero practices the same play over and over, reinforcing it in his mind. Similarly, Olly spends lots of time thinking about his wife and his love for her, reinforcing her value to him in his mind. So when the sports hero has that chance to make a goal, or when Olly has a chance to protect his wife, that split-second decision will be made, and the action will be performed, because that value was repeatedly drilled into their heads beforehand.

This post has gotten longer than it needed to be, so it's time for me to wrap this up. People always act within their perceived self-interest. People are motivated by their self-interest to apply the facts of reality (at least as they perceive them) to their values, and to determine their actions accordingly. The more a value is reinforced within a person's mind, the easier it is to act automatically in split-second decisions without consciously pondering the logic behind the decision until after the action is performed (thanks to the subconscious mind).

I feel that in this recent sequence of posts (1, 2, 3, 4, 5, 6), I have justified the fact-based individualist moral system, through an explanation of its principles, a refutation of objections raised, and easily understandable real-world examples of the system in action. What do all of you think? Who agrees with me now that a godless, individualist, fact-based morality is the way to go? If anyone still has objections or questions, what are they?


Anonymous said...

Not really an objection; more of a clarification, really. I had posted this in a prior thread, but it was never answered, so I will repost it here:

I have another question. (If this is very basic stuff and there is a site that covers all these basic questions, please direct me to it).

I assume there are levels to the coercive acts. For instance, if someone lies to me, that does not leave me free to kill him, right? Killing is only moral in self defense or with consent (e.g. terminal illness).

Is it that the first to use coercion frees the second person to act to the same or lesser degree and still allow the second person to be acting morally?

Aaron Kinney said...

Good question, The Schwa.

Sorry for missing the last time you asked this question. Here is my answer:

In a fact-based individualist moral system, it really DOES matter who initiated the coercion; who "started the fight."

Allow me to quote from Francois Tremblay's essay, The Moral Razor:

"There is one exception, and that is when we are looking at scenarios where a valid rule was already broken. Arresting someone when no crime was committed is asymmetrical, but arresting someone who initiated force is a different scenario. In this case we are looking not at a political principle - which is what the Razor is about - but rather at the consequence of breaking such a principle. In that case I would argue that, as long as no other asymmetry is present, singling out initiators of force should not be seen as breaking the Razor a priori."

It's like this: If I try to kill you, it is not immoral for you to attempt to do the same to me at the same time. What you are doing is defending yourself from coercion or force. You would be attempting to reflect or redirect that coercion back at the initiator, where it belongs.

It is moral for you to defend yourself against coercion and protect your interests and values. This defense can take many forms, and one of those forms of defense is an offense: in other words, the reciprocation of coercion in a direct attempt to prevent coercion from happening to you.

But the reciprocation of coercion usually only applies to emergency scenarios, where options, resources, and time to make a decision are all limited. It is obviously preferable to defend oneself against coercion through non-coercive means when possible.

For example, if someone tries to blackmail me and I have 72 hours to comply with their demands, it would probably be better for me to call the police or a lawyer or something rather than try to blackmail the other person back.

Coercive defense is a defense of last resort, when non-coercive defense is simply not available due to the circumstances.

But remember also that in a scenario where you must use coercion as a defensive measure, it is only because the aggressor has already forced you into a position of limited options, and in effect has already succeeded at coercing you on one level by forcing you into a position where reciprocal coercion is your only way out.

olly said...


Thank you for the post!
That is exactly what I was looking for in response. And I have a bit of a confession to make: I really do buy your fact-based individualism, and have for a while now. But I'm a firm believer that the system untested is the system unworthy, and without testing it myself against its author, I had no way of truly working the kinks out of it. Consider me accepting of your system, since indeed it fits how I live my life anyway, but also consider me someone who will try to catch you 'red-handed' in the future as well!

Thanks again, I've enjoyed our conversations, and my blog will continue, as I turn back to what I'd like to focus on anyway, baiting the Fish Bibblers!


Anonymous said...

From the point on where you say, "any kind of bond between two people, if strong enough (in other words, if incorporated enough into one's perceptual lens or worldview), will cause a person to put himself at considerable risk for the sake of the other, since the interest of the victim directly relates or affects the interest of the rescuer/intervener.", you seem to contradict your stated intention, "I have justified the fact-based individualist moral system."

It seems to me that you have done the opposite. I fail to see where your earlier statement and the examples you give justify a moral system.

For an extended deterministic analysis of certain ideas such as moral choices, free will, and the soul, listen to Daniel Dennett's lecture on "Freedom Evolves--A Dangerous Idea".


He does use expressions such as "moral choices", but in a very restricted sense. (Sorry, I haven't found the transcript of his talk.)

To see a negative review of Dennett's book, go here:


I think this reviewer gets many things wrong and is engaging more in self-justification of his preconceived ideas rather than really trying to deal with Dennett's hypotheses. But he does present some of the questions.

"On the freedom side of the ledger the reader must quickly realize that Dennett’s topic is free will, not political freedom or psychological freedom from one’s worries, or any of the other kinds of freedom that might legitimately be of concern. But free will is no little thing. On it hangs our humanity. If persons are not free in their choosing then they are not moral agents responsible for the intentions and consequences of “their” actions. In fact, without freedom of choice homo sapiens only behave; they do not act. And so they are not persons at all."

But in fact, homo sapiens do only behave. But that doesn't mean they are not persons. The reviewer obviously has his personal definition of what constitutes a person.

"Dennett tries to reassure us; you’ve got free will (don’t worry), but it’s not the kind you thought you had. The modest kind you actually have “is all the freedom worth wanting.” Get used to it. The freedom you are addicted to is not good for you. Learn to live with less. Less is more.

So now we know. We know what we are getting into when we let him get his foot in our door. Dennett’s strategy is to substitute a “hermeneutical switch of perspectives” (heuristically adopting an intentional interpretation of “avoidance behavior” for the metaphysically and morally hearty choosers we think we are; and then he substitutes “caused as inevitable effects” as a definition of determinism. The second substitution may be more acceptable than the first.

Dennett is motivated to explain how free will could naturally evolve because he believes that the “false belief” that free will is impossible in a causally determined world is the driving force behind most resistance to materialism generally and to neo-Darwinism in particular. (p.15) On the contrary, Dennett insists: “Naturalism is no enemy of free will; it provides a positive account of free will,” one free of superstition and panicky metaphysics. (p.16) And then comes the confession in small print, overlooked by some: “I can’t deny that tradition assigns properties to free will that my variety lacks. So much the worse for tradition, I say.” (p. 225)

When Dennett complains that our use of exaggerated definitions of free will and determination causes problems for empirical theorizing, we wonder this: does he have a genuine concern for morality, or is it science he really cares about? His answer, that “we are evolved animals without souls but with free will” is ambiguous in this regard."

"Dennett, in his defense, says that whether free will is real or illusory depends on what you mean by “free will.” As an aside he points out that, in either case, free will (when regarded as harm-avoiding behavior) could have survival value—reproductive consequences. But what we want to know is: can harm-avoiding capacity be free enough to make us responsible for what we do, responsible enough that we might come to feel bad about ourselves as the price of, hopefully more often, feeling good about ourselves? We want buck-stopping responsibility. And that requires “ownership” of our actions and attitudes. The measure of our bad-avoiding success is whether we freely chose the good. Freedom is not, as Dennett claims, merely “the capacity to achieve what is of value in a range of circumstances.” Thermostats do that. What thermostats don’t do is “own” the values they seek."

Dennett presents the idea of free will in the context of the opposition of inevitability/evitability. He says that what we know as free will is just the increase of situations where we have evitability. The last example by the reviewer is badly chosen, as Dennett at the beginning says that we are made up of tiny "robots" i.e. chemicals and molecules that determine our actions and that don't care about us as a whole or as a person. "Owning" is not part of Dennett's thesis.

BlackSun said...

Hi Aaron, great post. And thanks for calling me "esteemed." ;-) Self-esteem is great, but so is recognition!

I basically agree with you that we incorporate our version of morality into our subconscious calculations which are constantly happening below our threshold of awareness. In my response to Olly's post, I was not arguing that a person would be incapable of responding quickly enough, but that it was unlikely. For an act of such great consequence, we would usually spend a lot more time in consideration.

I have observed the following difficulties with fact-based moral systems such as the one you described:

1) What happens when free agents do not subscribe to the moral system?

2) What happens when people have to choose between following their moral system, and their survival?

In both these cases, we have a situation where people pursue self interest and bend the rules to their own advantage. I think this is the most likely situation for humans. This is why we have ongoing physical warfare, battles through politics and memetics, and all manner of deceptions.

I agree with you that in a relationship between two rational, moral people, deception is a form of coercion. But these forms of rule-bending are so fundamental to human nature and evolution that I don't think we can avoid dealing with them. I touched briefly on this in the last episode of Vox Populi, where I mentioned that evolutionary psychologists have theorized that a primary factor in human brain development has been the capacity for deception of the opposite sex, and consequently also the capacity for detection of that deception by the opposite sex.

It is clear that on a subconscious level, everyone is constantly analyzing the ever-changing circumstances and people in their lives. Often a strategic calculation is made as to whether or not a given moral transgression will be recognized, and/or punished. (It doesn't matter whether this punishment or repercussion comes through action of the law, retaliation, or damage to one's reputation.)

I do find it useful, as you have done, to elaborate what the ideal moral system would be. I think it provides a great starting point for ethics. In much the same way, the ideal gas law in thermodynamics, or approximations of the behavior of electromagnetic fields, can be useful for basic calculations. But in the realm of real-world moral situations, as in real-world scientific observations, far more complex factors must be taken into account.

An important fact that bears on this discussion is the concept that all deception first involves self-deception, even if it is only in deceiving myself about the rightness or wrongness of a particular act. We might call that "rationalization."

I think the reptilian brain is programmed to survive at whatever cost, and only through forming the agreements that make up civilization can those instincts be tamed. Then, what takes over as you described is mutual self-interest. In a peaceful civilized society, people don't see everyone else as a potential enemy and are fairly likely to help others, so long as the cost to themselves is not too great.

In other words, as you said we'll call 911 to report a broken down car or accident, but most likely we would not stop and risk our own life to help someone.

But I think it is a constant battle between the desire to get ahead, and a desire to avoid danger, censure and punishment. It is often a simple gamble, and in many cases--to those who risk go the rewards.

A great book, which discusses these types of questions is called "The 48 Laws of Power" by Robert Greene. It is not a book on morality, per se, but rather a collection of 48 historical examples that show the benefits of following a particular law and the devastating consequences of ignoring it.

I realize this is not exactly a book on a fact-based moral system related to human nature. But given the historical context of these laws, I think we would be insane to ignore them. Not because such Machiavellian principles are necessarily ethical, but because they describe the actual methods humans have used to get things done. The basic premise of the book is that if we do not use the principles to our own advantage, it is certain they will be used against us by others.

I know this is a little off-topic, but I think it would be good to broaden your already excellent discussion of fact-based morality.

Francois Tremblay said...

"It seems to me that you have done the opposite. I fail to see where your earlier statement and the examples you give justify a moral system."

I guess some people just don't want to hear the facts...

Anonymous said...

FT, I have given you the facts. If you can't see the contradiction in Aaron's thesis and his examples, too bad.

It would be better to consider the facts and the analyses of the scientists Dennett and Gould that I cited before going into fuzzy-logic philosophizing.

If people like Dennett and Gould are right--and I am not saying they necessarily are--then these theories above on moral systems don't hold water.

I thought that atheists had a scientific outlook, or at least that they should have. You have to consider the scientific evidence before you construct a theory.

Francois Tremblay said...

"If you can't see the contradiction in Aaron's thesis and his examples, too bad."

One cannot see what is not there.

Aaron Kinney said...


You said:

From the point on where you say, "any kind of bond between two people, if strong enough (in other words, if incorporated enough into one's perceptual lens or worldview), will cause a person to put himself at considerable risk for the sake of the other, since the interest of the victim directly relates or affects the interest of the rescuer/intervener.", you seem to contradict your stated intention, "I have justified the fact-based individualist moral system."

It seems to me that you have done the opposite. I fail to see where your earlier statement and the examples you give justify a moral system.

I think you simply have a failure of comprehension. If it "seems" that I've done the opposite, then why don't you explain to me why you think that is? Support your assertion that my examples can be used to prove the opposite of what I'm trying to prove.

What my examples showed was that even the most seemingly self-sacrificial act done for the benefit of another is, in reality, a fundamentally selfish choice on the part of the one who is seemingly sacrificing himself.

I spelled it out in plain language in multiple examples, from Olly's taking-a-bullet-for-his-wife example to the Japanese Kamikazes and more. In each of these cases, the Kamikaze pilot or the bullet-taking husband is acting within his own self-interest, because his wife or nation or whatever is a part of his value system; it is a part of his perceived self-interest.

I'm not interested in pawing through the collected works of Dennett to figure out which of his claims apply to my moral system.

If YOU think that Dennett's work has relevance to my system, then YOU should explain, in your own words, what his different claims are and how they counter mine.

All you've done is paste tons of someone else's work and make empty claims about how it counters everything I said. I'm not doing your homework for you. Take whatever claims of his you want and tell me directly how they apply to the things I've said. And explain why you think that what I said actually proves the opposite of what I claim it proves. And explain to me how a Kamikaze or a bullet-taking husband ISN'T performing those actions in his own self-interest, since I've clearly and plainly explained why I think that they ARE acting within their own self-interest.