Imagine a small boat stranded in the middle of the ocean. There is no one around for miles and no way to signal for help. There is a limited amount of food and water, but there is a more pressing matter: the boat is sinking. Slowly but surely, it will fill completely with water unless one person jumps out. The boat is made for only five passengers, and there are six people on board: a priest, a young woman, her baby, a famous celebrity, an old man, and a person convicted of numerous crimes. Who should be sacrificed to save the rest?
According to Joshua Greene, Ph.D., a philosopher and neuroscientist, there are two different ways of viewing moral situations like the one above. Some questions require people to logically assess the situation and come up with a reasonable solution, which might require sacrificing a few people to save many. This view supports a utilitarian morality, which would allow a few to die as long as some greater good is achieved. Other questions evoke a more emotional response. Those who answer this way, whom we would call deontologists, would argue that killing is wrong no matter the circumstances. They would protect every life, even the smallest, such as the baby, or the most undeserving, perhaps the criminal.
In an article entitled "An fMRI investigation of emotional engagement in moral judgment," published in the journal Science, Greene describes a study that posed two very similar situations, each evoking a different response from his subjects. He scanned their brains as the two questions were asked. He notes:
"A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Ought you to turn the trolley in order to save five people at the expense of one? Most people say yes. Now consider a similar problem, the footbridge dilemma. As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge that spans the tracks in between the oncoming trolley and the five people. In this scenario, the only way to save the five people is to push this stranger off the bridge, onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Ought you to save the five others by pushing this stranger to his death? Most people say no" (Greene, 2105-8).
Using fMRI, Greene found that in the footbridge scenario, the regions of the brain associated with emotional processing were activated and lit up on the scan; in the trolley scenario, those same areas were not activated. Some moral questions invite a more logical approach. These feel impersonal to us, so we can perhaps justify killing a few to save many, and we would choose to allow the one person to die to save the five on the tracks. Others are answered with a more emotional and personal touch: if we apply a universal morality to the situation, such as respect for fellow human beings, then how could we ever allow one person to be killed?
Greene observed high activity in brain regions associated with emotion when subjects were asked about killing babies, even if such an action would save a small town from invading soldiers, for example. Where utilitarian thinking dominated, he observed high activity in regions associated with cognitive function. In one such area, the right anterior dorsolateral prefrontal cortex, activity increases in those who consider the more rational or utilitarian choice, in this case choosing to smother the baby. Greene argues that there are two opposing systems in our brains. The first, the ancient emotional brain, embodies the universal morality of the deontologists, who disapprove of killing. The second, the newer brain, equipped with higher-power cognitive function, reflects the utilitarian's "greater good." He frames this not as a dichotomy of reason and emotion, but as an evolved opposition between "areas associated with cognitive control and working memory" and "areas associated with emotion," with an obvious bias toward the former.
There are some obvious flaws in using fMRI to study the neurology of thought and emotion. The fMRI signal merely correlates with a function in the brain: if a particular region lights up, that does not mean the signal originated there. According to "Does Neuroscience refute ethics?," published at mises.org, "In fact, the fMRI signal does not even provide a direct measure of the spiking of neurons, so we do not know whether it reflects the inputs or outputs of the activated area." Even with hard data like fMRI scans, it is difficult to extract a moral meaning; we cannot find meaning in data where there is none. For example, we cannot conclude that candy is evil because dentists have shown that sugar causes cavities. On the flip side, human emotions like love and hate cannot be dismissed as less useful than hard facts, especially in matters such as relationships and family. Fancy scans can demonstrate brain activity, but they cannot prove that the cognitive functions leading to a more utilitarian decision are morally superior to emotionality; that conclusion assumes that reason always trumps emotion and feeling.

Greene believes that moral relativism is far more applicable than universal morality. On this view, we can each follow our own moral compass, so long as it leads to some sort of benefit in the end, and we cannot be held accountable for our actions if every person's beliefs about murder and stealing vary. If you don't support this, then your brain must simply be more prone to emotional thought, or your "emotional brain is overdeveloped." The article sarcastically comments that though Greene uses fMRI scans to support his findings about opposing brain functions with regard to thought and morality, everyone is entitled to their own opinion, and that Greene concludes that "1) there are no moral facts, it's all a matter of opinion; and 2) we should all become utilitarians and donate to charity."