Philosophy and Science: Overlapping Magisteria
Recently, Massimo posted about Michael Shermer's misguided attempt to claim for science what traditionally - and rightfully - belongs to philosophy. It is another episode in a growing trend, exemplified by Sam Harris' book-length treatment of the same matter in "The Moral Landscape". For anyone convinced that ethics is ultimately the proper domain of philosophical inquiry (though philosophical reasoning can and should be "informed" by our best science), it can be a very frustrating experience to have to continually combat this rising tide of incipient scientism.
But, of course, that doesn't mean we stand back and assume that the opposing viewpoint will ultimately exhaust itself. On the contrary, we should be "more" inclined to criticize positions that are becoming successful (i.e., popular); at the very least, responding to others sharpens our own way of thinking. In this sense, I must admit to a love/hate relationship with people like Sam Harris. Although I disagree with many of his positions, I admire his tenacity and his courage to stand alone and be criticized. And his popularity (or at least his controversial public persona) helps create the necessary conditions for a vigorous dialectic - and "that" is a good thing.
Harris' central premise, which is essentially the same premise shared by Shermer and others in that camp, is that questions of value can "at least in principle" be reduced to facts about the well-being of conscious creatures, and that these facts "and their interpretations" fall within the purview of science. Harris further maintains that the most relevant discipline here is neuroscience.
Back in August 2012, "Philosophy Now" published an essay by philosophers Julian Savulescu and Ingmar Persson entitled "Moral Enhancement", in which they essentially argue for a nascent program of eugenics, what they call the "biomedical means of moral enhancement," or "moral bioenhancement." Their reasoning is that the evolutionary course of the human species has left us ill-equipped to deal with specifically modern existential challenges like global climate change or warfare involving weapons of mass destruction, which threaten to eradicate all sentient life on the planet. This echoes Sam Harris' motivation for writing "The Moral Landscape": "changing people's ethical commitments... is the most important task facing humanity in the twenty-first century."
But, just as with Harris' book, where he admits that concepts like "well-being" and "flourishing" are notoriously difficult to measure, Savulescu and Persson also acknowledge that "it is too early to predict how, or even if, any moral bioenhancement scheme will be achieved." What these authors do have in common is an unflagging confidence that science will be able to figure it out. I understand this feeling of confidence. In fact, I share this confidence concerning many if not most things science tackles. But ethics isn't one of them.
I think that moral reasoning and the related dialectical activity are the most important things we can do in life. I believe this not only because of the obvious existential threats we face, but because knowing "how to live and what to do," to paraphrase the late poet Wallace Stevens, seems to be the most indispensable and perhaps even the oldest need of our species - going back possibly to the earliest emergence of self-consciousness in our evolutionary lineage. I mean, so far as we know, other animals don't experience ethics (broadly conceived) as a "problem"; every aspect of their existence is determined or ordered by instinctual behavioral patterns. Obtaining sustenance, finding mates, avoiding dangers: these aren't problems for them in the way ethics is a problem for "us". We still face the same issues, of course, but our nature as social creatures and, more critically, our capacity for "knowing that we know" (and knowing we have the ability to choose between alternatives based on reflection) create the possibility for "doubt" about which course of action is best, whether it's deciding which personality type would ensure the best marriage or which hobby or career would give the most satisfaction in life. We humans do more than wonder which action is the most "utilitarian" (lowercase "u"); we also wonder which action is the most "rewarding".
Despite Harris' confidence in his moral realism, there is a streak of relativism in his own approach as articulated in "The Moral Landscape". In my own annotated copy of his book, I've marked six significant concessions to the variability of the concept of "well-being." Most tellingly, when comparing moral well-being to the notion of physical well-being (i.e., health), he says that science "cannot tell us why, 'scientifically', we should value health." Of course, Harris doesn't see this as a knockout punch to his general project; he goes on to say that "once we admit that health is the proper concern of medicine, we can then study and promote it through science." Yes, that's true; but the key words here are "study" and "promote." Just think about the voluminous yet conflicting scientific pronouncements on health from sites like Medical Xpress: one week there's a report that "coffee will make you live forever!", and the next week they report that "coffee kills you faster!" (Full disclosure: I truly believe coffee will allow me to live forever. Of course, my cognitive apparatus may be compromised by addiction in this case.) Reading any of these medical news aggregation sites illustrates perfectly the amorphous nature of "well-being," whether physiological or psychological.
And as a professional dog trainer, I have to ask: whatever happened to good old operant conditioning, the discovery that a system of naturally-occurring rewards and punishments determines and shapes behavior? The principles of operant conditioning apply to all animals, humans included. There are four basic "quadrants" delineated by this concept: 1) positive reinforcement; 2) negative reinforcement; 3) positive punishment; and 4) negative punishment.
Positive reinforcement is the idea that when a subject receives a reward for performing a particular action, that subject is likely to perform the action with more frequency and more vigor in the future; in most cases it results in "lasting" behavior modification. A classic example of this is the pigeon in the lab pecking a key and receiving a pellet of food every time it does; the bird will peck the crap out of that thing!
Negative reinforcement, on the other hand, is the idea that "removing" an aversive stimulus will increase the frequency and vigor of a desired response. As an example, think of that godawful buzzing noise you hear if you don't buckle your seatbelt when driving: buckling up silences the noise, which makes buckling up more likely in the future.
Positive punishment is by far humanity's favorite method of behavior modification (despite its inadequacy), and it entails "introducing" an aversive stimulus in order to decrease the frequency of an undesirable behavior. I hardly need to cite an example, but think of smacking someone's hand when they reach for the cookie jar.
Negative punishment, on the other hand, is the removal of a desired stimulus or reward in order to decrease an undesirable behavior. Think of taking away your teenager's video game privileges because he's been bullying his sister.
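To make the structure of these quadrants explicit, here is a minimal illustrative sketch in Python - my own gloss, not anything drawn from Harris or the behaviorist literature. It treats the quadrant as the answer to two questions: is a stimulus being added or removed, and is the aim to increase or decrease the behavior?

```python
# Illustrative sketch of the four operant-conditioning "quadrants" as a 2x2 grid:
# (stimulus added vs. removed) x (behavior increased vs. decreased).
# The example descriptions below are simply the ones used in the text above.

def classify_quadrant(stimulus_added: bool, behavior_should_increase: bool) -> str:
    """Name the operant-conditioning quadrant for a given intervention.

    "Positive"/"negative" refer to adding or removing a stimulus;
    "reinforcement"/"punishment" refer to increasing or decreasing a behavior.
    """
    polarity = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_should_increase else "punishment"
    return f"{polarity} {effect}"

examples = [
    ("pigeon gets food for pecking a key",             True,  True),   # positive reinforcement
    ("seatbelt buzzer stops once you buckle up",       False, True),   # negative reinforcement
    ("hand smacked for reaching into the cookie jar",  True,  False),  # positive punishment
    ("video game privileges taken away for bullying",  False, False),  # negative punishment
]

for description, added, increase in examples:
    print(f"{classify_quadrant(added, increase):25} <- {description}")
```

Laid out this way, the point is easy to see: the four quadrants aren't four unrelated tricks but the complete set of combinations of two simple choices.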
The unique and most fantastic consequence of discovering these principles is that we are in a position to intentionally manipulate and exploit them, whether with other animals (consider the history of animal domestication for human benefit) or with our fellow human beings. We don't have to simply rely on naturally-occurring environmental circumstances to trigger a behavior that we "hope" will happen. We all use these four quadrants to varying degrees every day, without really being aware of what we're doing (or why), and without realizing that if we applied a little sophistication to our approach, we could make more effective use of them.
Of course, I think there is a bit of difference between changing "behavior" and changing "beliefs"; but if behavior flows from belief, then the principles of operant conditioning should be able to accomplish what we desire from moral philosophy, assuming we're assiduous enough and creative enough to apply them properly. If we can succeed in changing behavior for the better, do we need to change beliefs? If we can succeed in indefinitely deterring Iran from using nuclear weapons against Israel or us, do we need to change their belief that we're the Devil Incarnate?
I think those in Harris' camp believe that "science" - particularly neuroscience, but possibly even evolutionary psychology and the like - is a powerful shortcut to the type of behavior modification we're all seeking. Harris contrasts the type of "science of morality" that psychologists like Jonathan Haidt and Joshua Greene do with the kind Harris envisions. He believes that theirs is important but ultimately insufficient. I also believe that the work of those like Haidt and Greene is important; but I think that it's up to philosophers to be aware of and utilize the findings from "this" science of morality in their moral reasoning.
Interestingly, there was an article entitled "The Folly of Scientism" in "The New Atlantis" by biologist Austin L. Hughes, who takes Harris and others to task. Hughes blames in part the discipline of philosophy for abdicating its prerogative with regard to some intellectual matters, allowing the louder voices of the hard sciences to take over discourse on things like "values" and such. I'm not a part of academia, but based on published books, essays and blogs by philosophers, I don't think Hughes is correct here. It seems more likely that scientists have simply become emboldened by and enamored of the success of their respective disciplines, and are thus riding that wave onto the shores of philosophical discourse, where they come crashing impudently down.
Unlike the dispute between religion and science, where most people believe the two approaches have nothing to say to each other, and where Stephen Jay Gould famously sought to establish an ideological Switzerland with his notion of NOMA (Non-Overlapping Magisteria), philosophy and science "do" overlap; and I believe the best course forward is to maintain and "enhance" the dialogue currently taking place between philosophers and scientists. Having prominent (or at least popular) thinkers like Shermer and Harris stake out their positions with verve, and having others muster an equally vigorous critique of those positions, carries on the ancient Greek tradition of the "agon" - a good way of getting clear on how to solve the problems of our age.
Harris and others seem to be desperately seeking a way out of this intellectual morass we call moral philosophy. But why should we expect it to be anything "but" a morass? Why should we expect definitive or clear-cut answers to ethical questions? Instead of trying to settle once and for all the questions upon which humankind has meditated since time immemorial, we should strive for the best "approximation" to sensible answers, which will of necessity be moving targets (at a minimum, ensuring job security for philosophers!).
A "science of morality" should result from the best efforts of philosophers, psychologists, neuroscientists, sociologists - and even economists - working in the most open, mutually-beneficial manner possible. I think that is what is actually shaping now, despite a growing tidal swell of scientistic sentiment coming from some skeptical quarters. And scientism needs to be countered both because it's intellectually misguided and because it engenders endless misconceptions about science in the public at large.
Shermer's piece was a response to John Brockman's annual question, "What should we be worried about?" In my opinion, we should be worried about the usurpation of philosophy by scientists.