Tuesday, April 29, 2008

Developmental Biology

What it is and why you should know about it.
DATE: April 29, 2008
TIME: 1:00 PM EST

Scientists have grown a human ear on the back of a laboratory mouse using cartilage cells from a cow. In the peritoneal cavity of a mouse, scientists have coaxed a severed human fetal limb to grow into a tiny human hand. Scientists have also confected a human jaw bone in the laboratory, elevating "plastic surgery" to new heights.

Scientists can grow human neurons (brain cells) in a Petri dish and use them to test for drug toxicity. They can also grow sections of human brain within laboratory animals to study human neurogenesis.

Scientists have created hybrid animals like the "geep" (through the fusion of a goat and a sheep embryo), and they are rapidly developing the technology to grow synthetic strands of DNA, insert them into living organisms, and alter those organisms to breed heretofore unimaginable hybrids. Scientists are also acquiring the technology to coax stem cells to become sperm and egg cells so that, one day, gay and lesbian couples might be able to be the genetic parents of their own offspring through IVF.

This all appears to be a mix of the macabre, the medically promising, the morally good and the morally perilous. Welcome to the world of developmental biology.

At the risk of oversimplifying, we can describe developmental biology as the study of how the organism as a whole governs and guides its self-development and self-maintenance as a living being. This marks a relatively recent development in biology. In previous decades, the field took what we might call a "parts-to-whole" approach: it reduced biological processes to their biochemical underpinnings and endeavored to unlock the secrets of these fundamental dynamics, culminating in the monumental sequencing of the human genome.

With the advent of developmental biology, the field assumes a "whole-to-part" approach as it now endeavors to study and harness the laws which govern the genesis of whole organisms. Of paramount interest here is discovering how human embryos "do it": how a one-celled human zygote brings about the development of an entire human organism.

The future possibilities of this science hold out the prospect of medical breakthroughs that were unimaginable only a few years ago: the elimination of certain birth defects, the generation of human organs in the laboratory, recovery of motility after spinal cord trauma, a cure for Parkinson's disease, and so on. The acquisition of such knowledge is now the true holy grail of developmental biology. But again, of paramount importance to acquiring that knowledge is the ability to conduct research directly on human embryos.

This is why efforts to defend embryonic human life will only be realistic and effective if they take into account the full reality of this rapidly emerging field.

Undoubtedly, we must acknowledge the legitimate aspirations of this field: to further human knowledge by acquiring an understanding of the dynamics of organismic development, and to put that knowledge at the service of humanity. As opponents of embryo-destructive research, we must also understand that there is no such thing as turning this field back or saying, "stop Brave New World, I want to get off!" Nor, in principle, is there reason to desire this.

Notwithstanding the more harrowing scenarios I described above, and the way Hollywood plays on our deep-seated suspicions of such science [think of The Island or, more recently, I Am Legend], I would suggest that we have nothing to fear in principle from developmental biology.

I say, in principle.

Are there potential perils in developmental biology? Are these extraordinarily dangerous in some respects? Of course. But those dangers in themselves do not constitute reasons for forgoing the progress of human knowledge in this particular field. Human knowledge is a fundamental human good; but from the Garden of Eden onward, history has witnessed that it is the free use of knowledge--not the knowledge itself--which can lead to evil outcomes.

We are, nonetheless, at a genuine turning point in human history. As Stanford's Dr. William Hurlbut, member of the President's Council on Bioethics, has affirmed,

In reflecting on these dilemmas, it was immediately clear that we are at a defining moment in the progress of science. The choices our society makes now regarding embryonic stem cells (and other ethically controversial uses of biomedical technology) will put into place both the conceptual principles and practical foundations for future techniques of research and patterns of clinical practice. Once established, these moral precedents and scientific techniques will serve as the platform on which further practice will be built layer upon layer; like the foundations of a building, these will be difficult to retract or revise.[1]

So, outside of setting up a commune somewhere north of Saskatoon, I would suggest that our only way forward--in order to preserve the integrity of human dignity at all stages of life in the age of developmental biology--is to work toward an adequate delineation of what many have called the "boundaries of humanity."

This means working to discover solutions that will allow the science of developmental biology to go forward, while at the same time precluding the direct use of human embryos, or at least substantially minimizing that use by offering ethically and scientifically acceptable alternatives. The Westchester Institute has been dedicated in full to just such a project for the past three years, and we will continue to be.

Our efforts have been directed toward sustained and painstaking moral and scientific consideration of what distinguishes a human organism from non-human, non-organismic biological artifacts. It can be morally licit to create the latter in the laboratory under certain conditions. But such discernment is proving to be very difficult. It requires a delineation of the minimum biological and metaphysical requirements for organismic existence. It requires us to attempt to define the set of primary, necessary, and sufficient indicators of what constitutes a living human organism. Such efforts hold out the hope that this demarcation will one day offer sound scientific and philosophical insights on which to base moral judgments regarding the creation, use, and moral status of an array of biologically confected entities of genetically human origin.

Again, our efforts here are motivated by the concern that many of developmental biology's pet projects would become so much easier if scientists could just work directly on human embryos to harness the laws that govern the genesis of entire organisms, organismic systems and parts.

If--if a majority of Americans didn't consider it morally repugnant to manufacture human embryos solely for research purposes.

How long before they are finally disabused of such an antiquated "moral taboo"? Perhaps as soon as January 20, 2009.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

[1] William D. Hurlbut, "Framing the Future: Embryonic Stem Cells, Ethics and the Emerging Era of Developmental Biology," Pediatric Research, 59, 4 (2006) 6R. Dr. Hurlbut is Consulting Professor of Neurology and Neurological Sciences, Stanford Medical Center, Stanford University.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Benedict at Ground Zero

A moment "firmly etched" in his memory and ours.
DATE: April 22, 2008
TIME: 9:00 AM EST

Benedict’s sojourn among us last week was composed of countless “moments.”

There were, first and foremost, all those personal “moments”: “He looked right at me!” “He smiled at me!” “He grabbed me by the hand!”

There were the moments of ecumenism.

There was that moment of fraternity with his brother bishops which did not lack candor and admonishment—albeit highly measured—for how some bishops had failed to be the shepherds they should have been in the handling of sexual abuse by priests. And then the moment Benedict met with some of the victims of that abuse.

There were plenty of light moments as well. About all he had to do was light up with that shy grin of his to send electricity through his audience. There was also that particularly warm moment when he greeted physically disabled young people at St. Joseph’s seminary in Yonkers.

And then, of course, there was the moment at Ground Zero—a moment on which I now want to reflect in greater depth.

“My visit this morning to Ground Zero,” Pope Benedict told 3,000 well-wishers present to see him off at JFK airport on Sunday night, “will remain firmly etched in my memory, as I continue to pray for those who died and for all who suffer in consequence of the tragedy that occurred there in 2001.”

We can only hope that Benedict’s uninterrupted moments of silent prayer before a small reflecting pool built for the occasion have brought the family members of those who perished on September 11th closer to closure—a word we were hearing a lot on Sunday. One thinks especially of the families of the approximately 1,100 victims of the attack who never recovered so much as a fragment of the bodies of their lost loved ones. Benedict blessed them and he blessed the ground—the hallowed ground—in which those bodies, as one family member of those 1,100 put it, are simply understood to rest.

Theologian and commentator George Weigel wrote last week in Newsweek magazine about another kind of “moment” Benedict may have already had—not necessarily during this apostolic journey to the US, but perhaps already somewhere in his three-year-old papacy.

Weigel was recalling the June 1979 visit of Pope John Paul II to Poland. Wrote Weigel:

Cold-war historians now recognize June 2–10, 1979, as a moment on which the history of our times pivoted. By igniting a revolution of conscience that gave birth to the Solidarity movement, John Paul II accelerated the pace of events that eventually led to the demise of European communism and a radically redrawn map in Eastern Europe. There were other actors and forces at work, to be sure; but that John Paul played a central role in the communist crackup, no serious student of the period doubts today.

Weigel’s salient point, however, is that few people were able to discern the significance of that trip at the time. There were certainly many reasons for this, but a deeper reason, suggests Weigel, might lie “in the filters through which many people read history today.” He notes that, according to one such filter, religious and moral conviction is irrelevant to shaping the flow of history. Nearly thirty years since that historic trip, history itself has demonstrated the stark inadequacy of such a filter.

Whether or not Benedict’s own “June 1979 moment” has already come, only time will tell. I don’t expect his presence at Ground Zero will necessarily play itself out as that moment, but then again, who knows? In addition to the peace and—we can only hope—further healing it brought to the families of the victims, how can we fail to grasp other significant aspects of this event?

In the person of Benedict, faith and reason met at Ground Zero on Sunday—the faith of the leader of one billion Catholics in the world, and the reason of one of contemporary history’s most educated men. Benedict has been unflinching in his contention that faith without reason (religious fanaticism) and reason without faith (secularism) are dangerous paths for humanity. Might this event occasion an even more intense dialogue between Islamic and Christian intellectuals on the place of religion in public life, its ability to shape culture, and our common need to respect religious freedom? Might Benedict’s presence at Ground Zero give renewed vigor to those agents of culture who are striving to disabuse Americans of our own brand of secularism, which relegates religion—at best—to the realm of the personally therapeutic and quirky, if not seeing it as something inherently divisive and even dangerous? We can only hope—precisely what Benedict would have us do.

The iconic image of the 81-year-old Pope lost in prayer before a reflecting pool at Ground Zero was, in the end, a poignant reminder that we live in a time, we might say a season, of great consequence for humanity. Time and again, almighty God—faithful to his creatures to the very end—has raised up men and women to lead us through remarkable seasons of the Church and of human history.

Isn’t this why, come to think of it, Robert Bolt’s play A Man for All Seasons—the story of just one such individual, Sir Thomas More—has garnered such a timeless appropriateness and meaning? We have good reason to believe that Benedict is a man for our season, and that his moment has now come.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

What Will Benedict Tell America?

Ten things I'd love to hear him say.
DATE: April 15, 2008
TIME: 9:35 AM EST

In the (highly unlikely) event that I get a phone call later today from the Pope’s secretary asking me for input on the speeches that Benedict will deliver here this week, here are some talking points I would offer for his consideration. These are the things I would love to hear Benedict say:

First, to Catholics in America:

  • Contrary to recent news reports, I have not come to “give you a boost,” or a “shot in the arm” or to lead a pep-rally for you. Actually, I’ve come to challenge you to be more authentically and thoroughly Catholic. Millions of your brothers and sisters throughout the world actually embrace the fullness of Catholic teaching, especially on moral issues, without picking and choosing—as too many of you do, cafeteria style—which doctrines you like and which ones you don’t. Those who embrace the fullness of Catholic teaching are not mindless and subservient automatons. Rather, they have considered the reasons behind such teaching and found those reasons thoughtful and convincing.
  • I’ve also come to remind you that the Catholic Church is bigger than the Church in the United States. It’s important for you to recognize that the needs of the Church throughout the world are too great, and our shared mission too big, to be lost to self-absorption.
  • You presidents of—supposedly—Catholic universities: do the human family a favor and please be authentically Catholic in your campus life and academic culture. Such “catholicity” on a Catholic campus does not translate into accommodating—in the name of “tolerance”—customs, behaviors, art forms, student associations or doctrines on campus or in the classroom whose core messages and philosophies are antithetical to the Gospel. (And, yes, I am referring to “The Vagina Monologues” among other things.) Tolerance, by the way, is not the core virtue of Catholicism; and there’s much more to being Catholic than working for social justice.
  • My brother priests: please recognize, if you haven’t already, that the level of religious knowledge and practice in your parishes is often near zero. Treat your parish ministry as a genuine mission field. Far too many Catholics hold to a feel-good, design-your-own brand of Christianity which is a hybrid of Catholic faith and modern therapeutic, self-absorbed emotivism. If you fail to preach the whole Word of God, then the situation will continue to worsen until the actual Catholic faith is only a faint memory in the minds of most of the laity.

Then, to all Americans:

  • Don’t be afraid of asking the big questions (about God, truth, and ultimate reality). Instead, fear the peril of falling into that existential boredom so characteristic of Europeans these days.
  • Be the leaders, culturally and politically, in rejecting the idea that science should be untethered from moral restraints.
  • Keep the discussion about world religions honest, and don’t let a misguided understanding of “tolerance” lead you to accept anti-Christian bigotry and hatred. And just because I give you reasons for the values that I uphold, it doesn’t mean I am trying to “impose” my values on you.
  • Remember that the moral principles which sustain a healthy society (sanctity of life, marriage, etc.) are not simply faith-based, but are in fact naturally human and rational.
  • Remember that "democracy" is not a magic word. “Democracy cannot be idolized to the point of making it a substitute for morality or a panacea for immorality. Fundamentally, democracy is a ’system’ and as such is a means and not an end. Its ’moral’ value is not automatic, but depends on conformity to the moral law to which it, like every other form of human behaviour, must be subject: in other words, its morality depends on the morality of the ends which it pursues and of the means which it employs. But the value of democracy stands or falls with the values which it embodies and promotes.”[Note to his Holiness: you will likely recognize this last paragraph; it’s from Evangelium Vitae, n.70.]
  • A religiously pluralistic society can co-exist peacefully without asking people of faith to suspend their commitment to the truth of their doctrine. Religious dialogue does not consist in everyone agreeing to abandon their particular truth claims in order to come together and profess that no one's vision of ultimate reality is any better than anyone else's. A mutual commitment to the truth and a healthy respect for our common struggle for it is the sounder basis of inter-religious dialogue and tolerance. Don’t allow your religious and creedal beliefs to deteriorate into a tyranny of relativism.

§

Hmmm. Maybe I should go ahead and send these talking points along just in case. Now, where the heck did I put the Holy Father’s fax number?

___

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Tuesday, April 8, 2008

When Do We Die?

A long-established criterion for determining death is under growing scrutiny.


Thirty-six hours after Zack Dunlap had an accident last November with his souped-up ATV, doctors performed a PET scan on Zack and found there was no blood flowing to his brain. After informing his parents, the doctors declared Zack brain-dead. Then followed the call to the organ harvesting team to come and retrieve organs from Zack. As they were being flown in by helicopter to the Wichita Falls, Texas, hospital where Zack lay presumably dead, nurses began disconnecting tubes from his inert body. It was only then that one of Zack’s relatives, who happens to be a nurse, tested Zack for reflexes. Not only did Zack respond to pain, he was later able to tell a stunned television audience and Today Show host Natalie Morales that he heard the doctor declare him brain-dead—and how much that ticked him off.

Stories like Zack’s seem to be more prevalent of late, and more disturbing. They occasion reasonable doubt about three related issues: the reliability of the brain-death (BD) criterion as a standard for determining death; the degree of rigor with which such determinations are made; and whether the medical establishment is not dangerously biased toward organ harvesting as opposed to long-term, potentially regenerative care for persons who meet the loosest standard for BD.

Until recently, the general consensus had been that BD—the irreversible and complete cessation of all brain function—constituted a sufficient criterion for establishing that a human individual has, in fact, died. However, the consensus surrounding BD has been challenged of late. Opponents, most notably Dr. Alan Shewmon, Chief of the Department of Neurology at Olive View Medical Center, UCLA, point to cases of individuals who have been declared brain-dead and have “survived” with the aid of artificial respiration/nutrition for weeks, months, and even years. Shewmon has published a controversial study of such survivors that has posed a diametric challenge to the neurological standard for determining death. In testimony before the President’s Council on Bioethics, Shewmon observed:

Contrary to popular belief, brain death is not a settled issue. I've been doing informal Socratic probing of colleagues over the years, and it's very rare that I come across a colleague, including among neurologists, who can give me a coherent reason why brain destruction or total brain non-function is death.

There's always some loose logic hidden in there somewhere, and those who are coherent usually end up with the psychological rationale, that this is no longer a human person even if it may be a human organism.

The American Academy of Neurology (AAN) established a set of guidelines for the determination of brain death in 1995 which remain a point of reference for many hospitals and physicians throughout the country. The AAN guidelines lay out diagnostic criteria for making a clinical diagnosis of BD. They note that the three “cardinal findings” of brain death are coma or unresponsiveness, absence of brainstem reflexes, and apnea (the cessation of breathing), and they outline a series of clinical tests or observations for making these findings. The guidelines also note that certain conditions can interfere with clinical diagnosis and recommend confirmatory tests if such conditions are present. Finally, the guidelines recommend repeated clinical evaluation after a six-hour interval (noting that this time period is arbitrary) using a series of confirmatory tests described in the document.

A recent study published in the journal Neurology, noting widespread variations in the application of the AAN guidelines, drew these conclusions:

Major differences exist in brain death guidelines among the leading neurologic hospitals in the United States. Adherence to the American Academy of Neurology guidelines is variable. If the guidelines reflect actual practice at each institution, there are substantial differences in practice which may have consequences for the determination of death and initiation of transplant procedures.

Such variability in applying a uniform criterion of BD, in addition to the growing number of survivors of BD, must give us pause. And so must the growing societal pressure to donate organs—notwithstanding the genuine hope that organ transplants hold for millions of people.

That pressure arises from the fact that the numeric gap between available organ donors and patients who need organ transplants continues to grow every year. A recent survey indicated that in 2006 over 98,000 organs were needed for patients on US transplant waiting lists.

Complicating matters, the number of available organs through donation from brain-dead patients has remained stable for a number of years. And while organ donor cards and growing use of advance medical directives have occasioned a slight increase in the number of cadaveric transplants, more organs are needed than are currently available.

Consequently, transplantation services are pressed to find new and ethically acceptable ways to increase the number of available organ donors. Some advocates of a less rigorous application of BD have gone so far as to openly consider the moral licitness of removing organs from anencephalic newborns, and from persons diagnosed as being permanently comatose or in a permanent vegetative state (PVS). And some members of the medical profession believe the solution lies in redefining BD so as to make it less restrictive.

One such approach would define BD (and consequently death itself) as cessation of all higher level (cortical) brain functioning—even if there were activity in other areas of the brain. Such was the proposal suggested by Dr. Robert Veatch of the Kennedy Institute of Ethics at Georgetown University in his testimony before the President’s Council on Bioethics two years ago. “We could shift to a new definition of death that would classify some of these permanently comatose persons as dead,” affirmed Veatch. “In fact, a large group of scholars now in rejecting a whole brain definition has [endorsed] … a higher brain definition where some of these patients would be legally classified as dead.”

“But would the ordinary citizen accept such a definition?” he then asked. In response, he pointed to a study done at Case Western Reserve University looking at the opinions of ordinary citizens in the State of Ohio. The results were startling. Of the 1,351 citizens who participated, 57% considered the person in permanent coma to be dead, and 34% considered the person in a permanent vegetative state to be dead. Furthermore—again on Veatch’s interpretation of the data—with regard to the propriety of harvesting organs, in the case of a more rigorous application of the BD criterion, 93% thought it acceptable to take organs. But in the case of permanent coma, 74% would procure organs, and even in the case of PVS, fully 55% would procure organs.

Veatch ended by exhorting those present: “I suggest that it's time to consider the enormous lifesaving potential of opening the question about going to a higher brain definition of death or, alternatively, making exceptions to the dead donor rule.”

Food for thought—and potentially for nightmares.

Admittedly, proponents of BD would question, in cases of survival after a BD determination such as that of Zack Dunlap, whether the criterion was applied strictly enough when the patient was declared brain-dead. That’s a legitimate question.

But research like Dr. Shewmon’s and the growing list of survivors of BD are generating uneasiness not only in the medical field but also among potential organ donors who fear succumbing to some physician’s premature diagnosis of death. It seems to me that such uneasiness is warranted, and that the time has come for a much more rigorous moral and medical evaluation of the propriety of the BD criterion.
________

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Tuesday, April 1, 2008

Morality and the Emerging Field of Moral Psychology

Is there such a thing as a ‘moral instinct’?

In my March 11th column I began an exploration of some of the postulates of the emerging field of moral psychology. I would like to finish those reflections here by offering a more extensive critique of a lengthy article that ran in the New York Times Magazine in January entitled “The Moral Instinct,” authored by Harvard psychologist Steven Pinker.

Moral psychology is an emerging field of research that delves into questions that have long captivated the curiosity of a broad array of disciplines in the Arts and Sciences: To what extent do our own bodies influence and determine our moral judgments and behavior? Are there genetic predispositions for everything from altruism to serial killing? How are we to make sense out of the uniquely human endeavor of formulating moral judgments? Can an understanding of neurobiology and genetics shed any light on this?

These are not only valid questions, they are important and fascinating ones.

Steven Pinker, today a dominant voice in this field, suggests in his Times Magazine essay that moral psychology is going to allow us to get at “what morality is.” His essay is an extensive exposition, in layman’s terms, of one of the fundamental theses of moral psychology, namely, that there is a “distinctive part of our psychology for morality” and that when this psychological state is turned on, we begin to moralize. Pinker then notes two hallmarks of this moralizing psychological mindset: first, moral rules invoked in the state of moralization are claimed to be universal; second, persons who transgress those rules are considered punishable.

Now, I would suggest that Pinker and colleagues have not happened upon anything particularly remarkable here. If truth be told, they are simply noting a plainly obvious aspect of human nature: human beings moralize, we make value judgments. Cross-culturally and diachronically, human beings express an understanding of right and wrong behavior which we reward and punish respectively. That’s why I cannot help but agree with Pinker when he affirms:

In fact, there seems to be a Law of Conservation of Moralization, so that as old behaviors are taken out of the moralized column, new ones are added to it. Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer.

Pinker is also absolutely correct when he affirms that most people do not engage in moral reasoning, but in moral rationalization. “They begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.” Indeed, principled and painstaking moral reasoning is today an ever more refined and atypical art.

But I would suggest that Pinker and colleagues err in adducing this paucity of sound moral reasoning as evidence that morality is inherently unreasonable, that its sources are to be found exclusively in non-rational, non-cognitive depths of evolution-driven human psychology.

The apparent gap between (presumably irrational) moral convictions and their corresponding (rationalized) justifications does not constitute evidence for Pinker’s repeated, unargued assertion that we indeed have a “moral sense”, that is, a set of built-in moral categories which are the product of our own psychological evolution. Pinker and colleagues, by the way, are certainly not the first thinkers to suggest that human beings make moral determinations based on the operation of something they call a moral sense. As a putative explanation of morality, moral sense theory dates back at least to the mid-18th century.

The upshot of Pinker’s essay, however, is that after going to extreme lengths to suggest precisely this—that our experience of morality is ultimately anchored in this “figment of the brain” he calls the moral sense—he will end by denying this very premise or at least severely qualifying it. I’ll get to that in a minute.

Notwithstanding my critique, there is actually plenty of interesting material in Pinker’s essay, just as there are plenty of good and valuable insights to expect from the field of moral psychology—much or all of which will be perfectly compatible with, or at least accounted for within, Aristotelian-Thomistic natural law theory.

For instance, Pinker spends much of the essay speculating on the significance of a set of instinctive moral intuitions which researchers suggest are shared in diverse ethnic and cultural traditions the world over. Pinker observes that such findings appear to lend credence to the theory that humans are hardwired with a “universal moral grammar”. Explaining an analogy from the political philosopher John Rawls, Pinker notes that just as the linguist Noam Chomsky suggested that infants are outfitted with a “universal grammar” enabling them by default to analyze speech by way of built-in categories of grammatical structure, so too, an innate universal moral grammar “forces us to analyze human action in terms of its moral structure, with just as little awareness.”

Pinker goes on to explain how those moral structures could consist in such things as the following: the impulse to avoid harming others; the propensity toward altruism and fairness; our inclination to respect authority; our avoidance of physical filth and defilement as well as our avoidance of potentially risky sexual behavior; our willingness to share and sacrifice without expectation of payback.

Some moral psychologists have reduced these to a specific set of categories—namely ‘harm’, ‘fairness’, ‘community’, ‘authority’ and ‘purity’—which they understand to work as fundamental building blocks of our moral experience. These five categories, explains Pinker, “are good candidates for a periodic table of the moral sense not only because they are ubiquitous, but also because they appear to have deep evolutionary roots.” He adds, for good measure—and again without argumentation—that these five moral categories are “a legacy of evolution.”

Now even though, reading between the lines, we discover that Pinker must be quite convinced that these categories are the product of evolution (“figments of our brain”), he nonetheless maintains that “far from debunking morality, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend.” By this he appears to suggest we should simply learn how to cope with our inborn moral sense, quirks and all (such as our taboos against homosexual attraction, and our “yuck” response to the prospect of things like human cloning), leveraging our understanding of both its virtues and defects, in order to cobble together a kind of shared set of moral values and societal prerogatives we can all live with. Indeed, affirms Pinker, we must get around the quirkiness of that built-in moral sense because it can potentially “get in the way of doing the right thing.”

Now, that begs a huge question, doesn’t it?

If not in terms of our built-in moral sense, then in virtue of what exactly is Pinker proposing that we can know “the right thing” to do? His affirmation can only make sense—contradicting what would appear to be a core assumption of his article, namely, that the moral sense is all we’ve got—if there is some other moral agency in us with which we can judge, refine, correct, or ignore our built-in moral sense.

To be sure, it is entirely plausible that we are endowed with something like a moral sense, with certain built-in predispositions toward empathy, altruism and the like, and that we can even discover something like these behaviors in non-human primates. Natural law theory can accommodate this rather easily, and on strikingly similar grounds to those on which Pinker holds his evolutionary moral sense suspect: only human reason can function as the immediate criterion for adjudicating the reasonableness of such built-in categories and tendencies in every given moral scenario in which they would come into play.

But if that’s the case, then the moral sense, as a figment of our brain, is as fascinating as it is impotent to explain all that Pinker purports it to explain about morality. Indeed, if I understand Pinker correctly at the end of his essay, he is affirming that, when the day is done, moral determinations will be guided by reason, and not by any moral sense at all. So, why, I must ask, didn’t he just say that in the first place?

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Tuesday, March 18, 2008


The “Vagina Monologues” to be presented (again) at Notre Dame

Last week I began a two-part column exploring and critiquing some of the postulates of the emerging field of ‘neuromorality.’ I will get back to that after Easter. I felt compelled to interrupt that topic, however, because I was so disturbed to hear that The Vagina Monologues would once again be presented (for a sixth year, in fact) at the University of Notre Dame. [I wish to note, however, that this is not altogether unrelated to the topic of how the brain relates to morality.]

First of all, and for the record, here are some facts as I understand them from a friend who teaches at Notre Dame. University president Fr. John Jenkins has set the following conditions for presentation of the play:

  • The play may only be presented if sponsored by an academic department. This year’s presentation is sponsored by the departments of sociology and anthropology; last year, no department sponsored it, so it had to be presented off campus;
  • The presentations may only be done in an academic setting (a classroom, not a theatre);
  • Immediately following each of the six scheduled presentations, there will be a mandatory panel discussion during which Catholic doctrine will be clearly expounded on the issues raised by the play;
  • The presentations shall not be aimed at fundraising for any purpose.

While noting that Fr. Jenkins would clearly be within his rights, as president, simply to disallow the play, I also recognize that he has his own prudential reasons for allowing it. I also hasten to add that this six-year-old saga should not be construed as a bad reflection on Notre Dame as a whole. I lectured at the Law School there in January and I have friends on the faculty. I must say I was delighted with what I saw and experienced on campus: wonderful things are happening at Notre Dame.

That said, when I first heard news late last week about this year’s presentations of the Monologues on campus, I was immediately reminded of that wonderful quote from G.K. Chesterton:

The modern world is not evil; in some ways the modern world is far too good. It is full of wild and wasted virtues. When a religious scheme is shattered (as Christianity was shattered at the Reformation), it is not merely the vices that are let loose. The vices are, indeed, let loose, and they wander and do damage. But the virtues wander more wildly, and the virtues do more terrible damage. The modern world is full of the old Christian virtues gone mad.

Now, President Jenkins was quoted as saying, in support of his decision to allow the play on campus again this year, that:

It is an indispensable part of the mission of a Catholic university to provide a forum in which multiple viewpoints are debated in reasoned and respectful exchange—always in dialogue with faith and the Catholic tradition—even around highly controversial topics.

Well, amen to that.

But here, is it not the case that the virtue, if you will, of ‘reasoned and respectful exchange’ has gone a little mad? I don’t deny, of course, that it’s possible to have a reasonable discussion even about different forms of moral depravity. But no matter what the topic, reasonable exchange of thought presupposes many things, among them a prudent setting, and a morally inoffensive presentation of the facts. Fr. Jenkins has made an effort to supply the former in requiring that the play be presented in an academic setting, but the latter condition remains unmet.

Now, how would we conduct a reasonable dialogue, say, about the exploitation of women through pornography? By gathering faculty and students together in a classroom to view and discuss blowups of Playboy centerfolds? Without having viewed the Monologues myself, I know enough about the play to know that it is crudely offensive in a similar way and renders the very idea of a substantive, genuinely reasoned discussion preposterous.

It is therefore a striking instance of serendipity that not three weeks after the presentation of the Monologues on the Notre Dame campus, Pope Benedict will be meeting (as reported last Friday by The Washington Post) with more than 200 top Catholic school officials from across the country.

What can we expect the Pope will say at the meeting? I expect his remarks will echo much of the substance of his papal address at the University of Regensburg (about which I’ve written in previous columns). Which is to say, Pope Benedict will likely make affirmations to the effect—and to echo the words of Fr. Jenkins—that “an indispensable part of the mission of a Catholic university is to provide a forum for reasoned and respectful exchange of ideas.” And no matter what else he might say, we have here the very reason why any institute of higher learning should refrain from making a mockery of reasoned discourse, and refuse demands for anti-cultural trash such as the Monologues.

To be sure, sponsorship of the Monologues is not by a long shot the only or even most egregious instance of unreasonable nonsense being passed off as culture at Catholic or secular universities. Nonetheless, it is central to the mission of intellectual stewardship that faculty and administrators at institutes of higher learning muster the backbone to say ‘no’ to unreason, and to say ‘no’ when necessary to minorities or majorities, no matter how vocal or how vicious.

* * *

And turning now to a superlatively more worthwhile topic, to all those taking the time to read this column today, I want to extend my warmest best wishes and the assurance of my prayers for a very blessed Holy Week and celebration of Easter.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Tuesday, March 11, 2008

Morality as Genetic Predisposition and Neurobiology

A look at the emerging field of moral psychology

Within the intersecting disciplines of psychology, neurobiology, philosophy of mind, ethics, and cognitive science, a new field of inquiry has emerged of late. Although it goes by different names, including such recent coinages as ‘neuromorality’, the field is perhaps best referred to as moral psychology. I have touched on this topic in a previous column, but given the recent media fixation on this topic, I thought it was time to take a closer look.


One might consider moral psychology as an emerging field of research that delves into questions that have long captivated the curiosity of a broad array of disciplines in the Arts and Sciences, some for several centuries: To what extent do our own bodies influence and determine our moral judgments and behavior? Are there genetic predispositions for everything from altruism to serial killing? How are we to make sense out of the uniquely human endeavor of formulating moral judgments? Can an understanding of neurobiology and genetics shed any light on this?


For many researchers in this field, such questions boil down to the challenge of mapping out what some would call the "neuro-anatomy of moral judgment." Across the country, moral psychologists, working in tandem with behavioral psychologists, evolutionary biologists and persons in related fields, believe they are hot on the trail of figuring out how humans are "wired for morality."


As a token example of the work in this field, we could note the investigations of Harvard University psychology professor Marc Hauser, author most recently of Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (2006). Hauser was featured last May in the Harvard University Gazette On-line:


In a talk April 26, psychology professor Marc Hauser argued that our moral sense is part of our evolutionary inheritance. Like the “language instinct” hypothesized by linguistic theorist Noam Chomsky, the capacity for moral judgment is a universal human trait, “relatively immune” to cultural differences. Hauser described it as a “cold calculus,” independent of emotion, whose workings are largely inaccessible to our conscious minds.


Hauser, along with other leaders in this emerging field, would have us conclude that morality is ultimately explainable in empirical terms: genetic evolution and inheritance, brain anatomy, neuronal activity, mixed with our environment and education—this and little more.


And that is where the trouble with neuromorality begins. It is indeed unfortunate that the pioneers of the new moral psychology—given all the potential for truly breathtaking and worthwhile insights which their discipline can provide—appear to be all too ready to succumb to that intellectual hubris that would reduce the broader whole of understanding to one very narrow vantage point. And this is already leading to untenable extremes.


A case in point is a recent lengthy essay that ran in the New York Times Magazine entitled “The Moral Instinct,” authored by another Harvard psychologist, Steven Pinker. I will engage in a lengthier critique of Pinker’s article next week, but just to give you a better taste of some of the unfortunate excesses of neuromorality, allow me to share and comment on the following amazing paragraphs. Writes Pinker:



The gap between people’s convictions and their justifications is also on display in the favorite new sandbox of moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says ‘yes’.


Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge? Both dilemmas present you with the option of sacrificing one life to save five, and so, by the utilitarian standard of what would result in the greatest good for the greatest number, the two dilemmas are morally equivalent. But most people don’t see it that way… When pressed for a reason, they can’t come up with anything coherent, though moral philosophers haven’t had an easy time coming up with a relevant difference, either (p. 35, emphasis my own).


What this is supposed to show is that our deepest convictions about right and wrong are not based on reasons, but on deep-seated tendencies, hardwired into our brains by our DNA and evolutionary history. The fact that people have a hard time coming up with reasons for their moral convictions is adduced as evidence either that there are no reasons, or that any reasons given are utterly relative and may or may not reflect the deeper workings of our DNA-driven psychological dispositions.

Pinker’s interpretation of the Trolley Problem—and presumably that of most people in the survey—fails to distinguish between intending to harm and allowing a foreseeable harm on reasonable grounds. The former is immoral; the latter might constitute a licit option depending on the case. Which is to say, the natural law tradition clearly provides reasons why it might be licit to pull a lever and divert the trolley onto a spur (the first case), and reasons why it would never be licit to throw the fat man down onto the tracks (the second case). The fact that persons surveyed had trouble articulating reasons for their moral convictions should not suggest that morality is ultimately irrational—determined within the deep recesses of our genetically predisposed subconscious—but simply that most people today have little or no formal training in ethics, let alone natural law theory. But more on this next week.

To conclude, the field of moral psychology is in many ways fascinating. It will undoubtedly make many valuable contributions not only to our philosophical understanding of human nature and morality, but also to our cultural considerations about how to educate our young people to live sound moral lives. It will do these a grave disservice, however, if moral psychologists aim to reduce entire fields of human understanding (in this case moral knowledge) to "nothing but" the stuff of neurological function and evolutionary biology.


Tuesday, March 4, 2008

McNihilism goes to church (when it feels like it)

A new study finds religious affiliation in the U.S. to be “extremely fluid”

A couple weeks ago I was reflecting in this column about the American appetite for “facts” amidst our growing anxiety over how little we really seem to know and understand about our world. I suggested that this lust for the easy knowledge-chunk and the inside story was a symptom of cultural ill-health, that it could indeed be “the very dynamic that perpetuates and aggravates the McNihilism that is eating away the very core of our culture, and even our mental health.”

By ‘McNihilism’, I mean that ubiquitous, routine, and largely subconscious, brand of Joe-on-the-street nihilism lived by millions of Americans.[1] It broadly describes the situation of a person who feels quite incapable of bringing ‘transcendence’, ‘purpose’, or ‘meaning’ in life into sharp focus—and who is largely uninterested in doing so anyway. It means the more or less conscious acquiescence to the perception that there is no overarching ‘meaning’ or ‘truth’ out there; no one specific, ultimate reality that will fulfill us in life; no ultimate point of reference for explaining ‘right’ and ‘wrong’; no God, no great, unalterable truths. Only ourselves.

Americans, of course, along with most human beings, want to live with their feet firmly planted on as many certainties (in addition to death and taxes) as possible. Uncertainty is naturally disconcerting. That sense of uncertainty, however, is normally not alleviated by a steady diet of “facts” for one simple reason: a steady flow of information, bereft of an overarching sense of meaning in which to assemble our facts, is about as useful as bricks without mortar. Additionally, too many of our “facts” are nothing more than snippets of hearsay, conjecture, inaccuracy, or illogic. Lusting after factoids—whether it’s the latest gossip about Britney, or the latest numbers on Obama, or the latest theory of the universe—doesn’t help ease that sense of the ground endlessly shifting under one’s feet. One way of dealing with that unease, of course, is simply to get used to it, and get over it, and accept that all is in flux, all is relative, paradigmatic and what academicians like to call “perspectival.”

It comes then as no surprise that so many Americans—as a groundbreaking new study suggests—appear to be embracing sexier brands of McNihilism in the forms of “spirituality,” “scientism,” “secularism,” and “agnosticism.”

In a survey focused on the religious affiliation of the American public, with a sample of 35,000 American adults queried between May 8 and August 13, 2007, the Pew Forum on Religion and Public Life discovered the following:

  • 44% of adults have switched religious affiliation, or moved from being unaffiliated to affiliated, or dropped any religious affiliation whatsoever;
  • 28% of adult Americans have abandoned their religious affiliation of birth to move on to another religion or no religion at all;
  • Of adults between ages 18-29, 25% (one in four) describe themselves as being unaffiliated with any particular religion;
  • 12% of the entire adult population willingly describes its religious affiliation as “nothing in particular.”[2]

Of course, the meaning of these statistics will generate much debate for years to come. As was noted in a Washington Post article:

Some [scholars] think that secularism is underreported as people may check a box correlating to a faith group without actually believing its tenets or following its practices. Others think the growth of the unaffiliated (sometimes called "religious nones," because they check "none" when asked their faith on polls) disguises the number of people who consider themselves "spiritual but not religious."

Tom Smith, director of a major sociological survey at the University of Chicago, estimated that about 25 percent of U.S. adults think of themselves as spiritual but not religious. "Some trends show there is less support for organized religion but either a steady or, by some measures, rising support for personal religious beliefs," Smith said.


The idea of being “spiritual but not religious” and the evident fluidity of American religious affiliation raise a serious concern about the shallowness of the religious experience of Americans. And they should. Yet, I think there is a deeper concern they should trigger. This shiftiness, the existential inability to persevere in a religious tradition, all the religious window-shopping that goes on in America, all the doctrinal cafeteria-style picking and choosing that goes on within Christian communions of late, is too often nothing but a further manifestation of the pervasiveness of McNihilism.

This is perhaps nowhere more evident than within Christianity. The last two centuries have seen Christianity ravaged by ideologies posing as theology which have proposed the steady regression from Ecclesial Christianity, to Christianity without Church, to Christianity without Christ, to Christianity without doctrine—a Christianity, that is, which can embrace all belief systems because it has been virtually emptied of any positive doctrinal content (save for the universal injunction that Christians should be “nice”). And last of all, today, we have “spirituality” without Christianity.

This emptying out of Christianity, its loss of form (and consequently the nihilism which it embodies), is not a recent phenomenon; it was already being decried in 19th-century Germany by that most famous victim of nihilism himself, Friedrich Nietzsche. Speaking of the theological tendencies of the Germany of his day (the following was penned in 1873), Nietzsche displays an air of exasperation that presages our own:

What are we to think if we find Christianity described by the “greatest theologians of the century” as the religion that claims to “find itself in all real religions and some other barely possible religions,” and if the “true Church” is to be a thing “which may become liquid mass with no fixed outline, with no fixed place for its different parts, but everything to be peacefully welded together” — what, I ask again, are we to think?[3]

Of course, we should not be led to think that all those who have bought into the neutering of Christianity or who engage in religion-shifting are nihilists. Many have been deeply disturbed by the prospect of the nihilist nightmare and are trying to evade it by searching out a “true” religion.

My fear, however, is that far more, if not most, have indeed succumbed to the nightmare. For them, shifting from one religion to another, then to no religion, then back again means merely to acquiesce to the deeper truth that there is no transcendent truth to which any religion could direct us anyway. And that is why the McNihilist relegates religion to the status of just one more personal preference.

In sum, my concern arises not because of religious change as such, but because of what I perceive to be one of the root causes of that change. If such change were rooted in a serious and widespread religious search motivated by a sense of one’s duty to seek the truth, this would not be a bad thing.

However, such fluidity is more clearly rooted in treating religion, as the late sociologist Philip Rieff described it, “therapeutically.”[4] For far too many Americans, emotional self-contentment—not truth—is the real factor behind religious affiliation. When one set of religious symbols, practices, and forms no longer “feels” good, the McNihilist moves on and tries on another set of religious accoutrements to see if they “fit” or “feel” better. Whether such symbols are true or not is a question that simply does not arise. After all, what is true for the McNihilist is simply whatever works wherever he or she is—not something that makes claims upon us regardless of our preferences. What is there, asks the McNihilist, beyond personal preference anyway?

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.



[1] The term is used in a manner loosely analogous to a term coined by political theorist Benjamin Barber in his 1996 best-seller Jihad vs. McWorld. ‘McWorld’ here becomes a shorthand term for western-style globalization.

[2] And then there’s the heartbreaker for Catholics:

Catholicism has experienced the greatest net losses as a result of affiliation changes. While nearly one in three Americans (31%) were raised in the Catholic faith, today fewer than one in four (24%) describe themselves as Catholic. These losses would have been even more pronounced were it not for the offsetting impact of immigration.

The study goes on to point out that it is immigration (along with a constant trickle of converts) that keeps the Catholic share of the U.S. adult population at about 25%.

[3]Friedrich Nietzsche, The Use and Abuse of History, translated by Adrian Collins (Indianapolis: Bobbs-Merrill Company, 1957), 43.

[4]See Philip Rieff, Triumph of the Therapeutic (New York: Harper & Row, 1966).

Tuesday, February 19, 2008

Reason in the Public Square, Part II

When public discourse on moral issues is good for the culture and when it’s not

As I noted last week, our current overabundance of argumentation in the public square—on myriad topics—suffers from two endemic flaws, both of which have been masterfully scrutinized by the philosopher Alasdair MacIntyre. The first is that we too often encounter these arguments entirely untethered from the moral traditions which generated them. The second is that we too often find ourselves—as has been the case for the past three centuries in the west—unable to adjudicate between rival arguments as to which is right, overall, which wrong, overall, which true and which false in a universalizable sense.

I want first to unpack the meaning of these two flaws and then examine how they occasion a further flaw, namely, the assurance that ‘public reason’, as it is sometimes called, can give rise to a viable public consensus on difficult and contested moral issues.

Arguments out of context

As Dr. MacIntyre has so cogently observed, the interminable nature of moral disputes occasioned by the plethora of arguments in the public square reveals how those engaged in moral “converse” are often not actually conversing at all, but simply enunciating to each other a series of premises entirely untethered from their respective—and oftentimes quite diverse—moral traditions. Here there is no real conversation, no exchange of argument, but rather simultaneous moral monologues.

Living as we do in a culture which is generally at a loss to make sense out of competing moral claims, we would do well to point out the obvious, namely, that our opposing arguments are informed by starkly contrasting accounts of man and the moral life, starkly contrasting moral traditions. One person’s assertion of a ‘right to terminate pregnancy’ is based on a conception of rights as consisting of the free exercise of preferences within a radius of self-determination as established in law by a polity; my insistence on the ‘right to life of the unborn’ is based on a conception of right as a protection owed to an individual human being as a response to that individual’s intrinsic good, and for the flourishing of that human individual. When we fail to point out the de-contextualized nature of our arguments, our argumentation all too easily feeds the cultural trend of interpreting our exchange as merely the clash of legitimate, but opposing preferences (“You oppose abortion, I do not; so, keep your preferences to yourself”).

Moral argumentation untethered from an overarching conception of what constitutes right human living—normally only intelligible in light of a shared view of human flourishing—might give the impression of good health, but will in fact be intrinsically flawed, not to say decadent.

Which argument is the best?

That same plethora of arguments also reveals our inability to adjudicate between competing moral claims and theories in a meaningful and definitive way. This enduring malady of post-Enlightenment moral culture was the topic of MacIntyre’s now classic work After Virtue.[1] The proliferation of argument without any socially dominant, reasonable and principled manner of adjudicating between competing arguments—especially in those arguments which endure and are protracted for decades on end—only reinforces the popular impression that moral speak is simply not susceptible to rational adjudication, that morals are not the stuff of rational consideration, but simply a matter of personal preference.

Can ‘public reason’ save the day? The myth of moral consensus

Attempts to save our culture from this state of affairs have turned up precious little in the realm of moral philosophy. As I noted in my January 30 column, some seek a solution in the notion of ‘public reason’.

Now, this notion is not an entirely bad foothold for nourishing the moral discourse which is essential to our democratic way of life. But as I noted in that column, confusion today over what type of assertions should be deemed “reasonable” and what should be admitted into public reason is at the very heart of our confusion about liberty and liberal democracy.

Moreover—and this is the third flaw I mentioned at the outset—it is the notion of public reason that underlies a false and confused confidence in our presumed ability to arrive at “moral consensus” on difficult moral issues.

Let me illustrate. Sometimes a solution to moral disagreement is proposed (based, again, on a supreme confidence that “public reason” holds the key to conflict resolution) which points us toward identifying the “core values” each side holds in common with regard to the disputed issue at hand. These putative “core” or “shared” values would then constitute a supposed set of moral maxims to which opposing sides could consent.

I would suggest, however, following the thought of MacIntyre, that underlying those maxims we will all too often discover rival versions of morality, whose internal logic yields rival conclusions about how those maxims are to be applied; and these rival versions of morality derive, in turn, from incommensurable traditions of moral enquiry, an incommensurability rooted in irreconcilable conceptions of ‘the good life.’ Of course, this is all too often overlooked when minds are swayed by the compelling, congenial and soothing language of “values.”[2]

In sum, yes, there is plenty of argumentation going on in the public square.[3] But argument alone cannot make for a thriving moral culture. Our argumentation in the public square must go further: it must show the moral tradition from within which our premises are drawn, and show furthermore why that moral tradition, and the premises it supports, is superior to competing arguments and accounts of morality. Arguments alone only feed public frustration and disgust over our (apparently) interminable moral dissonances. They further feed the confused notion that morality is simply a matter of sheer irrational preference.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.




[1] In an article written close to the time of publication of After Virtue, MacIntyre sums up the book’s central thesis in these terms:

Any particular piece of practical reasoning has rational force only for those who both have desires and dispositions ordered to some good and recognize that good as furthered by doing what that piece of practical reasoning bids… Such a community is rational only if the moral theory articulated in its institutionalized reason-giving is the best theory to emerge so far in its history. The best theory so far is that which transcends the limitation of the previous best theory by providing the best explanation of that previous theory’s failures and incoherence… and showing how to escape them.

In "Moral Arguments and Social Contexts: A Response to Rorty," Journal of Philosophy, 80 (1983), 590-91. Reprinted in Hermenteutics and Practice, Robert Hollinger, ed. (Notre Dame, Indiana: University of Notre Dame Press, 1985), 296.

[2] Those engaging in moral discourse, specialists as well as non-specialists, too often do so oblivious to the incommensurability of the traditions of moral reflection and inquiry from within which their moral views are generated. Even holders of one and the same moral view are seldom aware that they can hold that view from any one of several incommensurate moral traditions. For example: Are the persons advocating marital commitment as enduring until the death of one of the spouses aware that this view can be advocated from within Thomistic natural law, Kantian deontology, or even a Rawlsian neo-contractarianism? Agreement on the moral view is often accepted, heedless of the vast disagreement on the process of moral reasoning, particularly on the premises that generate the particular moral view in question.

[3] To mention just one recent and provocative example, see the exchanges between Robert George, Christopher Tollefsen, and William Saletan over the moral status of the human embryo.

Wednesday, February 13, 2008

Reason in the Public Square, Part I

When public discourse on moral issues is good for the culture and when it’s not

In a marvelous little book written twenty-three years ago entitled Amusing Ourselves to Death, the writer, educator, and communications theorist Neil Postman theorized—on the very eve of the internet-based media and communications revolution—that Americans were (already) so overloaded with (televised) information “that the content of much of our public discourse has become dangerous nonsense.”[1] A valid observation in many respects in 1985, it is even truer today. Toward the end of the book, Postman observes:

[Everyone] in America…is entitled to an opinion, and it is certainly useful to have a few when a pollster shows up. But these are opinions of a quite different order from eighteenth or nineteenth-century opinions. It is probably more accurate to call them emotions rather than opinions, which would account for the fact that they change from week to week, as the pollsters tell us.

Those of us who like to read books like Postman’s are often wont to explain that what regularly passes for ‘opinion’ or ‘argument’ in the public square is little more than knee-jerk emotional reaction and the expression of blind personal preference. We often point to ‘emotivism’ or, more broadly, ‘moral relativism’ as the root causes of these moral ailments.

I recently came across a pair of essays by Robert T. Miller, assistant professor at the Villanova University School of Law, which challenged this perhaps simplistic analysis. (You can read Miller’s essays at the First Things blog ‘On the Square’: part I and part II.)

Is the problem really that ‘relativism’ or ‘emotivism’ has occasioned the perceived dearth of reasoned discourse on moral matters in the public square? Miller makes the excellent point that commentators on cultural health of many stripes (he is off the mark, however, in exclusively faulting “Catholic thinkers”) wrongly bundle many principled moral arguments into the category of “relativism”, arguments “none of which … involves a wholesale rejection of rational argumentation on normative issues.”[2] He rightly concludes by insisting:

Religious believers who are committed to participating in the public square need to understand these arguments and be prepared to answer them. They cannot escape this hard work by invoking the bogeyman of ethical relativism.

Miller makes a further valid observation when he notes:

Generally speaking, our society is more concerned with producing and responding to arguments than probably any other in the history of the world. Whether the issue is abortion or gay rights, tax policy or the trade deficit, global warming or third-world debt, everyone seems ready to adduce arguments in support of some position or other. In learned periodicals like the Journal of Philosophy or the Harvard Law Review, on the editorial pages of the New York Times and the Washington Post, in the rough-and-tumble opinion journalism of National Review and The Nation, in the postings of bloggers and the ramblings of barroom blowhards, we find nothing but arguments about morals and politics.

Postman was absolutely right when he foresaw that the TV sound-bite would in many ways disqualify television as a medium for the serious exchange of thought. He did not, however, foresee the advent of the blogosphere or other e-media that have actually enhanced, as Miller observes, our ability to engage in reasoned moral discourse.

One might, nevertheless, be tempted to infer from Miller’s sanguine account of things that, because there is plenty of argumentation going on in the public square, everything is fine and dandy with the state of our moral culture. Such a conclusion—I hope this is not Miller’s contention—would be breathtakingly naïve. True, our culture is awash in what passes for “argumentation” on pressing moral issues. But should we be so quick to discover here a putative sign of moral health? I think the matter requires much closer examination.

As Alasdair MacIntyre has cogently observed, today “on every substantive social and moral issue intellectuals appear on opposing sides.” And whether it comes in the form of formal argumentation or the mere assertion of cultural convention, “there are too many rival conventions, too many conflicting anecdotes; and the repetition of assertions and denials does not constitute conversation.”[3] An abundance of argumentation in the public square, that is, does not in and of itself guarantee a healthy moral fabric for contemporary culture.

In fact, our current overabundance of argumentation suffers from two endemic flaws which can be, over time, potentially lethal for Western liberal democracy as we know it. Both of these have been masterfully examined by MacIntyre. The first is that we encounter these arguments entirely untethered from the moral traditions which generated them. The second is that, finding these arguments entirely out of their relevant contexts, we have proven for centuries to be unable to adjudicate between them. Next week, I will explore these two flaws in greater detail, and examine how they give rise to another profound error, namely, the assurance that ‘public reason’, as it is sometimes called, can give rise to a viable public consensus on difficult and contested moral issues.

For now, let’s just say that even poor argumentation in the public square is preferable to none at all. We can never tire of promoting a form of civility which demands coherent argumentation, namely, that conclusions follow validly from principled premises. After all, western culture in the tradition of liberal democracy has, by and large, remained faithful to its roots in the democratic ideal born in Athens two and a half millennia ago, favoring moral discourse that employs reasoned argumentation over moral speak which simply gives utterance to emotional aversions or preferences. Would I be too naïve to think that most Americans still believe that moral converse in the public square should be based on the use of human reason? I hope not.



[1] Postman’s thesis in this book and other essays is that the media we use to express ourselves in cultural contexts has an astounding impact on the content and limits of what we are able to express. Writes Postman:

For although culture is a creation of speech, it is recreated anew by every medium of communication—from painting to hieroglyphs to the alphabet to television. Each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility… But the forms of our media… are rather like metaphors, working by unobtrusive but powerful implication to enforce their special definitions of reality. Whether we are experiencing the world through the lens of speech or the printed word or the television camera, our media-metaphors classify the world for us, sequence it, frame it, enlarge it, reduce it, color it, argue a case for what the world is like.

[2] As one example, he points to a perceived tendency among Catholic moralists to lump the moral theory of Consequentialism into the broader category of “moral relativism.” Consequentialism is a moral theory which proposes to guide moral determinations based on our putative ability to perform a kind of moral calculus that would weigh the potential negative vs. positive outcomes (consequences) of our actions. “Right” moral answers would be based on that calculus. Miller is quite correct to insist that “[Consequentialism], though mistaken, contains nothing that threatens the rational discussion of normative questions.” Though emerging from false premises, consequentialist arguments can be quite coherent, as can, more broadly, the entire set of theories that, along with Consequentialism, fall under the umbrella term of Proportionalism. Proportionalism, in fact, began as a reaction against more subjectivist tendencies in moral theory and was motivated by the desire to get back to a kind of moral reasoning that was principled and objective.

[3] "Moral Arguments and Social Contexts: A Response to Rorty," Journal of Philosophy, 80 (1983), 590-91. Reprinted in Hermenteutics and Practice, Robert Hollinger, ed. (Notre Dame, Indiana: University of Notre Dame Press, 1985), 296.