Tuesday, April 29, 2008

Developmental Biology

What it is and why you should know about it.
DATE: April 29, 2008
TIME: 1:00 PM EST

Scientists have grown a human ear on the back of a laboratory mouse using cartilage cells from a cow. In the peritoneal cavity of a mouse, scientists have coaxed a severed human fetal limb to grow into a tiny human hand. Scientists have also confected a human jaw bone in the laboratory, elevating "plastic surgery" to new heights.

Scientists can grow human neurons (brain cells) in a Petri dish and use them to test for drug toxicity. They can also grow sections of human brain within laboratory animals to study human neurogenesis.

Scientists have created hybrid animals like the "geep" (through the fusion of a goat embryo and a sheep embryo), and they are rapidly developing the technology to grow synthetic strands of DNA, insert them into living organisms, and alter them to breed heretofore unimaginable hybrid organisms. Scientists are also acquiring the technology to coax stem cells to become sperm and egg cells so that, one day, gay and lesbian couples might become the genetic parents of their own offspring through IVF.

This all appears to be a mix of the macabre, the medically promising, the morally good and the morally perilous. Welcome to the world of developmental biology.

At the risk of oversimplifying, we can describe developmental biology as the study of how the organism as a whole governs and guides its self-development and self-maintenance as a living being. This marks a relatively recent development in biology. In previous decades, the field took what we might call a "parts-to-whole" approach: it reduced biological processes to their biochemical underpinnings and endeavored to unlock the secrets of those fundamental dynamics, an effort culminating in the monumental sequencing of the human genome.

With the advent of developmental biology, the field assumes a "whole-to-part" approach as it now endeavors to study and harness the laws which govern the genesis of whole organisms. Of paramount interest here is to discover how human embryos "do it," how a one-celled human zygote brings about the development of an entire human organism.

The future possibilities of this science hold out the prospect of medical breakthroughs that were unimaginable only years ago: the elimination of certain birth defects, the generation of human organs in the laboratory, the recovery of mobility after spinal cord trauma, a cure for Parkinson's disease, and so on. The acquisition of such knowledge is now the true holy grail of developmental biology. But again, of paramount importance to acquiring that knowledge is research conducted directly on human embryos.

This is why efforts to defend embryonic human life will only be realistic and effective if they take into account the full reality of this rapidly emerging field.

Undoubtedly, we must acknowledge the legitimate aspirations of this field: to further human knowledge by acquiring an understanding of the dynamics of organismic development, and to put that knowledge at the service of humanity. As opponents of embryo-destructive research, we must also understand that there is no turning this field back, no saying, "Stop, Brave New World, I want to get off!" Nor, in principle, is there reason to desire this.

Notwithstanding the more harrowing scenarios described above, and the way Hollywood plays on our deep-seated suspicions of such science (think of The Island or, more recently, I Am Legend), I would suggest that we have nothing to fear in principle from developmental biology.

I say, in principle.

Are there potential perils in developmental biology? Are some of them extraordinarily dangerous? Of course. But those dangers do not in themselves constitute reasons for forgoing the progress of human knowledge in this field. Human knowledge is a fundamental human good; and from the Garden of Eden onward, history has witnessed that it is the free use of knowledge--not the knowledge itself--which can lead to evil outcomes.

We are, nonetheless, at a genuine turning point in human history. As Stanford's Dr. William Hurlbut, a member of the President's Council on Bioethics, has affirmed,

In reflecting on these dilemmas, it was immediately clear that we are at a defining moment in the progress of science. The choices our society makes now regarding embryonic stem cells (and other ethically controversial uses of biomedical technology) will put into place both the conceptual principles and practical foundations for future techniques of research and patterns of clinical practice. Once established, these moral precedents and scientific techniques will serve as the platform on which further practice will be built layer upon layer; like the foundations of a building, these will be difficult to retract or revise.[1]

So, outside of setting up a commune somewhere north of Saskatoon, I would suggest that our only way forward--in order to preserve the integrity of human dignity at all stages of life in the age of developmental biology--is to work toward an adequate delineation of what many have called the "boundaries of humanity."

This means working to discover solutions that will allow the science of developmental biology to go forward while precluding the direct use of human embryos, or at least substantially minimizing that use by offering ethically and scientifically acceptable alternatives. The Westchester Institute has been dedicated in full to just such a project for the past three years, and we will continue to be.

Our efforts have been directed toward sustained and painstaking moral and scientific consideration of what distinguishes a human organism from non-human, non-organismic biological artifacts. It can be morally licit to create the latter in the laboratory under certain conditions. But such discernment is proving very difficult. It requires a delineation of the minimum biological and metaphysical requirements for organismic existence. It requires us to attempt to define the set of primary, necessary, and sufficient indicators of what constitutes a living human organism. These efforts hold out the hope that such a demarcation will one day offer sound scientific and philosophical insights on which to base moral judgments regarding the creation, use, and moral status of an array of biologically confected entities of genetically human origin.

Again, our efforts here are motivated by the concern that many of developmental biology's pet projects would become so much easier if scientists could just work directly on human embryos to harness the laws that govern the genesis of entire organisms, organismic systems and parts.

If, that is, a majority of Americans didn't consider it morally repugnant to manufacture human embryos solely for research purposes.

How long before they are finally disabused of such an antiquated "moral taboo"? Perhaps as soon as January 20, 2009.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

[1] William D. Hurlbut, "Framing the Future: Embryonic Stem Cells, Ethics and the Emerging Era of Developmental Biology," Pediatric Research 59, 4 (2006): 6R. Dr. Hurlbut is Consulting Professor of Neurology and Neurological Sciences, Stanford Medical Center, Stanford University.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Benedict at Ground Zero

A moment "firmly etched" in his memory and ours.
DATE: April 22, 2008
TIME: 9:00 AM EST

Benedict’s sojourn among us last week was composed of countless “moments.”

There were, first and foremost, all those personal “moments”: “He looked right at me!” “He smiled at me!” “He grabbed me by the hand!”

There were the moments of ecumenism.

There was that moment of fraternity with his brother bishops which did not lack candor and admonishment—albeit highly measured—for how some bishops had failed to be the shepherds they should have been in the handling of sexual abuse by priests. And then the moment Benedict met with some of the victims of that abuse.

There were plenty of light moments as well. All he had to do was light up with that shy grin of his to send electricity through his audience. There was also that particularly warm moment when he greeted physically disabled young people at St. Joseph’s Seminary in Yonkers.

And then, of course, there was the moment at Ground Zero—a moment on which I now want to reflect in greater depth.

“My visit this morning to Ground Zero,” Pope Benedict told 3,000 well-wishers present to see him off at JFK Airport on Sunday night, “will remain firmly etched in my memory, as I continue to pray for those who died and for all who suffer in consequence of the tragedy that occurred there in 2001.”

We can only hope that Benedict’s uninterrupted moments of silent prayer before a small reflecting pool built for the occasion have brought the family members of those who perished on September 11th closer to closure—a word we were hearing a lot on Sunday. One thinks especially of the families of the approximately 1,100 victims of the attack who never recovered so much as a fragment of the bodies of their lost loved ones. Benedict blessed them, and he blessed the ground—the hallowed ground—in which those bodies, as one member of those families put it, are simply understood to rest.

Theologian and commentator George Weigel wrote last week in Newsweek magazine about another kind of “moment” Benedict may have already had—not necessarily during this apostolic journey to the US, but perhaps already somewhere in his three-year-old papacy.

Weigel was recalling the June 1979 visit of Pope John Paul II to Poland. Wrote Weigel:

Cold-war historians now recognize June 2–10, 1979, as a moment on which the history of our times pivoted. By igniting a revolution of conscience that gave birth to the Solidarity movement, John Paul II accelerated the pace of events that eventually led to the demise of European communism and a radically redrawn map in Eastern Europe. There were other actors and forces at work, to be sure; but that John Paul played a central role in the communist crackup, no serious student of the period doubts today.

Weigel’s salient point, however, is that few people were able to discern the significance of that trip at the time. There were certainly many reasons for this, but a deeper reason, suggests Weigel, might lie “in the filters through which many people read history today.” He notes that, according to one such filter, religious and moral conviction is irrelevant to shaping the flow of history. Nearly thirty years since that historic trip, history itself has demonstrated the stark inadequacy of such a filter.

Whether or not Benedict’s own “June 1979 moment” has already come, only time will tell. I don’t expect his presence at Ground Zero will necessarily play itself out as that moment, but then again, who knows? In addition to the peace and—we can only hope—further healing it brought to the families of the victims, how can we fail to grasp other significant aspects of this event?

In the person of Benedict, faith and reason met at Ground Zero on Sunday—the faith of the leader of one billion Catholics in the world, and the reason of one of contemporary history’s most educated men. Benedict has been unflinching in his contention that faith without reason (religious fanaticism) and reason without faith (secularism) are dangerous paths for humanity. Might this event occasion an even more intense dialogue between Islamic and Christian intellectuals on the place of religion in public life, its ability to shape culture, and our common need to respect religious freedom? Might Benedict’s presence at Ground Zero give renewed vigor to those agents of culture who are striving to disabuse Americans of our own brand of secularism, which relegates religion—at best—to the realm of the personally therapeutic and quirky, if not seeing it as something inherently divisive and even dangerous? We can only hope—precisely what Benedict would have us do.

The iconic image of the 81-year-old Pope lost in prayer before a reflecting pool at Ground Zero was, in the end, a poignant reminder that we live in a time, we might say a season, of great consequence for humanity. Time and again, almighty God—faithful to his creatures to the very end—has raised up men and women to lead us through remarkable seasons of the Church and of human history.

Isn’t this why, come to think of it, Robert Bolt’s play A Man for All Seasons—the story of just one such individual, Sir Thomas More—has acquired such timeless relevance and meaning? We have good reason to believe that Benedict is a man for our season, and that his moment has now come.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

What Will Benedict Tell America?

Ten things I'd love to hear him say.
DATE: April 15, 2008
TIME: 9:35 AM EST

In the (highly unlikely) event that I get a phone call later today from the Pope’s secretary asking me for input on the speeches that Benedict will deliver here this week, here are some talking points I would offer for his consideration. These are the things I would love to hear Benedict say:

First, to Catholics in America:

  • Contrary to recent news reports, I have not come to “give you a boost” or a “shot in the arm,” or to lead a pep rally for you. Actually, I’ve come to challenge you to be more authentically and thoroughly Catholic. Millions of your brothers and sisters throughout the world embrace the fullness of Catholic teaching, especially on moral issues, without picking and choosing—as too many of you do, cafeteria-style—which doctrines you like and which ones you don’t. Those who embrace the fullness of Catholic teaching are not mindless, subservient automatons. Rather, they have considered the reasons behind such teaching and found those reasons thoughtful and convincing.
  • I’ve also come to remind you that the Catholic Church is bigger than the Church in the United States. It’s important for you to recognize that the needs of the Church throughout the world are too great, and our shared mission too big, to be lost to self-absorption.
  • You presidents of—supposedly—Catholic universities: do the human family a favor and please be authentically Catholic in your campus life and academic culture. Such “catholicity” on a Catholic campus does not translate into accommodating—in the name of “tolerance”—customs, behaviors, art forms, student associations or doctrines on campus or in the classroom whose core messages and philosophies are antithetical to the Gospel. (And, yes, I am referring to “The Vagina Monologues,” among other things.) Tolerance, by the way, is not the core virtue of Catholicism; and there’s much more to being Catholic than working for social justice.
  • My brother priests: please recognize, if you haven’t already, that the level of religious knowledge and practice in your parishes is often near zero. Treat your parish ministry as a genuine mission field. Far too many Catholics hold to a feel-good, design-your-own brand of Christianity which is a hybrid of Catholic faith and a modern, therapeutic, self-absorbed emotivism. If you fail to preach the whole Word of God, the situation will continue to worsen until the actual Catholic faith is only a faint memory in the minds of most of the laity.

Then, to all Americans:

  • Don’t be afraid of asking the big questions (about God, truth, and ultimate reality). Instead, fear the peril of falling into that existential boredom so characteristic of Europeans these days.
  • Be the leaders, culturally and politically, in rejecting the idea that science should be untethered from moral restraints.
  • Keep the discussion about world religions honest, and don’t let a misguided understanding of “tolerance” lead you to accept anti-Christian bigotry and hatred. And just because I give you reasons for the values that I uphold, it doesn’t mean I am trying to “impose” my values on you.
  • Remember that the moral principles which sustain a healthy society (sanctity of life, marriage, etc.) are not simply faith-based, but are in fact naturally human and rational.
  • Remember that "democracy" is not a magic word. “Democracy cannot be idolized to the point of making it a substitute for morality or a panacea for immorality. Fundamentally, democracy is a ’system’ and as such is a means and not an end. Its ’moral’ value is not automatic, but depends on conformity to the moral law to which it, like every other form of human behaviour, must be subject: in other words, its morality depends on the morality of the ends which it pursues and of the means which it employs. But the value of democracy stands or falls with the values which it embodies and promotes.”[Note to his Holiness: you will likely recognize this last paragraph; it’s from Evangelium Vitae, n.70.]
  • A religiously pluralistic society can co-exist peacefully without asking people of faith to suspend their commitment to the truth of their doctrine. Religious dialogue does not consist in everyone agreeing to abandon their particular truth claims in order to come together and profess that no one's vision of ultimate reality is any better than anyone else's. A mutual commitment to the truth and a healthy respect for our common struggle for it is the sounder basis of inter-religious dialogue and tolerance. Don’t allow your religious and creedal beliefs to deteriorate into a tyranny of relativism.

§

Hmmm. Maybe I should go ahead and send these talking points along just in case. Now, where the heck did I put the Holy Father’s fax number?

___

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Tuesday, April 8, 2008

When Do We Die?

A long-established criterion for determining death is under growing scrutiny.


Thirty-six hours after Zack Dunlap had an accident last November on his souped-up ATV, doctors performed a PET scan and found there was no blood flowing to his brain. After informing his parents, the doctors declared Zack brain-dead. Then came the call to the organ-harvesting team to retrieve organs from Zack. As the team was being flown by helicopter to the Wichita Falls, Texas, hospital where Zack lay presumably dead, nurses began disconnecting tubes from his inert body. Only then did one of Zack’s relatives, who happens to be a nurse, test Zack for reflexes. Not only did Zack respond to pain; he was later able to tell a stunned television audience and Today Show host Natalie Morales that he had heard the doctor declare him brain-dead—and how much that ticked him off.

Stories like Zack’s seem to be more prevalent of late, and more disturbing. They raise reasonable doubts about three related issues: the reliability of the brain-death (BD) criterion as a standard for determining death; the rigor with which such determinations are made; and whether the medical establishment is dangerously biased toward organ harvesting over long-term, potentially regenerative care for persons who meet only the loosest standard for BD.

Until recently, the general consensus had been that BD—the irreversible and complete cessation of all brain function—constituted a sufficient criterion for establishing that a human individual has, in fact, died. That consensus has been challenged of late. Opponents, most notably Dr. Alan Shewmon, Chief of the Department of Neurology at Olive View Medical Center, UCLA, point to cases of individuals who have been declared brain-dead and have “survived” with the aid of artificial respiration and nutrition for weeks, months, and even years. Shewmon has published a controversial study of such survivors that poses a direct challenge to the neurological standard for determining death. In testimony before the President’s Council on Bioethics, Shewmon observed:

Contrary to popular belief, brain death is not a settled issue. I've been doing informal Socratic probing of colleagues over the years, and it's very rare that I come across a colleague, including among neurologists, who can give me a coherent reason why brain destruction or total brain non-function is death.

There's always some loose logic hidden in there somewhere, and those who are coherent usually end up with the psychological rationale, that this is no longer a human person even if it may be a human organism.

The American Academy of Neurology (AAN) established a set of guidelines for the determination of brain death in 1995 that remain a point of reference for many hospitals and physicians throughout the country. The AAN guidelines lay out diagnostic criteria for making a clinical diagnosis of BD. They note that the three “cardinal findings” of brain death are coma or unresponsiveness, absence of brainstem reflexes, and apnea (the cessation of breathing), and they outline a series of clinical tests and observations for making these findings. The guidelines also note that certain conditions can interfere with clinical diagnosis and recommend confirmatory tests when such conditions are present. Finally, the guidelines recommend a repeat clinical evaluation after a six-hour interval (while noting that this interval is arbitrary), using a series of confirmatory tests described in the document.

A recent study published in the journal Neurology, noting widespread variations in the application of the AAN guidelines, drew these conclusions:

Major differences exist in brain death guidelines among the leading neurologic hospitals in the United States. Adherence to the American Academy of Neurology guidelines is variable. If the guidelines reflect actual practice at each institution, there are substantial differences in practice which may have consequences for the determination of death and initiation of transplant procedures.

Such variability in applying a supposedly uniform criterion of BD, together with the growing number of BD survivors, must give us pause. So must the growing societal pressure to donate organs—notwithstanding the genuine hope that organ transplantation holds for millions of people.

That pressure arises from the fact that the numerical gap between available organ donors and patients who need organ transplants continues to grow every year. A recent survey indicated that in 2006 over 98,000 organs were needed for patients on US transplant waiting lists.

Complicating matters, the number of organs available through donation from brain-dead patients has remained flat for a number of years. And while organ donor cards and the growing use of advance medical directives have occasioned a slight increase in the number of cadaveric transplants, more organs are needed than are currently available.

Consequently, transplantation services are pressed to find new and ethically acceptable ways to increase the number of available organ donors. Some advocates of a less rigorous application of BD have gone so far as to openly consider the moral licitness of removing organs from anencephalic newborns, and from persons diagnosed as being permanently comatose or in a permanent vegetative state (PVS). And some members of the medical profession believe the solution lies in redefining BD so as to make it less restrictive.

One such approach would define BD (and consequently death itself) as the cessation of all higher-level (cortical) brain functioning—even if there were activity in other areas of the brain. Such was the proposal of Dr. Robert Veatch of the Kennedy Institute of Ethics at Georgetown University in his testimony before the President’s Council on Bioethics two years ago. “We could shift to a new definition of death that would classify some of these permanently comatose persons as dead,” affirmed Veatch. “In fact, a large group of scholars now in rejecting a whole brain definition has [endorsed] … a higher brain definition where some of these patients would be legally classified as dead.”

“But would the ordinary citizen accept such a definition?” he then asked. In response, he pointed to a study done at Case Western Reserve University looking at the opinions of ordinary citizens in the State of Ohio. The results were startling. Of the 1,351 citizens who participated, 57% considered a person in permanent coma to be dead, and 34% considered a person in a permanent vegetative state to be dead. Furthermore—again on Veatch’s interpretation of the data—with regard to the propriety of harvesting organs, under a more rigorous application of the BD criterion, 93% thought it acceptable to take organs. But in the case of permanent coma, 74% would procure organs, and even in the case of PVS, fully 55% would procure organs.

Veatch ended by exhorting those present: “I suggest that it's time to consider the enormous lifesaving potential of opening the question about going to a higher brain definition of death or, alternatively, making exceptions to the dead donor rule.”

Food for thought—and potentially for nightmares.

Admittedly, proponents of BD would question, in cases of survival after a BD determination such as Zack Dunlap’s, whether the criterion was applied strictly enough in the first place. That’s a legitimate question.

But research like Dr. Shewmon’s and the growing list of BD survivors are generating uneasiness not only in the medical field but also among potential organ donors who fear succumbing to some physician’s premature diagnosis of death. It seems to me that such uneasiness is warranted, and that the time has come for a much more rigorous moral and medical evaluation of the propriety of the BD criterion.
________

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Tuesday, April 1, 2008

Morality and the Emerging Field of Moral Psychology

Is there such a thing as a ‘moral instinct’?

In my March 11th column I began an exploration of some of the postulates of the emerging field of moral psychology. I would like to finish those reflections here by offering a more extended critique of “The Moral Instinct,” a lengthy article by Harvard psychologist Steven Pinker that ran in The New York Times Magazine in January.

Moral psychology is an emerging field of research that delves into questions that have long captivated the curiosity of a broad array of disciplines in the Arts and Sciences: To what extent do our own bodies influence and determine our moral judgments and behavior? Are there genetic predispositions for everything from altruism to serial killing? How are we to make sense out of the uniquely human endeavor of formulating moral judgments? Can an understanding of neurobiology and genetics shed any light on this?

These are not only valid questions, they are important and fascinating ones.

Steven Pinker, today a dominant voice in this field, suggests in his Times Magazine essay that moral psychology is going to allow us to get at “what morality is.” His essay is an extensive exposition, in layman’s terms, of one of the fundamental theses of moral psychology: namely, that there is a “distinctive part of our psychology for morality,” and that when this psychological state is switched on, we begin to moralize. Pinker then notes two hallmarks of this moralizing mindset: first, the moral rules invoked in the state of moralization are claimed to be universal; second, persons who transgress those rules are considered punishable.

Now, I would suggest that Pinker and his colleagues have not happened upon anything particularly remarkable here. If truth be told, they are simply noting a plainly obvious aspect of human nature: human beings moralize; we make value judgments. Across cultures and across history, human beings have expressed an understanding of right and wrong behavior, rewarding the one and punishing the other. That’s why I cannot help but agree with Pinker when he affirms:

In fact, there seems to be a Law of Conservation of Moralization, so that as old behaviors are taken out of the moralized column, new ones are added to it. Dozens of things that past generations treated as practical matters are now ethical battlegrounds, including disposable diapers, I.Q. tests, poultry farms, Barbie dolls and research on breast cancer.

Pinker is also absolutely correct when he affirms that most people engage not in moral reasoning but in moral rationalization. “They begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.” Indeed, principled and painstaking moral reasoning is today an ever more refined and atypical art.

But I would suggest that Pinker and his colleagues err in adducing this paucity of sound moral reasoning as evidence that morality is inherently unreasonable, that its sources are to be found exclusively in the non-rational, non-cognitive depths of an evolution-driven human psychology.

The apparent gap between (presumably irrational) moral convictions and their corresponding (rationalized) justifications does not constitute evidence for Pinker’s repeated, unargued assertion that we indeed have a “moral sense,” that is, a set of built-in moral categories which are the product of our own psychological evolution. Pinker and his colleagues, by the way, are certainly not the first thinkers to suggest that human beings make moral determinations through the operation of something called a moral sense. As a putative explanation of morality, moral sense theory dates back at least to the mid-18th century.

The upshot of Pinker’s essay, however, is that after going to extreme lengths to suggest precisely this—that our experience of morality is ultimately anchored in this “figment of the brain” he calls the moral sense—he ends by denying that very premise, or at least severely qualifying it. I’ll get to that in a minute.

Notwithstanding my critique, there is actually plenty of interesting material in Pinker’s essay, just as there are plenty of good and valuable insights to be expected from the field of moral psychology—much or all of which will be perfectly compatible with, or at least accounted for by, Aristotelian-Thomistic natural law theory.

For instance, Pinker spends much of the essay speculating on the significance of a set of instinctive moral intuitions which researchers suggest are shared across diverse ethnic and cultural traditions the world over. Such findings, Pinker observes, appear to lend credence to the theory that humans are hardwired with a “universal moral grammar.” Explaining an analogy drawn by the political philosopher John Rawls, Pinker notes that just as the linguist Noam Chomsky suggested that infants are outfitted with a “universal grammar” enabling them by default to analyze speech by way of built-in categories of grammatical structure, so too an innate universal moral grammar “forces us to analyze human action in terms of its moral structure, with just as little awareness.”

Pinker goes on to explain how those moral structures could consist in such things as the following: the impulse to avoid harming others; the propensity toward altruism and fairness; our inclination to respect authority; our avoidance of physical filth and defilement, as well as of potentially risky sexual behavior; and our willingness to share and sacrifice without expectation of payback.

Some moral psychologists have reduced these to a specific set of categories—namely ‘harm’, ‘fairness’, ‘community’, ‘authority’ and ‘purity’—which they understand to work as fundamental building blocks of our moral experience. These five categories, explains Pinker, “are good candidates for a periodic table of the moral sense not only because they are ubiquitous, but also because they appear to have deep evolutionary roots.” He adds, for good measure—and again without argumentation—that these five moral categories are “a legacy of evolution.”

Now, even though, reading between the lines, we discover that Pinker must be quite convinced that these categories are the product of evolution (“figments of our brain”), he nonetheless maintains that “far from debunking morality, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend.” By this he appears to suggest that we should simply learn to cope with our inborn moral sense, quirks and all (such as our taboos against homosexual attraction and our “yuck” response to the prospect of things like human cloning), leveraging our understanding of both its virtues and its defects in order to cobble together a kind of shared set of moral values and societal prerogatives we can all live with. Indeed, affirms Pinker, we must get around the quirkiness of that built-in moral sense because it can potentially “get in the way of doing the right thing.”

Now, that begs a huge question, doesn’t it?

If not in terms of our built-in moral sense, then in virtue of what exactly does Pinker propose that we can know “the right thing” to do? His affirmation can only make sense—contradicting what would appear to be a core assumption of his article, namely, that the moral sense is all we’ve got—if there is some other moral agency in us with which we can judge, refine, correct, or override our built-in moral sense.

To be sure, it is entirely plausible that we are endowed with something like a moral sense, with certain built-in predispositions toward empathy, altruism and the like, and that we can even discover something like these behaviors in non-human primates. Natural law theory can accommodate this rather easily, and on grounds strikingly similar to those on which Pinker holds his evolutionary moral sense suspect: only human reason can function as the immediate criterion for adjudicating the reasonableness of such built-in categories and tendencies in any given moral scenario in which they come into play.

But if that’s the case, then the moral sense, as a figment of our brain, is as fascinating as it is impotent to explain all that Pinker purports it to explain about morality. Indeed, if I understand Pinker correctly, at the end of his essay he affirms that, when the day is done, moral determinations will be guided by reason, and not by any moral sense at all. So why, I must ask, didn’t he just say that in the first place?

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Copyright 2008 The Westchester Institute for Ethics and the Human Person.

Tuesday, March 18, 2008


The “Vagina Monologues” to be presented (again) at Notre Dame

Last week I began a two-part column exploring and critiquing some of the postulates of the emerging field of ‘neuromorality.’ I will get back to that after Easter. I felt compelled to interrupt that topic, however, because I was so disturbed to hear that The Vagina Monologues would once again be presented (for a sixth year, in fact) at the University of Notre Dame. [I wish to note, however, that this is not altogether unrelated to the topic of how the brain relates to morality.]

First of all, and for the record, here are some facts as I understand them from a friend who teaches at Notre Dame. University president Fr. John Jenkins has set the following conditions for presentation of the play:

  • The play may only be presented if sponsored by an academic department. This year’s presentation is sponsored by the departments of sociology and anthropology; last year, no department sponsored it, so it had to be presented off campus;
  • The presentations may only be done in an academic setting (a classroom, not a theatre);
  • Immediately following each of the six scheduled presentations, there will be a mandatory panel discussion during which Catholic doctrine will be clearly expounded on the issues raised by the play;
  • The presentations shall not be aimed at fundraising for any purpose.

While noting that Fr. Jenkins, as president, would clearly be within his rights simply to disallow the play, I also recognize that he has his own prudential reasons for allowing it. I hasten to add that this six-year-old saga should not be construed as a reflection on Notre Dame as a whole. I lectured at the Law School there in January, and I have friends on the faculty. I must say I was delighted with what I saw and experienced on campus: wonderful things are happening at Notre Dame.

That said, when I first heard news late last week about this year’s presentations of the Monologues on campus, I was immediately reminded of that wonderful quote from G.K. Chesterton:

The modern world is not evil; in some ways the modern world is far too good. It is full of wild and wasted virtues. When a religious scheme is shattered (as Christianity was shattered at the Reformation), it is not merely the vices that are let loose. The vices are, indeed, let loose, and they wander and do damage. But the virtues wander more wildly, and the virtues do more terrible damage. The modern world is full of the old Christian virtues gone mad.

Now, President Jenkins was quoted as saying, in support of his decision to allow the play on campus again this year, that:

It is an indispensable part of the mission of a Catholic university to provide a forum in which multiple viewpoints are debated in reasoned and respectful exchange—always in dialogue with faith and the Catholic tradition—even around highly controversial topics.

Well, amen to that.

But here, is it not the case that the virtue, if you will, of ‘reasoned and respectful exchange’ has gone a little mad? I don’t deny, of course, that it’s possible to have a reasonable discussion even about different forms of moral depravity. But no matter what the topic, a reasonable exchange of thought presupposes many things, among them a prudent setting and a morally inoffensive presentation of the facts. Fr. Jenkins has made an effort to supply the former by requiring that the play be presented in an academic setting, but the latter condition remains unmet.

Now, how would we conduct a reasonable dialogue about, say, the exploitation of women through pornography? By gathering faculty and students together in a classroom to view and discuss blowups of Playboy centerfolds? Without having viewed the Monologues myself, I know enough about the play to know that it is crudely offensive in a similar way and renders the very idea of a substantive, genuinely reasoned discussion preposterous.

It is therefore a striking instance of serendipity that not three weeks after the presentation of the Monologues on the Notre Dame campus, Pope Benedict will be meeting (as reported last Friday by The Washington Post) with more than 200 top Catholic school officials from across the country.

What can we expect the Pope to say at the meeting? I expect his remarks will echo much of the substance of his address at the University of Regensburg (about which I’ve written in previous columns). Which is to say, Pope Benedict will likely affirm—to echo the words of Fr. Jenkins—that “an indispensable part of the mission of a Catholic university is to provide a forum for reasoned and respectful exchange of ideas.” And no matter what else he might say, we have here the very reason why any institution of higher learning should refuse to make a mockery of reasoned discourse, and should decline demands for anti-cultural trash such as the Monologues.

To be sure, sponsorship of the Monologues is not by a long shot the only or even the most egregious instance of unreasonable nonsense being passed off as culture at Catholic and secular universities. Nonetheless, it is central to the mission of intellectual stewardship that faculty and administrators at institutions of higher learning muster the backbone to say ‘no’ to unreason, and to say ‘no’ when necessary to minorities or majorities, no matter how vocal or how vicious.

* * *

And turning now to a superlatively more worthwhile topic, to all those taking the time to read this column today, I want to extend my warmest best wishes and the assurance of my prayers for a very blessed Holy Week and celebration of Easter.

Rev. Thomas V. Berg, L.C. is Executive Director of the Westchester Institute for Ethics and the Human Person.

Tuesday, March 11, 2008

Morality as Genetic Predisposition and Neurobiology

A look at the emerging field of moral psychology

Within the intersecting disciplines of psychology, neurobiology, philosophy of mind, ethics, and cognitive science, a new field of inquiry has lately emerged. Although it goes by different names, including such recent coinages as ‘neuromorality,’ the field is perhaps best referred to as moral psychology. I have touched on this topic in a previous column, but given the media’s recent fixation on it, I thought it was time to take a closer look.


One might consider moral psychology as an emerging field of research that delves into questions that have long captivated the curiosity of a broad array of disciplines in the Arts and Sciences, some for several centuries: To what extent do our own bodies influence and determine our moral judgments and behavior? Are there genetic predispositions for everything from altruism to serial killing? How are we to make sense out of the uniquely human endeavor of formulating moral judgments? Can an understanding of neurobiology and genetics shed any light on this?


For many researchers in this field, such questions boil down to the challenge of mapping out what some would call the "neuro-anatomy of moral judgment." Across the country, moral psychologists, working in tandem with behavioral psychologists, evolutionary biologists, and researchers in related fields, believe they are hot on the trail of figuring out how humans are "wired for morality."


As a token example of the work in this field, we could note the investigations of Harvard University psychology professor Marc Hauser, author most recently of Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (2006). Hauser was featured last May in the Harvard University Gazette Online:


In a talk April 26, psychology professor Marc Hauser argued that our moral sense is part of our evolutionary inheritance. Like the “language instinct” hypothesized by linguistic theorist Noam Chomsky, the capacity for moral judgment is a universal human trait, “relatively immune” to cultural differences. Hauser described it as a “cold calculus,” independent of emotion, whose workings are largely inaccessible to our conscious minds.


Hauser, along with other leaders in this emerging field, would have us conclude that morality is ultimately explainable in empirical terms: genetic evolution and inheritance, brain anatomy, and neuronal activity, mixed with our environment and education—this and little more.


And that is where the trouble with neuromorality begins. It is indeed unfortunate that the pioneers of the new moral psychology—given all the potential for truly breathtaking and worthwhile insights their discipline can provide—appear all too ready to succumb to the intellectual hubris that would reduce the broader whole of understanding to one very narrow vantage point. And this is already leading to untenable extremes.


A case in point is a recent lengthy article that ran in The New York Times Magazine entitled “The Moral Instinct,” by another Harvard psychologist, Steven Pinker. I will engage in a lengthier critique of Pinker’s article next week, but to give you a taste of some of the unfortunate excesses of neuromorality, allow me to share and comment on the following amazing paragraphs. Writes Pinker:



The gap between people’s convictions and their justifications is also on display in the favorite new sandbox of moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says ‘yes’.


Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge? Both dilemmas present you with the option of sacrificing one life to save five, and so, by the utilitarian standard of what would result in the greatest good for the greatest number, the two dilemmas are morally equivalent. But most people don’t see it that way… When pressed for a reason, they can’t come up with anything coherent, though moral philosophers haven’t had an easy time coming up with a relevant difference, either (p. 35, emphasis my own).


What this is supposed to show is that our deepest convictions about right and wrong are based not on reasons but on deep-seated tendencies, hardwired into our brains by our DNA and evolutionary history. The fact that people have a hard time coming up with reasons for their moral convictions is adduced as evidence either that there are no reasons, or that any reasons given are utterly relative and may or may not reflect the deeper workings of our DNA-driven psychological dispositions.

Pinker’s interpretation of the Trolley Problem—and presumably that of most people surveyed—fails to distinguish between intending harm and allowing a foreseeable harm on reasonable grounds. The former is immoral; the latter may constitute a licit option, depending on the case. Which is to say, the natural law tradition clearly provides reasons why it might be licit to pull the lever and divert the trolley onto the spur (the first case), and reasons why it would never be licit to throw the fat man down onto the tracks (the second case). The fact that persons surveyed had trouble articulating reasons for their moral convictions should not suggest that morality is ultimately irrational—determined within the deep recesses of our genetically predisposed subconscious—but simply that most people today have little or no formal training in ethics, let alone natural law theory. But more on this next week.

To conclude: the field of moral psychology is in many ways fascinating. It will undoubtedly make many valuable contributions not only to our philosophical understanding of human nature and morality, but also to our cultural deliberations about how to educate our young people to live sound moral lives. It will do all of this a grave disservice, however, if moral psychologists aim to reduce entire fields of human understanding (in this case, moral knowledge) to "nothing but" the stuff of neurological function and evolutionary biology.