Published June 1, 2002
A new book by Francis Fukuyama is always an event. That has been true since the publication of his essay “The End of History” in The National Interest in 1989 and the book that grew out of it in 1992. A decade later, the “end of history” debate still has some life to it. Meanwhile, Fukuyama himself has modified his thesis. While he continues to see signs of a “convergence toward liberal democracy around the globe,” he now acknowledges that there’s a potent joker in the deck: science. “Much of late twentieth-century technology,” Fukuyama observes in the preface to his new book, Our Posthuman Future: Consequences of the Biotechnology Revolution (Farrar, Straus and Giroux), “like the so-called Information Revolution, was quite conducive to the spread of liberal democracy. But we are nowhere near the end of science, and indeed seem to be in the midst of a monumental advance in the life sciences.”
Fukuyama is currently Bernard Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University. Michael Cromartie, who interviewed him in the July/August 1999 issue of Books & Culture on the publication of his previous book, The Great Disruption: Human Nature and the Reconstitution of Social Order, met with Fukuyama in Washington to talk about his new book and the challenges posed by biotechnology.
Conversation with Fukuyama
What are the promises and the dangers of the biotechnology revolution?
The promising stuff is pretty clear. There is the possibility of therapies for all sorts of genetically linked diseases, perhaps including cancer. I don’t see any moral objection at all to using biotechnology in this way, for therapeutic ends. I think that it becomes more problematic when it’s used for enhancement purposes, when it is used to consciously take an otherwise normal human being and make him or her into something very different. I think that’s where the danger lies.
To me, the single greatest danger will come when genetic engineering for human beings–so-called germline engineering–becomes possible, and then you are talking about a period completely unprecedented in human history, where human beings will consciously be able to take over the evolutionary process.
How far away is that?
It could be another generation or even longer before this stuff becomes realizable, but they do it in agricultural biotech all the time: slip a gene out of a plant and put it in another plant. Cloning is significant not so much in itself–actually, I don’t think there are many people who will want to clone themselves–but because it marks a turning point: the ability to produce a human being whose genetic profile you know completely in advance. It’s a kind of down payment on a lot of things that will be coming up.
But in the book I talk about other concerns that are even closer at hand. For example, a better cognitive neuroscience will lead to a much greater understanding of the biological basis of human behavior, and thus will offer the potential to manipulate human behavior in ways that we haven’t been able to do before. That world has already started to arrive with drugs like Prozac and Ritalin, which are just the tip of the iceberg. I think most things that people anticipate happening through genetic engineering will happen through drugs first.
All of this collectively has me worried because it suggests new experiments in social engineering–which are unlikely to be more successful than our previous efforts at social engineering. We’ll have all sorts of unanticipated consequences and will cause potentially a lot of hardship as we try to exploit these technologies to make people behave the way we want them to. So that in a nutshell is my core fear.
I’m tempted to argue, What do we have to lose by embracing technologies that allow us to become smarter, feel better, live longer? But you suggest in the book that the benefits of such technologies are unlikely to be evenly distributed.
That’s already happening as a result of better public health and dropping fertility. The median age in most of Europe by the year 2050, absent massive immigration, is going to be something close to 60 years old. Meanwhile in the Middle East, North Africa, and sub-Saharan Africa, the median age is going to be 21 or 22–where it’s always been through human history. And so you’re going to have this little island of well-to-do elderly people surrounded by vast numbers of people who are a good deal younger and poorer, all wanting to move to the island.
But biotechnology carries the potential for much greater inequality. If the human race in effect starts speciating into different genetic types, this will bring about the world that Nietzsche wanted to welcome in. If you can change the essential nature of human beings, then I think you change–perhaps destroy–the basis on which we assign human beings rights.
That would be the “posthuman future” you’re warning us about in the book. But you aren’t saying that such a future is inevitable; in fact, you are saying just the opposite.
That’s right. We have a choice–we can regulate this new technology, just as we have already chosen to regulate human experimentation, for example. We greatly slow down the rate of scientific advance by making it really hard for doctors to injure patients. If you could do clinical trials where you could take advantage of poorly educated subjects as they did in the Tuskegee syphilis experiment, you could move a lot faster. But we’ve decided for ethical reasons we don’t want to do that, despite all the great things we could gain as a result. It’s a matter of political will, exercised for the common good.
The history of the United States is a history of blacks, women, and others struggling to get admitted into the charmed circle of people who are endowed with political rights and considered full human beings. Despite some ongoing problems, that battle has been won–that’s one of the great accomplishments of twentieth-century politics. We declared in 1776 this principle that all men are created equal, and then we gradually fulfilled it. Genetic engineering has the power to undo all that we have accomplished, in fact creating different classes of people who would not be by nature equal and therefore not entitled to equal rights.
Why is a clear understanding of human nature so critical as we wrestle with these questions?
Human nature is not infinitely malleable. For us to flourish as human beings, we have to live according to our nature, satisfying the deepest longings that we as natural beings have. The modern language of rights is just a translation of Aristotle’s language of human goods. For example, our nature gives us tremendous cognitive capabilities, capability for reason, capability to learn, to teach ourselves things, to change our opinions, and so forth. What follows from that? A way of life that permits such growth is better than a life in which this capacity is shriveled and stunted in various ways. That’s what’s meant by the basis of rights in nature. It’s to understand natural goods and promote the fullest expression of them.
The problem with modern ethical doctrines is that they either reduce human ends simply to utilitarian ones–pain and pleasure, or some other reductionist view of what human beings seek–or elevate individual autonomy to be the good of all human goods, so that as long as you have autonomy you don’t need any of these substantive human goods.
But this is an area, isn’t it–because the dangers of runaway biotechnology are so staggering–where there can be real coalitions crossing political lines? It seems that people on the Right and on the Left can agree that certain boundaries ought to be put on certain biotechnological advances.
That’s happening already. I think in the cloning bill you’ve already seen an alliance of social conservatives and religious conservatives with environmentalists and other kinds of progressives and certain feminists. On the other hand, there is a fundamental split between libertarian conservatives and people who take religion seriously. Libertarians believe in the complete sovereignty of the individual and individual preferences. There’s really no higher moral basis on which an individual’s choices can be criticized. This fundamental difference in outlook has been papered over in the existing conservative coalition, but the biotechnology revolution will expose it.
How are we going to navigate this complex moral terrain? Is there a single standard that you suggest to guide our decisions? Some core principle that we can all agree is worth defending?
My own preference would be a slogan like “Hands off human nature.” You don’t want to do things that really change core human behaviors. I think in practice the way you would implement that is by saying that you really want to try to draw some distinction between therapy and enhancement, that you want to preserve the uses of biomedicine to heal the sick, including people that have genetic disorders and genetic diseases, but you don’t want to do things that turn people into gods or subhumans, in effect.
Are there limitations to this idea that human nature can provide the normative ground in the coming age of biotechnology?
Well, there’s religion. I have a long chapter, which I regard as the central chapter in my book, on human dignity, in which I try to explain what that means in secular terms. But you could do it just as easily by saying that man was created in the image of God. It may have been easier to simply take that line of argument.
No, I like the way you did it. You said, “For religious people, man is endowed with dignity by God, and now I’m going to make an argument for those of you who don’t believe that.”
And you end up in the same place, really, because you are making the argument for human dignity. In this exceptionally diverse society, you’re not going to convince a majority of people strictly by appeal to religious principles. There may have been a time when you could have, but now it’s probably not the case, and so you have to build coalitions. I think that is what’s going on now. You have to go at it in a variety of ways.