Free speech in a university is a very different thing from free speech in Congress or Parliament, freedom of the press, or free speech in the street. Each milieu has its own conventions and traditions, and each must protect its freedoms for its own purposes and with a view to its own particular good. In everyday conversation, it is not as a rule advisable that all aspects of a question be openly discussed, and laws of libel, public order, and sedition protect people from hurtful or provocative language.
Those laws have been radically extended in recent times, with the invention of “hate speech” as a quasi-legal category, and legislation like the UK’s Racial and Religious Hatred Act 2006, which makes it an offense to “stir up hatred” toward racial and religious groups. The emerging consensus is that, in the arena of everyday encounters, untrammeled freedom of speech has costs that might well outweigh its benefits, and the law has the right to intervene on behalf of public order.
What, however, should be the rule governing free speech in a university? A modern university is very different from the medieval institution from which it descends. The medieval university contained faculties of law and medicine, and it extended its reach into mathematics and the natural sciences. But it was built around the study of the dogmas and authorities of the Church. A large part of its intellectual labor was devoted to identifying and extirpating heresies, and although you could do this only if you were free to express those heresies in words and to examine the arguments given in support of them, you were not in any real sense free to affirm them. It would be quite misleading to say that the medieval university was devoted to the advancement of free inquiry, since freedom stopped dead at the exit from faith—even if that exit could be discovered only by a kind of free inquiry.
There are universities in existence today that resemble the medieval pattern—Al-Azhar in Cairo is an evident example, and an unusual one in that it has itself survived from the earliest medieval times and was the model for the universities that sprang up much later in Christian Europe. For the most part, however, our universities underwent a radical change in their social and intellectual agenda at the Enlightenment, when theology was displaced from its central position in the curriculum, and the humanities—the studia humaniora—came to replace the studia divina. Although skepticism, atheism, and heresy were still off the agenda, this was largely because they were regarded as errors rather than as crimes. By the time the University of Berlin was founded under Humboldt’s direction in 1810, it was assumed on every side that universities were places of free inquiry, whose purpose was to advance knowledge regardless of where it might lead, and to make knowledge available to the rising generation. This emphasis on knowledge applied not only in the sciences, where free inquiry is in any case of the essence, but also in the humanities.
Two interesting intellectual disciplines emerged during the course of the eighteenth century: the comparative study of religions and the philological study of the scriptures. While neither of those studies was directed against the tenets of the Christian faith, they both had the effect of removing some of the carefully protected certainties at the heart of it. By the beginning of the nineteenth century, it was only an ill-informed person who could believe the Bible to be literally the word of God, or the Christian religion to be the unique form of religious devotion. When Mill issued his famous defense of free opinion, in On Liberty (1859), it was widely accepted that the free expression of dissenting views is important in all areas of inquiry and not just in the natural sciences. To quote Mill’s now famous words:
The peculiar evil of silencing the expression of an opinion is that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.
That is fine, so far as it goes; but what if it is not truth that people are seeking, but some other benefit, such as membership, solidarity, or consolation? Is freedom of opinion the same benefit in the search for consolation as it is in the search for truth? Clearly not. Religions, Durkheim taught us, offer membership, and that is their social function. They fill the void in the human heart with the mystical presence of the group, and if they do not provide this benefit they will wither and die, like the religions of the ancient world during the Hellenistic period. It is in the nature of a religion to protect itself from rival groups and the heresies that promote them. It is therefore not an accident that heretics are marginalized, murdered, or burned at the stake.
Of course, we Christians no longer engage in those practices, since we have learned the art of putting our religion on hold when dealing with those who do not share it, thereby clearing as much space as possible for the free discussion of alternatives. But this ongoing compromise, between religion and free inquiry, is foreign to many other worldviews. We now have living among us people who believe that errors of religion are punishable by death and that those who carry out this punishment win special favor with the Almighty.
Interestingly enough, however, it is not every error of religion that calls down this response. This fact is of the first importance in understanding our changed circumstances today. A Glasgow shopkeeper, Asad Shah, was recently savagely murdered by a young man called Tanveer Ahmed. Mr. Shah’s offense was that he belonged to the Ahmadi sect of Islam, a branch of the Shi’a that welcomes open relations with nonbelievers and extends a Sufi-like goodwill toward those who have yet to obtain salvation—a fact not unconnected with Mr. Shah’s status as a loved and respected neighbor of the people among whom he had settled. As the murderer was led away to life imprisonment, crowds of fellow Sunnis gathered outside the court to proclaim their support, while Mr. Ahmed himself, who openly confessed to the crime, expressed no regret for having committed it. On the other hand, Mr. Ahmed insisted that he felt no aggression toward Christians, Jews, or adherents of some other religion. He was offended by a heresy within Islam, not by the existence of a rival faith. In a peculiar way, trapped as he was by a quasi-genetic imperative of which he was merely the contemptible slave, he wished to vindicate his action in the eyes of his fellow Sunnis, and was entirely indifferent to the rest of the world. It was not error that offended him but deviation in the heart of his own inherited community.
The example is one of many, and we should learn from it. The heretic offends not because he has acquired the wrong beliefs in the course of his religious inquiries. Christians, Jews, and atheists are all in error, as far as Mr. Ahmed was concerned. But their errors were not Mr. Ahmed’s concern, and in no way offensive to him. Mr. Shah, however, was a heretic, one whose errors are not just errors but crimes, since they attack the group from a place within its spiritual territory. Heretics are essentially subversive: to accept what they say is to acknowledge that, in some deep sense, the group is arbitrary, that it might have been put together in another way, and that those currently regarded as members and side-by-side with you in life might have been strangers, even enemies, in the search for spiritual and geographical Lebensraum. This thought is subversive of the whole religious project, since it tells you that, after all, truth is not what religion is about, that any old doctrine might have served just as well, provided the benefits of membership flowed from it. In effect, though not in intention, the heretic relativizes what must be believed absolutely if it is to be believed at all.
The fear of heresy is not exhibited only in the realm of religious belief. If you look at the history of the communist movement, you will be reminded of the often genocidal disputes over Arianism and Pelagianism in the ancient world, and of the religious inquisitions of the late medieval period, in which heresies were singled out and named—sometimes for the person who first committed them or made them prominent. The Second International gave us “Menshevism” and “left deviationism,” which were followed by “infantile leftism,” “social fascism,” and in due course “Trotskyism,” all to be contrasted with the “Marxism-Leninism” that was eventually settled upon as orthodoxy. Particularly amusing is the accusation brought against Dr. Zhivago for relying on his diagnostic intuition: “neo-Schellingism.” Once again the real danger was for the heretic within, rather than for the outsider who could, at the time, safely laugh at what was happening—though the time came, as it is coming with Islamism today, when nobody could laugh safely. And maybe this is happening, too, in our universities, as the undefined and indefinable heresies are captured by labels and stuck with all the force required on the chosen victim: racism, sexism, ageism, speciesism, and so on, all potentially career-ending offenses.
The fear of heresy arises whenever groups are defined by a doctrine. No matter how absurd the doctrine may be, if it is a test of membership then it must be protected from criticism. And the more absurd it is, the more vehement the protection. Most of us can live with false accusations, but when a criticism is true we hasten to silence the one who utters it. In just that way, it is the most vulnerable religious doctrines that are the most violently protected. If you mock the claim of Muslims that theirs is a “religion of peace,” you run the greatest of risks: the Islamist proves his devotion to peace by killing those who question it.
In universities today, however, students—and certainly the most politically active among them—tend to resist the idea of exclusive groups. They are particularly insistent that distinctions associated with their inherited culture—between sexes, classes, and races; between genders and orientations; between religions and lifestyles—should be rejected, in the interests of an all-comprehending equality that leaves each person to be who she really is. A great negation sign has been placed in front of all the old distinctions, and an ethos of “non-discrimination” adopted in their stead. And yet this seeming open-mindedness inspires its proponents to silence those who offend against it. Certain opinions—namely, those that make the forbidden distinctions—become heretical. By a move that Michael Polanyi described as “moral inversion,” an old form of moral censure is renewed, by turning it against its erstwhile proponents. Thus, when a visiting speaker is diagnosed as someone who makes “invidious distinctions,” he or she is very likely to be subjected to intimidation for being a supporter of old forms of intimidation.
There may be no knowing in advance how the new heresies might be committed, or what exactly they are, since the ethic of nondiscrimination is constantly evolving to undo distinctions that were only yesterday part of the fabric of reality. When Germaine Greer made the passing remark that, in her opinion, women who regarded themselves as men were not, in the absence of a penis, actually members of the male sex, the remark was judged to be so offensive that a campaign was mounted to prevent her speaking at Cardiff University. The campaign was not successful, partly because Germaine Greer is the person she is. But the fact that she had committed a heresy was unknown to her at the time, and probably only dawned on her accusers in the course of practicing that morning’s “Two Minutes Hate.”
More successful was the campaign in Britain to punish Sir Tim Hunt, the Nobel Prize–winning biologist, for making a tactless remark about the difference between men and women in the laboratory. A media-wide witch hunt began, leading Sir Tim to resign from his professorship at University College London; the Royal Society (of which he is a fellow) went public with a denunciation, and he was pushed aside by the scientific community. A lifetime of distinguished creative work ended in ruin. That is not censorship, so much as the collective punishment of heresy, and we should try to understand it in those terms.
The ethic of nondiscrimination tells us that we must not make any distinctions between the sexes and that women are as adapted to a scientific career as men are. That view is unquestionable in any territory claimed by the radical feminists. I don’t know whether it is true, but I doubt that it is, and Sir Tim’s tactless remark suggested that he does not believe it either. How would I find out who is right? Surely, by considering the arguments, by weighing the competing opinions in the balance of reasoned discussion, and by encouraging the free expression of heretical views. Truth arises by an invisible hand from our many errors, and both error and truth must be permitted if the process is to work. Heresy arises, however, when someone questions a belief that must not be questioned from within a group’s favored territory. The favored territory of radical feminism is the academic world, the place where careers can be made and alliances formed through the attack on male privilege. A dissident within the academic community must therefore be exposed, like Sir Tim, to public intimidation and abuse, and in the age of the Internet this punishment can be amplified without cost to those who inflict it. This process of intimidation casts doubt, in the minds of reasonable people, on the doctrine that inspires it. Why protect a belief that stands on its own two feet? The intellectual frailty of the feminist orthodoxy is there for all to see in the fate of Sir Tim.
Is there any reason for thinking that universities have a special role in these matters, either to support free speech in general or to create a space where it can occur? The answer, I think, is yes, and both University College London and the Royal Society displayed, in their refusal to protect Sir Tim from the cloud of twittering morons, the sad state of the academic world today, which is losing all sense of its role as guardian of the intellectual life—losing it precisely through giving way before the orthodoxies of nondiscrimination. As Jonathan Haidt has eloquently argued, at the very moment when universities are advocating diversity as a fundamental academic value—meaning by “diversity” all that I have included under the term “nondiscrimination”—the true diversity for which a university should make a stand, namely diversity of opinion, has been steadily eroded and in many places destroyed entirely.
The reasons for the ethic of nondiscrimination, and for the moral inversion that has made it into a fierce form of discrimination, directed against whoever transgresses its fluid and unpredictable boundaries, lie deep. As Rusty Reno has eloquently argued in Resurrecting the Idea of a Christian Society, the Enlightenment, which sought for a world in which reason had a head start over prejudice in all public debate, also sowed the seeds of its own destruction, in exalting individual autonomy above every form of obedience. I am my own author was the Enlightenment premise; I can be what I choose to be, provided I do no harm to others. Social conventions, traditional forms of life, divisions of roles and communal identities, even the differences in social status associated with the biological division of labor between the sexes—all such things are of no significance compared with my free choice whether to give credence to them. Little by little, as the old authorities slipped away or lost their aura, more and more of human life was stripped of the rules, customs, and distinctions that make sense of it, and more and more did everything in life, everything that might matter to me and constitute my personal happiness, become an object of choice, in which only I have the right of action, and nobody else has the right to interfere.
Hence nobody now may impose upon me an identity that I myself have not chosen. My nature as a self-created being is inviolable. Your disapproval of my lifestyle is your problem, not mine; should you object to my homosexuality, that proves only that you suffer from homophobia, a disorder of the soul that is also a hangover from an outmoded form of life. There is therefore no room now for argument about homosexuality, still less for criticism. Your objection to Islam and the presence in our midst of its adherents is your problem—a sign of Islamophobia, a mental disease that unaccountably swept across the Western world on September 11, 2001. Racism, sexism, homophobia, Islamophobia—all the -isms and -phobias that call down the damning tirades of the orthodox—are the residue of old and vanquished forms of life, last gasps of Western civilization in its vain attempt to cling to its empire among the living. That is what Germaine Greer came up against: a new and unexpected extension of the morality of self-choice, which tells us that we are guilty of transphobia if we deny a person the right to decide for herself what gender she is.
This is all very well, you might say, but it does not yet constitute an assault on free speech. And that is true. It is perfectly possible to accept the latest adventure in nondiscrimination while allowing others to speak out against it. However, it doesn’t work that way. The furor over the “transgender” issue comes into the general category of identity politics. It is about who you are, not what you think. So thinking the wrong thing, still more saying the wrong thing, is an act of aggression, the equivalent of racist abuse or sexual harassment in the workplace. The nondiscrimination movement is about extending to others the freedom to choose their own identity; to criticize this is to constrain other people in their deepest being, in those “existential choices” that determine who they are: it is an act of aggression and not just a comment. Hence it must be punished. More, it must be rooted out, with full-scale purges and witch hunts and the official purification of the language of scholarship. At this moment the Students’ Union at the School of Oriental and African Studies, in the University of London, a school that was one of the pioneers in the study of oriental religion and philosophy, is agitating to remove Descartes, Hume, Kant, and the rest from the philosophy curriculum, since they were simply apologists for their “colonial context.”
Hence the ethic of nondiscrimination ends up as an assault on free speech in just the same way as does the ethic of religious discrimination—fear of the heretic. This suggests to me that we are dealing with a feature of human nature that lies too deep for any lasting remedy. Nonbelonging is an identity-forming stance, just as much as belonging. Threaten the identity that results and you must be exposed, shamed, and if possible silenced.
One of the most remarkable features of the new kinds of identity, however, is the persecution of the heretic through a gesture of self-persecution. There is an initial martyrdom moment as the would-be victims see an opportunity to “take offense” and to put their vulnerability on display. Traditional education had much to say about the art of not giving offense. Modern education has a lot more to say about the art of taking offense. This, in my experience, has been one of the achievements of gender studies, which has shown students how to take offense at behavior, at words, at institutions and customs, and even at facts when “gender identity” is in question. It did not take much education to make old-fashioned women take offense at the presence of a man in the women’s bathroom. But it takes a lot of education to teach a woman to take offense at a women’s bathroom that biological males who declare themselves to be women are not free to use. But the education is there, and for a mere $200,000 in an Ivy League university you can acquire it.
In a similar spirit, students today are being encouraged—and again gender studies is at the forefront of the movement—to demand “safe spaces” where their carefully nurtured vulnerabilities will not be “triggered” into crisis. The correct response to this, which is to invite students to look for a safe space elsewhere, is not one that universities seem to consider, since after all each student is an addition to the income account, and censorship costs nothing.
Saving the University as an Institution
This brings me, at last, to the place of the university in the exercise of free speech. It seems to me that the battles between those who unwittingly give offense and those who are experts in taking it can be conducted on the street, in the restaurant, the bar, and the family (if families are still allowed) without losing the precious thing our civilization passed on to us, which is the love of truth and the ability to face up to it, whether or not it consoles us. It is my belief—hard to justify and as much the product of my experience as of any philosophical argument—that an institution in which the truth can impartially be sought, without censorship, and without penalties imposed on those who disagree with the prevailing orthodoxy, is a social benefit beyond anything that can now be achieved by controlling permitted opinion. I can accept that there might be laws, conventions, and manners limiting the expression of opinion in the world at large, in those places where this or that group has staked a claim to its identity. I can accept that you must tread softly when it comes to religion, sexual mores, and the expression of loyalties that conflict with your own. But if the university renounces its calling in the matter of truth-directed argument, then we not only lose a great benefit from which all of us profit; we lose the university as an institution. It becomes something else—a center of indoctrination without a doctrine, a way of closing the mind without the great benefit that is conferred by religion, which also closes the mind, but closes it around a community-creating narrative. We should recall that, when the totalitarian movements of the twentieth century began their wars and genocides, the universities were first among their targets—the places where discussion was most urgently to be controlled. 
The behavior of the communist and anarchist student cells in Russia, and the Brown Shirts in Germany, was repeated by the student revolutionaries of May 1968 in France and by many student activists today.
Indeed, my own experience of universities has not, in this matter, been altogether encouraging. I do not think there is very much censorship in our universities, other than that imposed impromptu by the students and acquiesced in by a weak establishment. But it has been true for a long time that there are orthodoxies in a university that cannot easily be transgressed without penalty, and that the penalty is not imposed on scholarly or academic grounds but on grounds that could fairly be described as ideological.
It will always be true that a public doctrine holds sway in any civilized community, and that the universities will be expected to conform to it, however obliquely. In our case, however, it is the universities that have created the orthodoxy. The left-liberal worldview concealed, as an unacknowledged and unquestionable premise, within the humanities as they are taught today is, as we were reminded in the Brexit vote and in the election of Donald Trump, not orthodoxy in the surrounding community. But it is an astute career move to conform to it, whether or not you agree. Moreover it endorses and is endorsed by the community of nonbelonging that is emerging among the students. The left-liberal worldview is not, on the whole, concerned with the wider situation of the world, for all its global pretensions. It is concerned with us, with the Western inheritance. It is an exercise in self-castigation, designed to show in all matters—history, literature, art, religion—the glaring moral faults of a civilization that has depended on distinctions of sex, race, class, orientation, and the rest in order to manufacture a false image of its superiority. At the same time, the current orthodoxy carefully refrains from any comparative judgments: gender studies will give you an earful of spite about the treatment of women and homosexuals in Western societies, but carefully pass over the treatment of women and homosexuals in Islam. After all, it is important not to incur the charge of Islamophobia. The university must become a “safe space” for Muslims, as well as for other vulnerable and marginalized groups—hence the successful campaign to force Brandeis University to withdraw the honorary degree offered to Ayaan Hirsi Ali. She had spoken truths about Islam and was therefore a threat to Muslim students and an invasion of the “safe space” that the university was obliged to offer them.
Now I, too, would like the university to be a safe space, but a safe space for rational argument about the pressing issues of our time. In our world today, grotesque falsehoods are constantly repeated for fear of offending the vigilantes of Islam or the thought police of political correctness. We cannot freely discuss the nature of Islam, its sacred text and guiding myths, and its legal status in a secular society. The charge of Islamophobia is designed precisely to shut down debate about the matters that most need to be debated—for example, whether it is true that, for a Muslim, apostasy means death, adultery means stoning, or that secular law and the nation-state mean, as Sayyid Qutb has said they mean, blasphemy against the Koran. By not discussing these things, we do a great disservice to our Muslim fellow citizens in not opening avenues to their integration in the only community they really have. Nor can we freely discuss any of the iconic issues singled out as defining political correctness—such as sex, gender, orientation. We are wandering in a world of utter relativity but bound by orders that are absolutes—the order not to refer to this, not to laugh at that, and in the presence of all uncertain things to stay silent. In all this we are losing our sense that some things really matter, and matter because they are true and not just because some group of benighted people believe them, or some other group has decided to enforce them. If a university stands for anything, surely it stands for that idea of truth, as a guiding light in our darkness and the source of real knowledge.
The end of the law is, not to abolish or restrain, but to preserve and enlarge freedom. For in all the states of created beings capable of laws, where there is no law there is no freedom. For liberty is to be free from restraint and violence from others; which cannot be where there is no law: and is not, as we are told, a liberty for every man to do what he lists. (For who could be free when every other man’s humour might domineer over him?) But a liberty to dispose, and order as he lists, his person, actions, possessions, and his whole property, within the allowance of those laws under which he is, and therein not to be the subject of the arbitrary will of another, but freely follow his own.
Individual liberty in modern times can hardly be traced back farther than the England of the seventeenth century. It appeared first, as it probably always does, as a by-product of a struggle for power rather than as the result of deliberate aim. But it remained long enough for its benefits to be recognized. And for over two hundred years the preservation and perfection of individual liberty became the guiding ideal in that country, and its institutions and traditions the model for the civilized world.
This does not mean that the heritage of the Middle Ages is irrelevant to modern liberty. But its significance is not quite what it is often thought to be. True, in many respects medieval man enjoyed more liberty than is now commonly believed. But there is little ground for thinking that the liberties of the English were then substantially greater than those of many Continental peoples. But if men of the Middle Ages knew many liberties in the sense of privileges granted to estates or persons, they hardly knew liberty as a general condition of the people. In some respects the general conceptions that prevailed then about the nature and sources of law and order prevented the problem of liberty from arising in its modern form. Yet it might also be said that it was because England retained more of the common medieval ideal of the supremacy of law, which was destroyed elsewhere by the rise of absolutism, that she was able to initiate the modern growth of liberty.
This medieval view, which is profoundly important as background for modern developments, though completely accepted perhaps only during the early Middle Ages, was that “the state cannot itself create or make law, and of course as little abolish or violate law, because this would mean to abolish justice itself, it would be absurd, a sin, a rebellion against God who alone creates law.” For centuries it was recognized doctrine that kings or any other human authority could only declare or find the existing law, or modify abuses that had crept in, and not create law. Only gradually, during the later Middle Ages, did the conception of deliberate creation of new law—legislation as we know it—come to be accepted. In England, Parliament thus developed from what had been mainly a law-finding body to a law-creating one. It was finally in the dispute about the authority to legislate, in which the contending parties reproached each other for acting arbitrarily—acting, that is, not in accordance with recognized general laws—that the cause of individual freedom was inadvertently advanced. The new power of the highly organized national state which arose in the fifteenth and sixteenth centuries used legislation for the first time as an instrument of deliberate policy. For a while it seemed as if this new power would lead in England, as on the Continent, to absolute monarchy, which would destroy the medieval liberties. The conception of limited government which arose from the English struggle of the seventeenth century was thus a new departure, dealing with new problems. If earlier English doctrine and the great medieval documents, from Magna Carta, the great “Constitutio Libertatis,” downward, are significant in the development of the modern, it is because they served as weapons in that struggle.
Yet if for our purposes we need not dwell longer on the medieval doctrine, we must look somewhat closer at the classical inheritance which was revived at the beginning of the modern period. It is important, not only because of the great influence it exercised on the political thought of the seventeenth century, but also because of the direct significance that the experience of the ancients has for our time.
Though the influence of the classical tradition on the modern ideal of liberty is indisputable, its nature is often misunderstood. It has often been said that the ancients did not know liberty in the sense of individual liberty. This is true of many places and periods even in ancient Greece, but certainly not of Athens at the time of its greatness (or of late republican Rome); it may be true of the degenerate democracy of Plato’s time, but surely not of those Athenians to whom Pericles said that “the freedom which we enjoy in our government extends also to our ordinary life [where], far from exercising a jealous surveillance over each other, we do not feel called upon to be angry with our neighbour for doing what he likes” and whose soldiers, at the moment of supreme danger during the Sicilian expedition, were reminded by their general that, above all, they were fighting for a country in which they had “unfettered discretion to live as they pleased.” What were the main characteristics of that freedom of the “freest of free countries,” as Nicias called Athens on the same occasion, as seen both by the Greeks themselves and by Englishmen of the later Tudor and Stuart times?
The answer is suggested by a word which the Elizabethans borrowed from the Greeks but which has since gone out of use. “Isonomia” was imported into England from Italy at the end of the sixteenth century as a word meaning “equality of laws to all manner of persons”; shortly afterward it was freely used by the translator of Livy in the Englished form “Isonomy” to describe a state of equal laws for all and responsibility of the magistrates. It continued in use during the seventeenth century until “equality before the law,” “government of law,” or “rule of law” gradually displaced it.
The history of the concept in ancient Greece provides an interesting lesson because it probably represents the first instance of a cycle that civilizations seem to repeat. When it first appeared, it described a state which Solon had earlier established in Athens when he gave the people “equal laws for the noble and the base” and thereby gave them “not so much control of public policy as the certainty of being governed legally in accordance with known rules.” Isonomy was contrasted with the arbitrary rule of tyrants and became a familiar expression in popular drinking songs celebrating the assassination of one of these tyrants. The concept seems to be older than that of demokratia, and the demand for equal participation of all in the government appears to have been one of its consequences. To Herodotus it is still isonomy rather than democracy which is the “most beautiful of all names” of a political order. The term continued in use for some time after democracy had been achieved, at first in its justification and later, as has been said, increasingly in order to disguise the character it assumed; for democratic government soon came to disregard that very equality before the law from which it had derived its justification. The Greeks clearly understood that the two ideals, though related, were not the same: Thucydides speaks without hesitation about an “isonomic oligarchy,” and Plato even uses the term “isonomy” in deliberate contrast to democracy rather than in justification of it. By the end of the fourth century it had come to be necessary to emphasize that “in a democracy the laws should be masters.”
Against this background certain famous passages in Aristotle, though he no longer uses the term “isonomia,” appear as a vindication of that traditional ideal. In the Politics he stresses that “it is more proper that the law should govern than any of the citizens,” that the persons holding supreme power “should be appointed only guardians and servants of the law,” and that “he who would place supreme power in mind, would place it in God and the laws.” He condemns the kind of government in which “the people govern and not the law” and in which “everything is determined by majority vote and not by law.” Such a government is to him not that of a free state, “for, when government is not in the laws, then there is no free state, for the law ought to be supreme over all things.” A government that “centers all power in the votes of the people cannot, properly speaking, be a democracy: for their decrees cannot be general in their extent.” If we add to this the following passage in the Rhetoric, we have indeed a fairly complete statement of the ideal of government by law: “It is of great moment that well drawn laws should themselves define all the points they possibly can, and leave as few as possible to the decision of the judges [for] the decision of the lawgiver is not particular but prospective and general, whereas members of the assembly and the jury find it their duty to decide on definite cases brought before them.”
There is clear evidence that the modern use of the phrase “government by laws and not by men” derives directly from this statement of Aristotle. Thomas Hobbes believed that it was “just another error of Aristotle’s politics that in a well-ordered commonwealth not men should govern but the law,” whereupon James Harrington retorted that “the art whereby a civil society is instituted and preserved upon the foundations of common rights and interest . . . [is], to follow Aristotle and Livy, the empire of laws not of men.”
In the course of the seventeenth century the influence of Latin writers largely replaced the direct influence of the Greeks. We should therefore take a brief look at the tradition derived from the Roman Republic. The famous Laws of the Twelve Tables, reputedly drawn up in conscious imitation of Solon’s laws, form the foundation of its liberty. The first of the public laws in them provides that “no privileges, or statutes shall be enacted in favour of private persons, to the injury of others contrary to the law common to all citizens, and which individuals, no matter of what rank, have a right to make use of.” This was the basic conception under which there was gradually formed, by a process very similar to that by which the common law grew, the first fully developed system of private law - in spirit very different from the later Justinian code, which determined the legal thinking of the Continent.
This spirit of the laws of free Rome has been transmitted to us mainly in the works of the historians and orators of the period, who once more became influential during the Latin Renaissance of the seventeenth century. Livy - whose translator made people familiar with the term “Isonomia” (which Livy himself did not use) and who supplied Harrington with the distinction between the government of law and the government of men - Tacitus and, above all, Cicero became the chief authors through whom the classical tradition spread. Cicero indeed became the main authority for modern liberalism, and we owe to him many of the most effective formulations of freedom under the law. To him is due the conception of general rules or leges legum, which govern legislation, the conception that we obey the law in order to be free, and the conception that the judge ought to be merely the mouth through whom the law speaks. No other author shows more clearly that during the classical period of Roman law it was fully understood that there is no conflict between law and freedom and that freedom is dependent upon certain attributes of the law, its generality and certainty, and the restrictions it places on the discretion of authority.
This classical period was also a period of complete economic freedom, to which Rome largely owed its prosperity and power. From the second century A.D., however, state socialism advanced rapidly. In this development the freedom which equality before the law had created was progressively destroyed as demands for another kind of equality arose. During the later empire the strict law was weakened as, in the interest of a new social policy, the state increased its control over economic life. The outcome of this process, which culminated under Constantine, was, in the words of a distinguished student of Roman law, that “the absolute empire proclaimed together with the principle of equity the authority of the empirical will unfettered by the barrier of law.” Justinian with his learned professors brought this process to its conclusion. Thereafter, for a thousand years, the conception that legislation should serve to protect the freedom of the individual was lost. And when the art of legislation was rediscovered, it was the code of Justinian with its conception of a prince who stood above the law that served as the model on the Continent.
In England, however, the wide influence which the classical authors enjoyed during the reign of Elizabeth helped to prepare the way for a different development. Soon after her death the great struggle between king and Parliament began, from which emerged as a by-product the liberty of the individual. It is significant that the disputes began largely over issues of economic policy very similar to those which we again face today. To the nineteenth-century historian the measures of James I and Charles I which provoked the conflict might have seemed antiquated issues without topical interest. To us the problems caused by the attempts of the kings to set up industrial monopolies have a familiar ring: Charles I even attempted to nationalize the coal industry and was dissuaded from this only by being told that this might cause a rebellion.
Ever since a court had laid down in the famous Case of Monopolies that the grant of exclusive rights to produce any article was “against the common law and the liberty of the subject,” the demand for equal laws for all citizens became the main weapon of Parliament in its opposition to the king’s aims. Englishmen then understood better than they do today that the control of production always means the creation of privilege: that Peter is given permission to do what Paul is not allowed to do.
It was another kind of economic regulation, however, that occasioned the first great statement of the basic principle. The Petition of Grievances of 1610 was provoked by new regulations issued by the king for building in London and prohibiting the making of starch from wheat. This celebrated plea of the House of Commons states that, among all the traditional rights of British subjects, “there is none which they have accounted more dear and precious than this, to be guided and governed by the certain rule of law, which giveth to the head and the members that which of right belongeth to them, and not by any uncertain and arbitrary form of government…. Out of this root has grown the indubitable right of the people of this kingdom, not to be made subject to any punishment that shall extend to their lives, lands, bodies, or goods, other than such as are ordained by the common laws of this land, or the statutes made by their common consent in parliament”.
It was, finally, in the discussion occasioned by the Statute of Monopolies of 1624 that Sir Edward Coke, the great fountain of Whig principles, developed his interpretation of Magna Carta that became one of the cornerstones of the new doctrine. In the second part of his Institutes of the Laws of England, soon to be printed by order of the House of Commons, he not only contended (with reference to the Case of Monopolies) that “if a grant be made to any man, to have the sole making of cards, or the sole dealing with any other trade, that grant is against the liberty and freedom of the subject, that before did, or lawfully might have used that trade, and consequently against this great charter”; but he went beyond such opposition to the royal prerogative to warn Parliament itself “to leave all causes to be measured by the golden and straight mete-wand of the law, and not to the incertain and crooked cord of discretion”.
Out of the extensive and continuous discussion of these issues during the Civil War, there gradually emerged all the political ideals which were thenceforth to govern English political evolution. We cannot attempt here to trace their evolution in the debates and pamphlet literature of the period, whose extraordinary wealth of ideas has come to be seen only since their republication in recent times. We can only list the main ideas that appeared more and more frequently until, by the time of the Restoration, they had become part of an established tradition and, after the Glorious Revolution of 1688, part of the doctrine of the victorious party.
The great event that became for later generations the symbol of the permanent achievements of the Civil War was the abolition in 1641 of the prerogative courts and especially the Star Chamber which had become, in F. W. Maitland’s often quoted words, “a court of politicians enforcing a policy, not a court of judges administering the law.” At almost the same time an effort was made for the first time to secure the independence of the judges. In the debates of the following twenty years the central issue became increasingly the prevention of arbitrary action of government. Though the two meanings of “arbitrary” were long confused, it came to be recognized, as Parliament began to act as arbitrarily as the king, that whether or not an action was arbitrary depended not on the source of the authority but on whether it was in conformity with pre-existing general principles of law. The points most frequently emphasized were that there must be no punishment without a previously existing law providing for it, that all statutes should have only prospective and not retrospective operation, and that the discretion of all magistrates should be strictly circumscribed by law. Throughout, the governing idea was that the law should be king or, as one of the polemical tracts of the period expressed it, Lex, Rex.
Gradually, two crucial conceptions emerged as to how these basic ideals should be safeguarded: the idea of a written constitution, and the principle of the separation of powers. When in January, 1660, just before the Restoration, a last attempt was made in the “Declaration of Parliament Assembled at Westminster” to state in a formal document the essential principles of a constitution, this striking passage was included: “There being nothing more essential to the freedom of a state, than that the people should be governed by the laws, and that justice be administered by such only as are accountable for maladministration, it is hereby further declared that all proceedings touching the lives, liberties and estates of all the free people of this commonwealth, shall be according to the laws of the land, and that the Parliament will not meddle with ordinary administration, or the executive part of the law: it being the principle [sic] part of this, as it hath been of all former Parliaments, to provide for the freedom of the people against arbitrariness in government.” If thereafter the principle of the separation of powers was perhaps not quite “an accepted principle of constitutional law,” it at least remained part of the governing political doctrine.
All these ideas were to exercise a decisive influence during the next hundred years, not only in England but also in America and on the Continent, in the summarized form they were given after the final expulsion of the Stuarts in 1688. Though at the time some other works were equally and perhaps even more influential, John Locke’s Second Treatise on Civil Government is so outstanding in its lasting effects that we must confine our attention to it.
Locke’s work has come to be known mainly as a comprehensive philosophical justification of the Glorious Revolution, and it is mostly in his wider speculations about the philosophical foundations of government that his original contribution lies. Opinions may differ about their value. The aspect of his work which was at least as important at the time and which mainly concerns us here, however, is his codification of the victorious political doctrine, of the practical principles which, it was agreed, should thenceforth control the powers of governments.
While in his philosophical discussion Locke’s concern is with the source which makes power legitimate and with the aim of government in general, the practical problem with which he is concerned is how power, whoever exercises it, can be prevented from becoming arbitrary: “Freedom of men under government is to have a standing rule to live by, common to every one of that society, and made by the legislative power erected in it; a liberty to follow my own will in all things, where that rule prescribes not: and not to be subject to the inconstant, uncertain, arbitrary will of another man.” It is against the “irregular and uncertain exercise of the power” that the argument is mainly directed: the important point is that “whoever has the legislative or supreme power of any commonwealth is bound to govern by established standing laws promulgated and known to the people, and not by extemporary decrees; by indifferent and upright judges, who are to decide controversies by those laws; and to employ the forces of the community at home only in the execution of such laws.” Even the legislature has no “absolute arbitrary power,” “cannot assume to itself a power to rule by extemporary arbitrary decrees, but is bound to dispense justice, and decide the rights of the subject by promulgated standing laws, and known authorized judges,” while the “supreme executor of the law … has no will, no power, but that of the law”. Locke is loath to recognize any sovereign power, and the Treatise has been described as an assault upon the very idea of sovereignty. The main practical safeguard against the abuse of authority proposed by him is the separation of powers, which he expounds somewhat less clearly and in a less familiar form than did some of his predecessors. His main concern is how to limit the discretion of “him that has the executive power,” but he has no special safeguards to offer.
Yet his ultimate aim throughout is what today is often called the “taming of power”: the end why men “choose and authorize a legislative is that there may be laws made, and rules set, as guards and fences to the properties of all the members of society, to limit the power and moderate the dominion of every part and member of that society.”
It is a long way from the acceptance of an ideal by public opinion to its full realization in policy; and perhaps the ideal of the rule of law had not yet been completely put into practice when the process was reversed two hundred years later. At any rate, the main period of consolidation, during which it progressively penetrated everyday practice, was the first half of the eighteenth century. From the final confirmation of the independence of judges in the Act of Settlement of 1701, through the occasion in 1706 when the last bill of attainder ever passed by Parliament led not only to a final restatement of all the arguments against such arbitrary action of the legislature but also to a reaffirmation of the principle of the separation of powers, the period is one of slow but steady extension of most of the principles for which the Englishmen of the seventeenth century had fought.
A few significant events of the period may be briefly mentioned, such as the occasion when a member of the House of Commons (at a time when Dr. Johnson was reporting the debates) restated the basic doctrine of nulla poena sine lege, which even now is sometimes alleged not to be part of English law: “That where there is no law there is no transgression, is a maxim not only established by universal consent, but in itself evident and undeniable; and it is, Sir, surely no less certain that where there is no transgression there can be no punishment.” Another is the occasion when Lord Camden in the Wilkes case made it clear that courts are concerned only with general rules and not with the particular aims of government or, as his position is sometimes interpreted, that public policy is not an argument in a court of law. In other respects progress was slower, and it is probably true that, from the point of view of the poorest, the ideal of equality before the law long remained a somewhat doubtful fact. But if the process of reforming the laws in the spirit of those ideals was slow, the principles themselves ceased to be a matter of dispute: they were no longer a party view but had come to be fully accepted by the Tories. In some respects, however, evolution moved away rather than toward the ideal. The principle of the separation of powers in particular, though regarded throughout the century as the most distinctive feature of the British constitution, became less and less a fact as modern cabinet government developed. And Parliament with its claim to unlimited power was soon to depart from yet another of the principles.
The second half of the eighteenth century produced the coherent expositions of the ideals which largely determined the climate of opinion for the next hundred years. As is so often the case, it was less the systematic expositions by political philosophers and lawyers than the interpretations of events by the historians that carried these ideas to the public. The most influential among them was David Hume, who in his works again and again stressed the crucial points and of whom it has justly been said that for him the real meaning of the history of England was the evolution from a “government of will to a government of law.” At least one characteristic passage from his History of England deserves to be quoted. With reference to the abolition of the Star Chamber he writes: “No government, at that time, appeared in the world, nor is perhaps to be found in the records of any history, which subsisted without the mixture of some arbitrary authority, committed to some magistrate; and it might reasonably, beforehand, appear doubtful, whether human society could ever arrive at that state of perfection, as to support itself with no other control, than the general and rigid maxims of law and equity. But the parliament justly thought, that the King was too eminent a magistrate to be trusted with discretionary power, which he might so easily turn to the destruction of liberty. And in the event it has been found, that, though some inconveniencies arise from the maxim of adhering strictly to law, yet the advantages so much overbalance them, as should render the English forever grateful to the memory of their ancestors, who, after repeated contests, at last established that noble principle.”
Later in the century these ideals are more often taken for granted than explicitly stated, and the modern reader has to infer them when he wants to understand what men like Adam Smith and his contemporaries meant by “liberty.” Only occasionally, as in Blackstone’s Commentaries, do we find endeavors to elaborate particular points, such as the significance of the independence of the judges and of the separation of powers, or to clarify the meaning of “law” by its definition as “a rule, not a transient sudden order from a superior or concerning a particular person; but something permanent, uniform and universal.”
Many of the best-known expressions of those ideals are, of course, to be found in the familiar passages of Edmund Burke. But probably the fullest statement of the doctrine of the rule of law occurs in the work of William Paley, the “great codifier of thought in an age of codification.” It deserves quoting at some length: “The first maxim of a free state,” he writes, “is, that the laws be made by one set of men, and administered by another; in other words, that the legislative and the judicial character be kept separate. When these offices are united in the same person or assembly, particular laws are made for particular cases, springing oftentimes from partial motives, and directed to private ends: whilst they are kept separate, general laws are made by one body of men, without foreseeing whom they may affect; and, when made, must be applied by the other, let them affect whom they will…. When the parties and interests to be affected by the laws were known, the inclination of the law makers would inevitably attach to one side or the other; and where there were neither any fixed rules to regulate their determinations, nor any superior power to control their proceedings, these inclinations would interfere with the integrity of public justice. The consequence of which must be, that the subjects of such a constitution would live either without constant laws, that is, without any known preestablished rules of adjudication whatever; or under laws made for particular persons, and partaking of the contradictions and iniquity of the motives to which they owed their origin.”
“Which dangers, by the division of the legislative and judicial functions, are in this country effectually provided against. Parliament knows not the individuals upon whom its acts will operate; it has no case or parties before it; no private designs to serve: consequently, its resolutions will be suggested by the considerations of universal effects and tendencies, which always produce impartial, and commonly advantageous regulations.”
With the end of the eighteenth century, England’s major contributions to the development of the principles of freedom come to a close. Though Macaulay did once more for the nineteenth century what Hume had done for the eighteenth and though the Whig intelligentsia of the Edinburgh Review and economists in the Smithian tradition, like J. R. McCulloch and N. W. Senior, continued to think of liberty in classical terms, there was little further development. The new liberalism that gradually displaced Whiggism came more and more under the influence of the rationalist tendencies of the philosophical radicals and the French tradition. Bentham and his Utilitarians did much to destroy the beliefs which England had in part preserved from the Middle Ages, by their scornful treatment of most of what until then had been the most admired features of the British constitution. And they introduced into Britain what had so far been entirely absent - the desire to remake the whole of her law and institutions on rational principles.
The lack of understanding of the traditional principles of English liberty on the part of the men guided by the ideals of the French Revolution is clearly illustrated by one of the early apostles of that revolution in England, Dr. Richard Price. As early as 1778 he argued: “Liberty is too imperfectly defined when it is said to be ‘a Government of LAWS and not by MEN.’ If the laws are made by one man, or a junto of men in a state, and not by common CONSENT, a government by them is not different from slavery.” Eight years later he was able to display a commendatory letter from Turgot: “How comes it that you are almost the first of the writers of your country, who has given a just idea of liberty, and shown the falsity of the notion so frequently repeated by almost all Republican Writers, ‘that liberty consists in being subject only to the laws’?” From then onward, the essentially French concept of political liberty was indeed progressively to displace the English ideal of individual liberty, until it could be said that “in Great Britain, which, little more than a century ago, repudiated the ideas on which the French Revolution was based, and led the resistance to Napoleon, those ideas have triumphed.” Though in Britain most of the achievements of the seventeenth century were preserved beyond the nineteenth, we must look elsewhere for the further development of the ideals underlying them.
F. A. Hayek, The Constitution of Liberty, University of Chicago Press, 1960, pp. 162-175.
Nowhere has the outlook of the left entered more firmly into the national culture than in France, the motherland of revolution. Whatever power has reigned in the skies of politics, French intellectual life has tended to adopt the ways and manners of the Jacobins. Even the exceptions – Chateaubriand, de Maistre, de Tocqueville, Maurras – have focused their attention on the standard of revolution, hoping to glimpse some strategy that would fortify their restorationist designs. And every movement away from the left – Ultramontanism, Action Française, Nouvelle Droite – has felt called upon to match the theoretical absolutism of its opponents. It has taken up the socialist challenge to present a rival system, a rival intellectual machine, with which to generate answers to all the problems of modern man.
No doubt this desire for system, and for universalist answers, shares some of the character of Roman Catholicism. But far more important in the thinking of the left has been the Enlightenment rationalism, which seeks to penetrate through human subterfuge, and to display the hidden core of unreason that lies within our acts. The modern gauchiste shares the rationalist’s suspicion of human institutions, and his contempt for superstition. But he is distinguished by a boundless cynicism. He no longer believes that the process of ‘discovery’ – whereby the ploys of unreason are exposed – will present the opportunity for some new and ‘rational’ alternative. The Reason of the Jacobins is also an illusion, and the only advice that the gauchiste is disposed, in the end, to offer, is the advice given by Genet and Sartre: be true to nothing, so as to be true to yourself. There are no solutions, only problems, and our duty is to ensure that we are not deceived.
In the ensuing quest for authenticity the gauchiste has a permanent need for an enemy. His system is one of destruction. He knows the illusoriness of values, and finds his identity in a life lived without the easy deceptions which rule the lives of others. Since he has no values, his thought and action can be given only a negative guarantee. He must fortify himself by unmasking the deceptions of others. Moreover, this unmasking cannot be done once and for all.
It must be perpetually renewed, so as to fill the moral vacuum which lies at the centre of existence. Only if there is some readily identifiable and, so to speak, renewable opponent, can this struggle for authenticity – which is in fact the most acute struggle for existence – be sustained. The enemy must be a fount of humbug and deception; he must also possess elaborate and secret power, power sustained through the very system of lies which underscores his values. Such an enemy deserves unmasking, and there is a kind of heroic virtue in his assailant, who frees the world from the stranglehold of secret influence.
It is to the French aristocracy that we owe the contemptuous label by which this enemy is known. The renewable opponent is the ‘bourgeois’: the pillar of the community, whose hypocritical respectability and social incompetence have inspired every variety of renewable contempt. Of course this creature has undergone considerable transformation since Molière first ridiculed his social pretensions. During the nineteenth century he acquired a complex dual character. Marx represented him as the principal agent and principal beneficiary of the French Revolution – the new enslaver, whose tentacles reach into every pocket of influence and power – while the cafe intellectuals continued, in more bitter accents, the scathing mockery of the aristocracy. Epater les bourgeois became the signature of the disaffected artist, the guarantee of his social credentials, whereby he demonstrated his aristocratic entitlement, and his contempt for the usurpatory dominion of the rising middle class. Under the dual influence of Marx and Flaubert, the bourgeois emerged from the nineteenth century as a monster transformed out of all recognition from his humble origins. He was the ‘class enemy’ of Leninist dogma, the creature whom we are commanded by history to destroy; he was also the repository of all morality, all convention, all codes of conduct that might hamper the freedom and crush the ebullience of la vie boheme. The Marxian theory of ideology tried to knit the two halves of the portrait together, describing the ‘comfortable’ values as the social disguise of real economic power. But the theory was vague and schematic, lacking the concrete quality which is necessary for a rewarding and renewable contempt. Much of the effort of the French left in our century has therefore been devoted to completing the portrait. The aim has been to create the perfect enemy: the object against which to define and sharpen one’s authenticity, an authenticity guaranteed by its transformation into wit.
The invention of the ideal bourgeois was finally accomplished in 1952, with the publication of the masterpiece of modern satanism, Sartre’s Saint Genet, in which the ‘bourgeois’ is characterised by an extraordinary complexity of emotions, ranging from rooted heterosexuality to a hostility to crime. The bourgeois finally emerges as the champion of an illusory ‘normality’, concerned to forbid and to oppress all those who, in challenging his normality, challenge also the social and political dominion which it both validates and conceals.
The anti-bourgeois sentiment which lies at the root of French left-wing thinking explains its rejection of all roles and functions that are not creations of its own. Its main power base has been, not the university, but the cafe: to occupy positions of influence within the ‘structures’ of the bourgeois state is incompatible with the demands of revolutionary rectitude. Whatever influence the gauchiste enjoys must be acquired through his own intellectual labour, in producing words and images which challenge the status quo. The cafe becomes the symbol of his social position. He observes the passing show, but does not join it. Instead, he waits for those who, attracted by his gaze, separate themselves from the crowd and ‘come over’ to his position.
By the same token we must recognise the intimate dependency that exists between the gauchiste and the true middle class. In a certain manner, the gauchiste is the confessor of the middle class. He presents to it an idealised image of its sinful condition. The ‘bourgeois’ of recent iconography is a myth. But he bears a resemblance to the ordinary city-dweller who, seeing himself distorted in this portrait, is troubled by the thought of moral possibilities. He enthusiastically confesses to purely hypothetical crimes. He begins to extol the gauchiste as the absolver of his corrupted conscience. The gauchiste therefore becomes the redeemer of the class whose illusions he has been appointed to unmask. Hence, despite his rudeness – which is, in truth, no more than the necessary virtue of his profession – he enjoys abundant social privilege. He is borne aloft on the shoulders of the bourgeoisie whose habits he tramples, and enjoys again the aristocrat’s place in the sun. At the best Parisian parties he will appear in person: but even the meanest reception will take place against bookshelves loaded with his writings. So close, indeed, is this symbiotic relation between the gauchiste and his victim as to resemble that previous, seemingly indissoluble, bond between aristocrat and peasant. The major difference is this: the aristocrat both exalted the peasant in his words (creating the idealised ‘shepherd’, the spectacle of whose virtues would refresh the wearied courtier), and at the same time abused and oppressed him in his actions. The gauchiste judiciously reverses the priorities: he does no more than bark at the hand which feeds him. In this he shows, indeed, a greater wisdom, and a healthy instinct for survival. I have singled out Michel Foucault, the social philosopher and historian of ideas, as representative of the French intellectual left.
It must be pointed out at once that Foucault’s position has been constantly shifting, and that he shows a sophisticated contempt for all convenient labels. He is also a critic (although, until his last years, a fairly muted critic) of modern communism. Nevertheless, Foucault is the most powerful and most ambitious of those who aim to ‘unmask’ the bourgeoisie, and the position of the left has been substantially reinforced by his writings. It is impossible to do full justice here to his achievement. His imagination and intellectual fluency have generated abundant theories, concepts and aperçus, and the compendious, synthesising poetry of his style is nothing if not disarming. Foucault is unable to encounter opposition without at once rising, under the impulse of his intellectual energy, to the superior ‘theoretical’ perspective, from which opposition is seen in terms of the interests which are advanced by it. Opposition relativised is also opposition discounted. It is not what you say, but that you say it, which awakens Foucault’s criticism. ‘D’où parles-tu?’ is his question, and his stance remains outside the reach of every answer.
The unifying thread in Foucault’s work is the search for the secret structures of power. Power is what he wishes to unmask behind every practice, behind every institution, and behind language itself. He originally described his method as an ‘archaeology of knowledge’, and his subject-matter as truth – truth considered as the product of ‘discourse’, taking both form and content from the language in which it is conveyed. A problem of terminology immediately arises, and proves to be something more than a problem of terminology. What is meant by a ‘knowledge’ that can be overthrown by new experience, or by a ‘truth’ that exists only within the discourse which frames it? The language here is Hegelian, and the method implied is that of idealism. Foucault’s ‘truth’ is created and re-created by the experiences through which we ‘know’ it. Like Hegel, therefore, Foucault is able to derive some surprising and even disquieting results from a historical method which dramatises change as a mute obedience to a changing consciousness.
Thus in Les mots et les choses (1966) we are told that man is a recent invention: truly an original idea! On inspection it turns out that Foucault means no more than this: that it is only since the Renaissance that the fact of being a man (rather than, say, a farmer, a soldier or a nobleman) has acquired the special significance that we now attribute to it. By such arguments we could show that the dinosaur too is a recent invention. Of course, there is a point to Foucault’s remark. He means to emphasise the extent to which the sciences which have taken man as their object are recent inventions, already outmoded as forms of ‘knowledge’. The idea of man is as fragile and transitory as any other idea in the history of human understanding, and must give way under the impulse of a new episteme (structure of ‘knowledge’) to something which we cannot name. Each episteme, for Foucault, is the servant of some rising power, and has had, as its principal function, the creation of a ‘truth’ which serves the interest of power. Thus there are no received truths which are not also convenient truths.
There are many insights in Foucault’s early writings. But the Hegelian method – which identifies reality with a way of apprehending it – must lead us to doubt that they are hard-won. There is a cheat involved in this method, which allows its proponent to jump across to the finishing line of historical enquiry, without running the hard track of empirical analysis. (Consider what would really have to be proved, by someone who believed man to be an artefact, and a recent one at that – more recent even than the medieval and Renaissance humanists who extolled his virtues.) A proper assessment of Foucault’s thought must therefore try to separate its two components: the Hegelian sleight of hand (which would lead us too simply to dismiss him), and the ‘diagnostic’ analysis of the secret ways of power. It is the second which is interesting, and which is expressed in Foucault’s claim that each successive form of ‘knowledge’ is devoted to the creation of a discourse favourable to, and symbolic of, the structures of prevailing power.
In Histoire de la folie à l’âge classique (1961) Foucault gives the first glimpse of this thesis. He traces the confinement of madmen to its origins in the seventeenth century, associating this confinement with the ethic of work and the rise of the middle classes. Foucault’s idealism – his impatience with explanations that are merely causal – leads him constantly to thicken his plot. Thus he says, not that the economic reorganisation of urban society brought about confinement, but that ‘it was in a certain experience of labour that the indissolubly economic and moral demand for confinement was formulated’. But this should be seen largely as an embellishment to categories of historical explanation which derive ultimately from Marx.
The madman is ‘other’ in what Foucault calls the ‘classical’ age because he points to the limits of the prevailing ethic, and alienates himself from its demands. There is a kind of virtuous disdain in his refusal of convention. He must therefore be brought to order. Through confinement madness is subject to the rule of reason: the madman now lives under the jurisdiction of those who are sane, confined by their laws, and instructed by their morality. The recourse of reason in this close encounter is to reveal to madness its own ‘truth’ – the truth through which reason ‘knows’ it. To lack reason is, for ‘classical’ thought, to be an animal. The madman must therefore be made to act the part of an animal. He is used as a beast of burden, and by this confrontation with his own ‘truth’ is finally made whole. Each successive age finds a similar ‘truth’ through which the experience of madness is transcended into sanity (i.e. into that condition which is condoned and fostered by prevailing power). But Foucault suggests that this stock of ‘truths’ is now exhausted. The book ends with a satanistic encomium of madness, in which Foucault appeals to the gods of the modern French Olympus – Goya, de Sade, Hölderlin, Nerval, Van Gogh, Artaud, Nietzsche – to testify to this exhaustion. This encomium gains no substance from the studies that precede it, and consists largely in the ritual rehearsal of what has become, in France, a critical commonplace. Thus, although it is impossible for a sane reader to detect literary merit in de Sade, for example, it is liturgically necessary to sing his praises, as le plus gros des épateurs. A second-rate hack thereby becomes the literary representative of post-Revolutionary France.
It was clear to the eighteenth century, according to Foucault, that, while madness was able to express itself, it had no language in which to do so besides that which reason provides. The only phenomenology of madness lies in sanity. Surely then, the eighteenth century had at least one sound intuition about the nature of unreason? The province of language and the province of reason are coextensive, and if madness contains its own ‘truths’, as Foucault claims, these are essentially inexpressible. How then can we rightly imagine a ‘language’ of unreason in which the truths of madness are expressed, and to which we must now attune our ears? The idea of such a language is the idea of a delirious monologue, which neither the man of reason, nor the madman himself, could understand. The voice of madness is a voice that belongs to no-one, since it violates the grammar of the self. It could bear no resemblance to the remorseless logic of The Twilight of the Idols, or to the precise symbolism of Les Chimères. Foucault’s heroes would have been unable to use this language, even in their final dissolution, and if we can understand them it is without its aid.
For the nineteenth century, according to Foucault, the experience of ‘unreason’ characteristic of the ‘classical’ period becomes dissociated: madness is confined within a moral intuition, and the fantasy of an unceasing monologue of madness, in a language inaccessible to reason, is forgotten. This idea is to be resuscitated, however, at the beginning of the twentieth century, in the Freudian theory of the unconscious thought-processes that determine the behaviour of the irrational man. In the nineteenth century, madness has become a threat to the whole structure of bourgeois life, and the madman, while superficially innocent, is profoundly guilty in his failure to submit to familiar norms. The greatest offence of madness is against the ‘bourgeois family’, as Foucault calls it, and it is the experience of this family that dictates the paternalistic structure of the asylum. The ethos of judgement and reprobation in the asylum is part of a new attitude to madness – madness is at last observed. It is no longer thought that the madman has anything to say or symbolise; he is an anomaly in the world of action, responsible only for his visible behaviour.
In the asylum the man of reason is presented as an adult, the madman as a child, so that madness may be construed as an incessant attack against the Father. The madman must be brought to recognise his error, and reveal to the Father a consciousness of his guilt. Thus there is a natural transition from the ‘confession in crisis’ characteristic of the asylum, to the Freudian dialogue, in which the analyst listens to and translates the language of unreason, but in which madness is still forced to see itself as a disobedience and a transgression. Finally, Foucault intimates, it is because psychoanalysis has refused to suppress the family structure as the only one through which madness can be seen or known, that its introduction of a dialogue with madness leads to no understanding of its interlocutor.
Beneath all this fascinating analysis – part insight, part rhetoric – it is possible to discern a persistent and discredited historical perspective. Despite his apparent scholarship, Foucault remains wedded to the mythopoeic guide to modern history presented in The Communist Manifesto. The world divides conveniently into the ‘classical’ and the ‘bourgeois’ eras, the first beginning at the late Renaissance and ending with the ‘bourgeois revolution’ of 1789. It is only thereafter that we witness the characteristic features of modern life: the nuclear family, transferable property, the legally constituted state, and the modern structures of influence and power. Engels made an heroic attempt to give credence to the ‘bourgeois family’, and this has proved useful to left-wing demonology. But Engels’s icon is now threadbare and faded, and only marginally more persuasive than the idea that the French Revolution involved a transition from feudal to capitalist modes of production, from an ‘aristocratic’ to a ‘bourgeois’ social structure, from entailed to transferable property. Less persuasive still is the idea that the ‘classical’ outlook of Racine and La Fontaine is the principal index of post-Renaissance, pre-Revolutionary culture in France. All this is based on an elaborate and, to tell the truth, culpable simplification of historical data, the prime aim of which is not truth but propaganda. Foucault’s rhetoric is supposed to mesmerise us into a sense of some intrinsic connection between ‘bourgeois’, ‘family’, ‘paternalistic’ and ‘authoritarian’. Historical facts – such as that the peasant family is more authoritarian, the aristocratic family more paternalistic, than the family known as ‘bourgeois’; or that the middle class shows an ability to relax the temper of domestic life which has seldom been matched at the upper or lower ends of the social spectrum – all such facts are kept out of mind.
The reader finds no argument over evidence, no search for instance or counter-instance, which could sow the seeds of doubt. For facts have an abrasive quality. They blur the figures and erase the lineaments of the necessary icon. When the image fades, so too does the idea: we can no longer believe that the secret power which created the categories of mental illness, which confined the innocent sufferer, and which moralised him into ‘abnormality’, also generated the family and its egregious norms. Far less can we believe that the nature of this power is summarised in the single word ‘bourgeois’, although doubtless that word has liturgical value, as designating the object of acceptable contempt.
The schematic historiography survives in Foucault’s later works. In particular, he makes abundant use of the concept of a ‘classical’ époque. But the enemy who stalks through his pages seems somehow to have lost his respectable clothing. He appears as naked power, without style, dignity or status. If the term ‘bourgeois’ is sometimes applied to him it is a flourish, like an insult thrown by the wrestler to his opponent. There is no longer the same liberating confidence in the enemy’s identity. Nevertheless, the method and the results remain, and each of Foucault’s books repeats the hidden agenda of his Histoire de la folie.
In Naissance de la clinique: une archéologie du regard médical (1963), Foucault extends the ideas of ‘observation’ and ‘normality’, so as to explain, not only the confinement of madmen, but also the confinement of the sick. (He will shortly extend the analysis further, to prisons and punishment. If he stops short of schools and universities it is not for want of conviction.) That patients should be gathered together for observation shows a need to divide the world into the normal and the abnormal, and to confront the abnormal with an image of its ‘truth’. The need is also for a classification of illness, a ‘measured language’ which places each disease within the competence of the observer. Now there is truth in those ideas: who would deny that the growing understanding of disease implied isolation, observation and selective treatment? But what a simple truth, and what an innocent occurrence! Clearly it needs unmasking. So here, in characteristic language, is what the hospital – surely one of the more benign of human accomplishments – becomes:
Over all these endeavours on the part of clinical thought to define its methods and scientific norms hovers the great myth of a pure Gaze that would be pure Language: a speaking eye. It would scan the entire hospital field, taking in and gathering together each of the singular events that occurred within it; and as it saw, as it saw ever more clearly, it would turn into speech that states and teaches; the truth, which events, in their repetitions and convergences, would outline under its gaze, would, by this same gaze and in the same order, be reserved, in the form of teaching, to those who do not know and have not yet seen. This speaking eye would be the servant of things and the master of truth.
There is an accomplished rhetoric here, a rhythmic movement which, feeding on the simple fact of scientific observation, becomes a haunting and persecuted awareness of the hidden source of power. Behind this concept of the Gaze (a Sartrean term more familiar, perhaps, to French than to English readers) lurks a great suspicion, the same suspicion of human decencies that inhabits the pages of Being and Nothingness. It tells us not to be deceived, not to believe that anything is undertaken, or anything achieved, except in the interests of power.
The idea takes a further step in Foucault’s most brilliant book Surveiller et punir, subtitled ‘the birth of the prison’. (The surveiller of the title is hard to translate: it refers, once again, to the Gaze of the guardians.) It is natural that the near-simultaneous rise of the prison system, the hospital, and the lunatic asylum will not go unnoticed by the suspicious iconographer of bourgeois man. And there is something persuasive in Foucault’s initial analysis of the transition from the exemplary punishments of our ancestors to the system of physical confinement. To call the first ‘classical’, the second ‘bourgeois’ is of little interest. But it is surely illuminating to see the earlier system as embodying a kind of corporal language of crime. The aim of torture was to imprint the crime on the patient’s body, in the living language of pain, so as to symbolise the criminal’s intention. With this Foucault contrasts the prison system, which, he argues, was founded in a juridical conception of individual rights, under which punishment has the character of a forfeit. The contracting individualist can be legitimately made to suffer in no other way. And, as Foucault elegantly remarks, even capital punishment under the new regime of prison has a juridical character:
The guillotine takes life almost without touching the body, just as prison deprives of liberty or a fine reduces wealth. It is intended to apply the law not so much to a real body capable of feeling pain as to a juridical subject, the possessor, among other rights, of the right to exist. It had to have the abstraction of the law itself.
Foucault proceeds to draw the usual abundance of surprising and not so surprising conclusions. It is surprising to be told that punishment is an element in the genealogy of the human soul, so that the Cartesian ego is precisely what is conjured on the rack: the gazing subject who exists as the observer of this pain. It is surprising to learn that the modern soul is a product, if not of the prison system, at least of the juridical idea of the subject, as a complex of legal rights.
It is less surprising to be told that criminal justice operates in the ‘production of truth’, and that it is part of one of those systems of ‘knowledge’ which, for Foucault, go hand in hand with power. Nor is it surprising to find that punishment undergoes the same transition as medicine, from a system of symbolism to a system of surveillance. In an impressive description of Bentham’s ‘panopticon’ (a machine à corriger, in which all prisoners could be observed from a single post), Foucault relates the discipline of prison to the newly emerging power of the invisible over the visible, which is, if I understand him, the power expressed in law. The law is the invisible possessor of that ‘normalising gaze’ which both singles out the criminal as an abnormal specimen, and also deprives him of his rights until such a time as he should once again be able to take up the burden of normality.
There then occurs one of those forced, marxisant, explanations which mar the poetry of Foucault’s far from unimaginative writing. We are told that the prison disciplines exhibit a ‘tactics of power’, with three fundamental purposes: to exert power at lowest cost, to extend power as far and as deeply as possible, and ‘to link this “economic” growth of power with the output of the apparatuses (educational, military, industrial or medical), within which it is exercised’. All of which is meant to suggest a connection between prison and the ‘economic take-off of the West’, which ‘began with the techniques that made possible the accumulation of capital’. Such impulsive observations are produced not by scholarship, but by the association of ideas, the principal idea being the historical morphology of the Communist Manifesto. And if we are asked why that discredited (and somewhat adolescent) morphology is still accepted by so sophisticated a modern thinker, the answer is to be found, I believe, in its providing the preliminary sketches for the portrait of the enemy. It inspires such passages as the following:
Is it surprising that the cellular prison, with its regular chronologies, forced labour, its authorities of surveillance and registration, its experts in normality, who continue to multiply the functions of the judge, should have become the modern instrument of penalty? Is it surprising that prisons resemble factories, schools, barracks, hospitals, which all resemble prisons?
No, it is not surprising. For if we unmask human institutions far enough, we will always find that hidden core of power by which Foucault is outraged and fascinated. The only question is whether this unmasking reveals the truth about its subject, or whether it is not, on the contrary, a new and sophisticated form of lying. We must ask ourselves whether the idealist who observes ‘at the very centre of the carceral city, the formation of the insidious leniencies, unavowable petty cruelties, small acts of cunning, calculated methods, techniques, “sciences” that permit the fabrication of the disciplinary individual’ – whether such an observer is not in fact also the inventor of what he observes.
But it is not easy to unmask this observer. That his writings exhibit mythomania, and even paranoia, is, I believe, indisputable. But that they systematically falsify and propagandise what they describe is more difficult to establish. A writer who can glibly declare that ‘the bourgeoisie could not care less about delinquents, about their punishment and rehabilitation, which economically have little importance’; that ‘the bourgeoisie is perfectly well aware that a new constitution or legislature will not suffice to establish its hegemony’; that ‘ . . . “dangerous” people had to be isolated (in prison, in the Hospital General, in the galleys, in the colonies) so that they could not act as a spearhead for popular resistance’ – such a writer is clearly more concerned with rhetorical impact than with historical accuracy. However, I believe that it would be a mistake to dismiss Foucault on the evidence of such pronouncements. As I have argued, we must separate Foucault’s analysis of the workings of power from the facile idealism which opens such easy paths to theory. And paranoia is no more than a localised idealism – a specific and focused manifestation of the desire that reality be subservient to thought, that the other have an identity entirely determined by one’s own response to him. What is important is, not the disposition to find, in human thought and action, the smiling masks of persecution, but rather the idea that, by unmasking them as forms of power, we come closer to an understanding of their nature. It is precisely this which I doubt.
In a pair of lectures delivered in 1976, Foucault deliberates over what he means by ‘power’, and distinguishes two approaches: the Reichian (which argues that ‘the mechanisms of power are those of repression’), and the Nietzschean, which holds that the ‘basis of the relationship of power lies in the hostile engagement of forces’. In an obscure and muddled account of this distinction, Foucault aligns himself with the second approach, and he tries to show (in L’Histoire de la sexualité, vol. 1, 1976) how this conception of power enables us to see even sexual relations as instances of the ‘hostile engagement of forces’. But it is significant that Foucault offers no real explanation of what he means by ‘power’. The ‘Reichian’ and the ‘Nietzschean’ approaches are entirely compatible, and both are explained in terms – ‘repression’, ‘force’ – which are at least as obscure as the ‘power’ which they are supposed to illuminate.
The problem becomes more and more acute. We are repeatedly told that Foucault is concerned with power in its ‘capillary’ form, the form which ‘reaches into the very grain of individuals’. But we are never told who or what is active in this ‘power’: or rather, we are told, but in terms that carry no conviction. In an interview, Foucault admits that, for him, ‘power is coextensive with the social body’. And it is, of course, indisputable that social order, like every order, embodies power. A society, like an organism, can sustain itself only by constant interaction among its parts. And all interaction is an exercise of power: the power of a cause to produce its effect. But that is merely trivial. What is not trivial is the entirely unwarranted and ideologically inspired idea of dominance with which Foucault glosses his conclusions. He at once assumes that if there is power, then it is exercised in the interests of some dominant agent. Hence, by sleight of hand, he is able to present any feature of social order – even the disposition to heal the sick – as a covert exercise of dominion, which aims to further the interests of ‘those in power’. Foucault writes: ‘I believe that anything can be deduced from the general phenomenon of the domination of the bourgeois class’. It would be truer to say that he believes that the general thesis of the domination of the bourgeois class can be deduced from anything. For, having decided, along with the Communist Manifesto, that the bourgeois class has been dominant since the summer of 1789, Foucault deduces that all power subsequently embodied in the social order has been exercised by that class, and in its interests. Any fact of social order will necessarily bear the fingerprints of bourgeois domination. The triviality of the argument needs no comment; what is astounding is the philosophical naivety from which it stems.
As an instance of an old Marxian confusion (the confusion which identifies a class as the product of power, and then power as the pursuit of a class), Foucault’s analysis may be left to one side. But it is necessary to remind ourselves of its important political consequences. In a remarkable discussion with a group of 1968 Maoists, Foucault draws some of the political morals from his analysis of law, as yet another ‘capillary’ mode of power, yet another way of ‘introducing contradictions among the masses’. The revolution, he assures us, ‘can only take place via the radical elimination of the judicial apparatus, and anything which could reintroduce the penal apparatus, anything which could reintroduce its ideology and enable this ideology surreptitiously to creep back into popular practices, must be banished’. He recommends the banishment of adjudication, and every form of court, and gestures, in the negative manner characteristic of Utopian thinking, towards a new form of ‘proletarian’ justice, which will not require the services of a judge. With characteristic effrontery, he tells us that the French Revolution was a ‘rebellion against the judiciary’: and such, he implies, is the nature of every honest revolution.
But what does this mean in practice? It means that there shall be no third party present at the trial of the accused, no-one with the responsibility to sift the evidence, no-one to mediate between the parties, no-one to look impartially on the facts, or on the consequences of judgement. It means that the criminality of the act will be as unpredictable as the penalty which it incurs, for no law could exist which would determine the outcome. It means that all ‘justice’ will be reduced to a ‘struggle’ between opposing factions, a trial by ordeal in which, presumably, he who speaks with the voice of the proletariat will take the prize. And in order to prove that he speaks with the voice of the proletariat, the victor need do one thing, and one thing only: overcome his opponent. Having done so he will call himself judge, and sanctify his action with the ideology of ‘proletarian justice’. And we know how this proletarian judge will then comport himself. In short, it is only the greatest naivety, about human nature and human history, that can permit Foucault to believe that his ‘proletarian justice’ is a form of justice, or that, in working towards it, he is freeing society from the blight of power. On the contrary, all social order is composed of Foucault’s ‘power’, and a rule of law, which is the highest form of order, is simply the best and most mitigated form of it.
The example is minatory. What is true of adjudication is true of other institutions. The attempt to remove the ‘mask’ from human institutions simply reduces them to a single commodity: a ‘power’ which, considered in itself, is neither good nor evil. It also removes those dimensions of human thought and action which enable the relative virtues of our institutions to be assessed. Hence it points to a far greater tyranny than the one against which it is wielded. It seems to me that Foucault’s political naiveties are a direct result of a false idea of ‘essence’, according to which the essence of human things lies never on the surface, but always in the ‘hidden’ depths. The search for this ‘depth’ is, in fact, the greatest shallowness. Foucault’s ‘unmasking’ reveals, not the essence of human thought and action, but merely the underlying substance out of which all human institutions, and life itself, are made. To reduce everything to this ‘hidden’ core is in effect to reduce it to nothing. And we should not be surprised to find that it is precisely this nothing which then becomes the hidden god.
The Order of Things: An Archaeology of the Human Sciences, tr. anon. (London, 1970).
Madness and Civilization: A History of Insanity in the Age of Reason, tr. R. Howard (New York, 1965).
The Birth of the Clinic: An Archaeology of Medical Perception, tr. A. M. Sheridan (London, 1973).
For the truly great projects, architects are necessary, and can take credit for magnificent structures like London’s St Paul’s Cathedral and Istanbul’s Süleymaniye Mosque. Nevertheless, most architects of the buildings we love remain anonymous, and those who designed the great Gothic cathedrals owe their achievements as much to the guilds of stonemasons as to their own astonishing plans. Moreover, by far the greatest number of buildings that we admire had no architect at all. Think of the medieval houses that compose the hilltop towns of Italy, the great stone tenements of Edinburgh, the backwaters of Venice, the thousands of village churches scattered over Europe, and just about every other building stitched into the fabric of those places that we visit because they provide the soothing experience of a deep settlement and a shared home.
Reflecting on these matters, I long ago drew the conclusion that the first principle of architecture is that most of us can do it. You can teach music, poetry, and painting. But what you learn will never suffice to make you into a composer, a poet, or a painter. There is that extra thing, which the romantics called “genius”, without which technique will never lead to real works of art. In the case of architecture, not only is the part that can be taught sufficient in itself, but also the belief that you need something else—genius, originality, creativity, etc.—is the principal threat to real success.
The pursuit of genius in architecture is what has most contributed to the unstitching of our urban fabric, giving us those buildings in outlandish shapes and unsightly materials that take a chunk of the city and make it into somewhere else, as Morphosis did with New York’s Cooper Square, or Zaha Hadid with the Port Authority Building in Antwerp.
These buildings that stand out when they should be fitting in declare the genius of their creators, with no consideration paid to the offense suffered by the rest of us. China is now littered with this stuff, and as a result there is no city in that country that has the remotest resemblance to a settlement.
In response it will be said that we need to accommodate our growing populations, and to make efficient use of the land available for building, and how can we do this without architects? The refutation of this lies in the garden shed and the trailer. Almost all of us are capable of designing such a thing, and placing it in agreeable surroundings and conciliatory relation to its neighbors. The trailer park usually achieves a density of population far greater than the estate of tower-block apartments, and leaves the residents free to embellish their individual holdings with agreeable details, flower pots, even classical windows and doorways, along the edges of incipient streets.
In my experience the most poignant illustration of these truths is provided by the gecekondu (= built in one night) around Ankara. An old Ottoman law, inherited from the Byzantine Empire and therefore from Rome, tells us that, if you have acquired a piece of land to which no one has a proven right of ownership and if you build a dwelling there in one night, you can assume a permanent right of residence. When Atatürk declared the ancient city of Ankara to be the capital of the new Turkey he set the architects to work, building tower blocks and modern highways in regimented patterns that chill the heart and repel everyone who is not obliged by his work to reside there. Meanwhile all around the capital, on the bare hills to which no one had a claim of ownership, there arose by an invisible hand some of the most harmonious settlements created in modern times: houses of one or two stories, in easily handled materials such as brick, wood, corrugated iron, and tiles, nestling close together since none can lay claim to any more garden than the corners left over from building, each fitted neatly into the hillside and with tracks running among them along which no car can pass.
In time the residents cover them with stucco and paint them in those lovely Turkish blues and ochres; they bring electricity and water and light their paths not with glaring sodium lights but with intermittent bulbs, twinkling from afar like grounded galaxies. They join together to form charitable associations, so as to build mosques in the ancient style and neighborhood schools beside them.
These suburbs are the most unpolluted (in every respect) that the modern world has produced, and contain more residents per square mile than any of the architect-designed banlieues around Paris. And they are produced in just the way that sheds are produced, by people using their God-given ability to knock things together so as to put a roof over their heads.
The observation is often made that political conservatives do not have anything much to say about the arts, either believing, with the libertarians, that in this matter people should be free to do as they please, or else fearing, like the traditionalists, that a policy for the arts will always be captured by the Left and turned into an assault on our inherited values. Of course, there is truth in both those responses; but they are not the whole truth, and in my view one reason for the precarious state of the arts in our public culture today is that conservatives – who often come out near the top in fair elections – have failed to develop a clear cultural policy and to understand why, philosophically, such a policy matters.
There is a kind of conservatism that sees all political questions as reducible to economics, with the free market as the ruling principle and the expansion of consumer choice as the only coherent political program. This way of looking at things can be taken a lot farther than at first sight appears. There is an economic justification, after all, for the traditional two-parent family, which produces well-adjusted children who are able to fend for themselves and make a positive contribution to the economy, and who are unlikely to be lifelong dependents on the welfare state. But is that all, or even the most important thing, to be said in favor of the traditional family? Surely its nature as an arena of peace, well-being, and love is far more important, and if it were ever proved that single-parent families and child labor were economically more productive, this would not be a conclusive argument, or any argument at all, against the old arrangement. The traditional family has an intrinsic as well as an instrumental value, and that is the real reason so many conservatives defend it. They defend it because they have a vision of human fulfillment that goes well beyond the economic, to embrace all those values – moral, spiritual, and personal – that shape human beings as higher than the animals and especially worthy of our protection.
Still, let’s stay with economics for a moment. If a hard-nosed free-marketer asks you what the economic benefit of a symphony orchestra is, how would you answer him? Orchestras depend on donations – but that is okay, he will say, since donations are part of the market economy. But private donations are seldom, if ever, enough. Even if it receives no direct subsidy from the government, the symphony hall will be granted charitable privileges and planning exemptions that violate what Hayek once called the “harsh discipline of the market.” Then, we must look at the long-term economic benefit, and here again matters are not so simple. A city with a symphony hall attracts upwardly mobile new residents. It sets a standard in entertainment and leisure that others might try to live up to; it contributes to a flourishing downtown life of a kind that will attract the middle classes; and so on. Its long-term economic benefit probably vastly outweighs the short-term economic cost, even if no one is in a position to measure it.
But again, all that is irrelevant to the true question, which concerns intrinsic and not instrumental values. The real reason people are conservatives has little or nothing to do with economics, even if they are aware that economic prosperity is a good thing, and necessary for the support of other things that they value. The real reason people are conservatives is that they are attached to the things that they love, and want to preserve them from abuse and decay. They are attached to their family, their friends, their religion, and their immediate environment. They have made a lifelong distinction between the things that nourish and the things that threaten their security and peace of mind.
In my writings I have made a point of emphasizing this. Conservatism, for me, is the philosophy and the politics of attachment. Its starting point is a loved way of life, and the institutions and settlements that have grown from it. Standing against conservatism has been another state of mind altogether, which sometimes masks itself as love, but always love for the ideal, the nonexistent, the “yet to be,” in the cause of which we are invited to pull down and destroy the things that are. Radical politics is merciless toward the actual, especially when the actual enshrines the old way of life, the old institutions, and the old hierarchies that have arisen from our attachments.
Conservatives hold on to things not only because they are attached to them, but also because they do not see the sense in radical change, until someone has told them what it will lead to. You criticize the traditional family? Then tell us about the alternative, and please give us the details: Tell us how children grow up in this new arrangement, how they find security, love, and satisfaction, how they acquire the sense of responsibility, how they live with others, how they reproduce and how they die.
One of the things to which we are attached is our culture: not the everyday culture only, of which the family is, or has been, an integral part, but the high culture, in which the intellectual and artistic treasures of our civilization are enshrined. When you are truly attached to something, it is no longer of merely instrumental value for you. It is not a means but an end, which does not mean that it has no consequences – of course it has – but rather that you are interested in the thing itself, for its own sake, finding fulfilment and joy in it.
To find things to which you can be attached in this way is to find a meaning in life, and the real cause of the destructiveness of radical causes is, I believe, a certain lack of meaning in the lives of those who promote them.
At this point someone will respond that it is scarcely democratic to devote resources to conserving something that is a minority taste, or to teaching things that promote minority interests. As soon as you defend intrinsic values you are exposed to the charge of elitism, and conservatives shy away from attracting this charge, since they know that all the things they most value are unequally distributed, and that it is therefore probably best to shut up about them and just hope that they will be reproduced nevertheless.
This, in my view, is a mistake. We should make the case for the things we love, even if we think that people will misunderstand them. That is why people defend the U.S. Constitution, even though so few really understand the subtle thinking embodied in that document. People defend the Constitution because they love it, and the sight of someone defending what he loves has a softening effect on those who might otherwise oppose him. Opposition retreats a little in the face of sincere conviction.
So here is what I would say about classical music and the institutions that sustain it. For many people music is simply a matter of enjoyment, irrelevant to the greater things in life, and a matter of personal taste with which we cannot argue. John likes hard rock, Mary likes bluegrass, Fred likes hip-hop, Judith likes modern jazz, and so on. Once you enter the realm of classical music, however, you realize that such simple views no longer apply. You are in the presence of a highly learned, highly structured art form, in which human thought, feeling, and posture are explored in elaborate tonal arguments. In learning to play the music of Bach or Beethoven, for example, you are acutely aware that you are being put to the test by the music that you are playing. There is a right and a wrong way to proceed, and the right way involves learning to express, to control, to respond in mature and persuasive ways. You are undergoing an education in emotion, and the skills you learn do not remain confined to your fingers: They penetrate the whole body and brain, to become part of your world.
Moreover, this kind of education is inseparable from the art of judgment. In learning classical music, you are learning to discriminate, to recognize the authentic examples, to distinguish real from fake emotion, and to glimpse both the depths of suffering and the heights of joy of which human beings are capable. Not everyone can excel in this form of education, just as not everyone can be a mathematician, a motor mechanic, or a basketball star.
But the existence of people who are real practitioners of classical music, who can perpetuate this precious repository of emotional knowledge, is just as important to the rest of us as it is to them. They set a standard of dedication and refinement. They create around themselves an aura of seriousness and peace, and the art that they learn is one on which we all depend when it comes to expressing our most solemn and committed emotions.
Also, it is probably a prejudice to think that it is only a minority who are capable of learning and appreciating classical music. Not only are the harmonic achievements of classical music fundamental to hymns, folk songs, musicals, and jazz, but the four-part choir, which we owe to Renaissance polyphony, remains a staple of musical-institution building all across American society. Recently I was asked to give the commencement speech at a charter school in Arizona. The leaving class of 50 students assembled in their gowns to sing their farewell to the school – children of different abilities and backgrounds, who nevertheless all joined in the song, which was a difficult four-part hymn of praise to friendship in the American revivalist tradition.
To my way of thinking, there cannot be a coherent conservatism, either in everyday life or in politics, that does not take high culture seriously. It really matters to the future of our societies that classical music should survive, not as a museum exhibit but as a live tradition of performance and enjoyment, radiating its grace and graciousness across our communities, and providing us all, whether as performers or as listeners, with a sense of the intrinsic value of being here, now, and among our fellows. From that primary experience of togetherness, of which music is not the only but surely the most exhilarating instance, countless other benefits flow, in the form of solidarity, mutual support and responsibility, and the growth of real communities.
Conservatives therefore ought to pay more attention than they do to the survival of musical skills, and to the place of music in the school and university curriculum. They ought to see that the symphony hall, the musical stage, and the instrumental ensemble are all institutions that they should promote, not as optional extras but as the very essence of what they value most, which is human life itself.
Group mentality has invaded the world of education in ways that threaten the young
How many writers, educators, and opinion formers, urgently wishing to convey the thoughts and feelings that inspire them, have found themselves confronted with the cry “that’s not relevant”? In the world of mass communication today, when people are marshaled into flocks by social media, intrusions of the unusual, the unsanctioned, and the merely meaningful are increasingly resented if they come from outside the group. And this group mentality has invaded the world of education in ways that threaten the young.
It began long before Facebook and Twitter. Indeed it began with John Dewey, and his call for “child-centred education.” The influence of John Dewey over American thought in general, and education in particular, has never ceased to amaze me. If any writer has set out to illustrate what Schopenhauer meant by “unscrupulous optimism” it is Dewey, who disguised his middlebrow complacency behind a mask of wisdom, like an agony aunt for an old-fashioned women’s magazine. What could be more evidently a travesty of the nature and duties of the teacher than the idea that it is children and their interests that set the agenda for the classroom? And yet what idea is more likely to recruit the tender-hearted, the ignorant, and the lazy? What a gift to the idle teacher, and what an assault on the child!
From the educational philosophy of Dewey sprang the “relevance revolution” in schooling. The old curriculum, with its emphasis on hard mathematics, dead languages, ancient history, and books that are too long to read, is portrayed as an offence to modern children, a way of belittling their world and their hopes for the future. To teach them to spell correctly, to speak grammatically, to adopt the manners and values of their parents and grandparents is to cut them off from their only available sphere of action. And in the place of all that so-called knowledge, which is nothing in itself save a residue of the interests of the dead, they should be given, we are told, their own curriculum, addressed to the life that is theirs.
The immediate effect of the relevance revolution was to introduce into the classroom topics relevant to the interests of their teachers – topics like social justice, gender equality, nuclear disarmament, third-world poverty, gay rights. Whole subjects were concocted to replace the old curriculum in history, geography, and English: “peace studies,” “world studies,” “gender studies,” and so on. The teaching of dead languages virtually ceased, and today in Britain, and doubtless in America too, it is a rare school that offers lessons in German, indeed in any modern language other than French or Spanish. Of course, it could be that fewer and fewer teachers are available with the knowledge required by the old curriculum. But it is a sad day for education when the loss of knowledge is described, instead, as a gain – when the old curriculum, based on subjects that had proved their worth over many decades, is replaced by a curriculum based purely on the causes and effects of the day. At any rate, to think that relevance, so understood, shows a respect for children that was absent from the old knowledge-based curriculum is to suffer from a singular deficiency in sympathy.
Respect for children means respect for the adults that they will one day become; it means helping them to the knowledge, skills, and social graces that they will need if they are to be respected in that wider world where they will be on their own and no longer protected. For the teacher, respect for children means giving them whatever one has by way of knowledge, teaching them to distinguish real knowledge from mere opinion, and introducing them to the subjects that make the mind adaptable to the unforeseen. To dismiss Latin and Greek, for example, because they are not “relevant” is to imagine that one learns another language in order, as Matthew Arnold put it, “to fight the battles of life with the waiters in foreign hotels.” It is to overlook the literature and history that are opened to the enquiring mind by these languages that changed the world; it is to overlook the discipline imparted by their deep and settled grammar. Ancient languages show us vividly that some matters are intrinsically interesting, and not interesting merely for their immediate use; understanding them the child might come to see just how irrelevant to the life of the mind is the pursuit of “relevance.”
Moreover the pursuit of irrelevant knowledge is, for that very reason, a mental discipline that can be adapted to the new and the unforeseeable. It is precisely the irrelevance of everything they knew that enabled a band of a thousand British civil servants, versed in Latin, Greek, and Ancient History, to govern the entire Indian sub-continent – not perfectly, but in many ways better than it had been governed in recent memory. It is the discipline of attending in depth to matters that were of no immediate use to them that made it possible for these civil servants to address situations that they had never imagined before they encountered them – strange languages, alphabets, religions, customs, and laws. It is no accident that it was a classical scholar – the judge Sir William Jones, founder of the Asiatic Society of Bengal in 1788 – who did the most to rescue Sanskrit literature from oblivion, who introduced the world, the Indian world included, to the Vedas, and who launched his contemporaries on the search for the principles and repertoire of classical Indian music.
All this is of great importance to the teacher who wishes to introduce children to the tradition of Western music, and to the listening culture of the concert hall. Hand-in-hand with the relevance revolution came the idea of the “inclusive” classroom – the classroom in which “no child is left behind,” whether or not adapted to the matter in hand. Music has suffered greatly from this, since it is a subject that can be properly taught only to the musical, and which therefore begins from an act of selection. Furthermore even the musical are subjected outside school to a constant bombardment of music in which banal phrases, assembled over the three standard chords and the relentless four in a bar, have filled the ear with addictive clichés. How, in such circumstances, does a musical education begin?
The classical repertoire, it goes without saying, is not “relevant” to the pop-trained ear. It is the creation of another and earlier world, one in which people encountered music only if they, or others in their vicinity, were involved in making it. It was a performance art, which brought people together in a uniquely coordinated way, and which was inseparable in its origins from the habit of improvising around a tune. Music was played, but also listened to, danced to, sung to, and studied for its intrinsic meaning. It was fundamental to the curriculum from the moment when Plato founded the Academy. From the rise of musicology at the Enlightenment to the Conservatoires and Colleges of Music today, music has been taught as a branch of accumulated knowledge, the significance of which can rarely be grasped by the untutored ear, and certainly not by the ear of the average child. Music as an academic discipline is about as “relevant” as Greek or Sanskrit. And no matter how hard we scholars emphasize the use of the useless, we will be dismissed in the name of relevance, and told that our curriculum means nothing to the young musical person today.
To counter this argument it is not enough to point to all the ways in which a relevant curriculum debases learning by making ignorance into the measure of what should be taught. For what we dismiss as ignorance is often the smoothed and adapted outer form of accumulated knowledge, like the simple manners of ordinary people that seem inept in sophisticated company only because some forms of sophistication depend upon hiding this reservoir of social knowledge. In like manner folk music and the traditions of improvisation from which it arises are forms of collective knowledge, and the same can be said for much pop music, including some of that which has carved grooves of addiction in the young musical ear.
The real objection to relevance is that it is an obstacle to self-discovery. Some sixty years ago I was introduced to classical music by teachers who did not waste time criticising my adolescent taste and who made no concessions to my age or temperament. They knew only that they had received a legacy and with it a duty to pass it on. If they did not do so the legacy would die. They discovered in me a soul that could make this legacy its own. That was enough for them. They did not ask themselves whether the classical repertoire was relevant to the interests that I then happened to have, any more than mathematicians ask whether the theorems that they teach will help their students with their accounting problems. Their assumption was that, since the musical knowledge that they wished to impart was unquestionably valuable, it could only benefit me to receive it. But I could not understand the benefit prior to receiving it. To consult my desires in the matter would have been precisely to ignore the crucial fact, which was that, until introduced to classical music, I would not know whether it was to be a part of my life.
Once we see the logic of my teachers’ position we must recognize that, if we know what music is, we have a duty to help young people to understand it, regardless of its “relevance.” We should do this as it has always been done, through encouraging our students to make music together. In the not too distant past every school had a choir whose members were taught to sing in parts and to read music in order to do so. This practice opened the ears of the choristers at once to the experience of voice-led harmony. From that it was a small step to lessons in harmony and counterpoint, and thence to classes in music appreciation.
If there is a point to musicology as a university discipline it surely lies here. The immense knowledge contained in the classical repertoire cannot be imparted in a day, and even when the young ear has begun to appreciate and the young fingers to perform the masterpieces of the repertoire, fully to understand all that they contain by way of emotional and dramatic knowledge is the study of many years. This knowledge fully justifies devoting a faculty of the university to collecting, augmenting, and transmitting it. But, whatever else we say of it, this knowledge is not now and never was or will be relevant.
We must surely understand Boulez as the instigator of a false conception of the nature of music itself.
De mortuis nil nisi bonum: of the dead, nothing unless good. But you can take it too far, re-inventing someone who was a power-hungry manipulator, by allowing no one to speak for him save his partisans, many of whom owe their careers to promoting him. As the French say, on en a ras le bol: we have had our fill of Pierre Boulez, whose death in January has called forth such a spate of idolatrous prose that the sceptics among us have begun to wonder whether French culture is not after all as dead as its critics say it is, if this minor composer and intellectual impresario can be lauded as its greatest recent product. Yet no one in the official channels of cultural appraisal has sown a seed of doubt.
Boulez has three achievements to his name. First, his compositions, presented to the world as next in line to the serialism of Webern, and the “place we have got to” in our musical evolution; secondly, his presence in French culture, diverting government subsidies away from anything that might seem to endorse ordinary musical taste towards the acoustic laboratory of the avant-garde; thirdly, his work as a conductor, for whom clarity and precision took precedence over sentiment. His dominating presence in French musical life is proof that, once the critics have been silenced, the self-appointed leader will be accepted at his own valuation. Condemning all competitors as “useless”, and hinting at a revelation, a “system”, that authorised his doings as the musical Zeitgeist, Boulez was able to subdue whatever timid protests might greet his relentless self-promotion. His disciples and acolytes have spoken abundantly of his charm, and it is clear that, once the period of initial belligerence was over, and his opponents had been despatched to the dust-heap of history, Boulez was a smiling and benevolent occupant of his self-made throne. But did he rule from that throne over fertile territory, or was this sovereignty an expensive illusion?
Boulez’s manipulation of the French subsidy machine has been explored and exposed by Benoît Duteurtre, in a book published in 1995: Requiem pour une avant-garde. Duteurtre tells the story of the steady takeover by Boulez and his entourage of the channels of musical and cultural communication, the new power networks installed in the wake of May 1968, the vilification of opponents, the anathematising of tonal music and its late offshoots in Messiaen, Duruflé, and Dutilleux, and the cultural coup d’état which was the founding of IRCAM. This institution, created by and for Pierre Boulez at the request of President Pompidou in 1970, reveals in its name – Institut de Recherche et Coordination Acoustique/Musique – that it does not distinguish between sound and music, and sees both as matters for “research”. Maintained by government funds in the basement of its architectural equivalent, the Centre Pompidou, IRCAM has been devoted to “sound effects” created by the avant-garde elect, whose products are largely, to coin a phrase, “plink selon plonk”. Absorbing a substantial proportion of a budget that might have been used to sustain the provincial orchestras of France, IRCAM has produced a stream of works without survival value. Despite all Boulez’s efforts, musical people still believe, and rightly, that the test of a work of music is how it sounds, not how it is theorized.
Boulez did, from time to time, produce music that passed that test. He had a fine ear, and no one can doubt that every note in every score was intensely thought about – but thought about, and thought about as sound. Boulez’s was an acoustical, rather than a musical art, with meticulous effects and sonorities produced in unusual ways, according to arcane theories that are inscribed on the hidden side of notes held close to the chest. He burst into the concert hall as a young man in order to heckle the last attempts at tonal composition, dismissing all who were not serialists, and presenting his seminal Le marteau sans maître in 1955 as showing the direction in which serialism must go.
The instrumentation of that work – alto voice, flute, guitar, vibraphone, viola and percussion – reflects the composer’s obsession with timbre and sonority, used here to prevent simultaneities from coalescing as chords. With time signatures changing almost every bar – a 2/4 here, a 5/16 there and so on – and grace notes dropped into every staff, the score resembles a palimpsest from an alchemist’s recipe-book, and the composer’s refusal to describe the serial organisation, insisting that it is obvious and apparent to the ear, has led to a quantity of learned literature. The writers of this literature largely assume that Le marteau is a masterpiece and the turning point of post-war music, because Boulez himself has said so – not in so many words, for he was far too modest for that, but because he pointed speechlessly to its evident perfection.
In a hard-hitting article the American composer and musicologist Fred Lerdahl has told us what the fuss is all about.* The inability of the critics to discern the organisation, serial or otherwise, of Le marteau, Lerdahl argues, is the direct result of the fact that the listening ear is organized by a grammar quite different from the one used (ostensibly, at least) by the composer. Here is what Wikipedia has to say about this episode:
Despite having been published in 1954 and 1957, analysts were unable to explain Boulez’s compositional methods until Lev Koblyakov in 1977. This is partially due to the fact that Boulez believes in strict control tempered with “local indiscipline,” or rather, the freedom to choose small, individual elements while still adhering to an overall structure compatible with serialist principles. Boulez opts to change individual notes based on sound or harmony, choosing to abandon adherence to the structure dictated by strict serialism, making the detailed serial organization of the piece difficult for the listener to discern.
The Wikipedia article chooses there to close the discussion, with a reference to Lerdahl’s article. But it misses the real point. As Lerdahl argues, serialism construes music as an array of permutations. The musical ear looks for prolongations, sequences, and variations, not permutations, which are inherently hard to grasp. Hence music (music of our classical tradition included) presents events that grow organically from each other, over a repeated measure and according to recognizable harmonic sequences. The “moving forward” of melodic lines through musical space is the true origin of musical unity and of the dramatic power of serious music. And it is this “moving forward” that is the first casualty when permutations take over. Add the “plink selon plonk” of the acoustical laboratory and the result is heard as arbitrary – something to be deciphered, rather than something to be absorbed and enjoyed in the manner of a conversation.
You can test this quite easily by comparing one of the many modernist masterpieces that Boulez condemned with a rival composition by the great man himself. From the beginning, in Le marteau, to the interminable instrumental twiddles of Pli selon pli, Boulez gives us music that has little or no propulsion from one moment to the next. The fundamental musical experience – fundamental not just to our classical tradition but to all music that has been sung, played, and danced from the beginning of time – is that of virtual causality, whereby one moment seems to produce the next out of its own inner dynamic. This is the primary experience on which all rhythmic, melodic, and harmonic invention depends, and it is absent – deliberately absent – from Boulez.
To say this is not to display an attachment, whether “bourgeois” or “reactionary”, to the old forms of tonality. It is to make an ontological observation: to say what music essentially is. So take a piece every bit as adventurous in its sonorities as anything by Boulez, in which traditional tonality is marginalised, but which nevertheless adheres to the principle of virtual causality in musical space – say the violin concerto of Dutilleux, or the powerful chaconne movement from the same composer’s first symphony. At once we are in another world, a world that we know, moving with the sounds we hear, and hearing them not merely as sounds, but as movements in a space mapped out in our own emotions. I have to use metaphors in order to describe this experience – for reasons that I make clear in The Aesthetics of Music. But they are metaphors that we all instinctively understand, since they invoke the phenomenon of music itself.
There is a reason for referring to Dutilleux, apart from the fact that it is the 100th anniversary of his birth. For he was, in his own way, every bit as adventurous as Boulez, with the same desire to take music forward into the modern world, to build on past achievements, and to take inspiration from the great achievements of French music, painting, and poetry at the beginning of the modern period. In the 1960s and 70s he was dismissed by Boulez and his entourage as a “bourgeois” composer, smeared as a “Nazi collaborator” (in fact he was active in the resistance), and excluded from the privileges of the true avant-garde. But his music, unlike Boulez’s, has a regular place in concert programs, and speaks to the ordinary musical listener in accents that are both new and (with a certain justified effort) comprehensible.
If we look back at Boulez’s presence in French culture, during the years around 1968 when he was the Gauleiter of the avant-garde, we must surely understand him as the instigator of a false conception of music – not only of the place of music in high culture, and in the civilisation that is our greatest spiritual possession, but of the nature of music itself. He deliberately, and in my view uncomprehendingly, undid the distinction between musical tone and acoustical sound; he mathematized and scientized a practice that is meaningful only if it is seen as a creative art, and he justified every kind of intellectual pretension, just so long as it was intellectual, and just so long as it could be seen as the latest attempt to épater le bourgeois.
Of course he was a true musician too. Faced with real music he had an instinctive grasp of how it might be performed so as to reveal all the currents of thought contained in it. As a conductor he set an example that many have wished to follow, and with reason. Still, even there, his personality showed itself. His meticulous version of Wagner’s Ring cycle shows a conductor who appreciates in thought what can be understood only in emotion. And this version will always be appreciated as a monument to our times, a kind of revenge on Wagner, which is also, when taken together with Chéreau’s marxisant production, a revenge on Germany. Seeing Boulez in that way, I think, we reduce him to his real size, and can begin to appreciate his true historical significance, as a by-product of a disastrous war.
* “Cognitive Constraints in Compositional Systems”, in John A. Sloboda, ed., Generative Processes in Music, Oxford 1988.
Fleeing the congestion and mayhem of New York City in the early summer of 1893, Antonin Dvorák, along with his wife and six children, alighted from a train in the little Bohemian settlement of Spillville, Iowa. Perhaps by then his worldwide fame had spread even there, but this quiet town in the middle of nowhere was hardly preoccupied with celebrities.
Early in the morning of his first full day in Spillville, Dvorák visited the local Catholic church as parishioners gathered for morning Mass. He sat at the organ and played “O Lord before Thy Majesty,” a hymn well known to the Bohemian settlers who quickly joined in. Thereafter, he was a fixture, playing for the service at daily Mass throughout the two summers he was to spend vacationing from his temporary post as head of the new National Conservatory of Music in New York. He had found a home in America.
Antonin Dvorák was born in 1841 and grew up in a provincial village called Nelahozeves, the Czech equivalent of Spillville, Iowa. He was a peasant, the son of an innkeeper and butcher, and he apprenticed as a butcher himself. His early music education came from the lively folk music of his village and simple church tunes. The schoolmaster taught him the violin and singing. Though he was to go on to study in Prague, he learned his real lessons elsewhere. In language that might make a sophisticate gag, this pious man explained that he “studied with the birds, flowers, trees, God, and myself.” This kind of simplicity, directness, and naturalness marked the man and his music.
The general characteristics of Dvorák’s music could be taken from his own description of the Negro melodies that he recommended to Americans as the basis for a “great and noble school of music”: they are “pathetic, tender, passionate, melancholy, bold, merry [and] gay.” It is precisely this range of emotions that Dvorák expressed in his own works with a healthy exuberance and geniality that would have left Sigmund Freud scratching his head. Dvorák must have been the least neurotic composer of the Romantic era.
Music flowed from Dvorák. Probably not since Schubert had a composer been endowed with such a natural and abundant gift for melody, song, and dance. Dvorák had trouble stemming the flow. He was ruthless with himself in burning manuscripts of undeserving works. There remain nine symphonies—two of which were discovered and published posthumously—ten operas, thirty-six chamber works, assorted concerti, symphonic poems, oratoria, including the Stabat Mater, Requiem, and Te Deum, and sixty-eight songs, among them settings of selected verses from the Book of Psalms.
Dvorák’s gifts were recognized by others for their “heavenly naturalness,” as one critic put it. The famous conductor Hans Richter called him “a composer by the grace of God.” Brahms opined: “I’d be delighted to think up a main theme as good as those that Dvorák has discarded.” Dvorák returned the solicitude, expressing his concern over Brahms’s lack of faith — “such a great man! such a great soul! And he believes in nothing.”
Dvorák’s music expresses joy, especially in all the sentiments associated with home. Dvorák’s favorite workplace was the kitchen, amidst the domestic racket of his large family. The distinction between sentiment and sentimentality can be a fine one in Dvorák’s work because of the emotional warmth with which he wrote. Occasionally he went overboard. Such trespasses are easily forgiven because they always err on the side of sweetness and are never banal.
Dvorák responded as fully to the tragedies in his life as to its joys. A man of deep faith, he often expressed his grief in religious works. When his daughter died in 1876, he turned to the famous text of the Stabat Mater. It became, during his life, his most famous choral work. The London performance at the Royal Albert Hall, with a chorus consisting of 800 voices conducted by Dvorák, created a sensation. The Stabat Mater reminds one how earnestly Dvorák wished to be a successful opera composer. Some of the vocal quartets and choruses are gloriously operatic.
Modern civilisation has given man undreamt of powers largely because, without understanding it, he has developed methods of utilising more knowledge and resources than any one mind is aware of. The fundamental condition from which any intelligent discussion of the order of all social activities should start is the constitutional and irremediable ignorance both of the acting persons and of the scientist studying this order, of the multiplicity of particular, concrete facts which enter this order of human activities because they are known to some of its members. As the motto on the title page expresses it, ‘man has become all he is without understanding what happened’. This insight should not be a cause of shame but a source of pride in having discovered a method that enables us to overcome the limitations of individual knowledge. And it is an incentive deliberately to cultivate institutions which have opened up those possibilities.
The great achievement of the 18th century social philosophers was to replace the naïve constructivist rationalism of earlier periods, which interpreted all institutions as the products of deliberate design for a foreseeable purpose, by a critical and evolutionary rationalism that examined the conditions and limitations of the effective use of conscious reason.
We are still very far, however, from making full use of the possibilities which those insights open to us, largely because our thinking is governed by language which reflects an earlier mode of thought. The important problems are in large measure obscured by the use of words which imply anthropomorphic or personalised explanations of social institutions. Such explanations interpret the general rules which guide action as if they had been designed for particular purposes. In practice such institutions are successful adaptations to the irremediable limitations of our knowledge, adaptations which have prevailed over alternative forms of order because they proved more effective methods for dealing with that incomplete, dispersed knowledge which is man’s unalterable lot.
The extent to which serious discussion has been vitiated by the ambiguity of some of the key terms, which for lack of more precise ones we have constantly to use, has been vividly brought home to me in the course of a still incomplete investigation of the relations between law, legislation, and liberty on which I have been engaged for some time. In an endeavour to achieve clarity I have been driven to introduce sharp distinctions for which current usage has no accepted or readily intelligible terms. The purpose of the following sketch is to demonstrate the importance of these distinctions which I found essential and to suggest terms which should help us to avoid the prevailing confusion.
I. Cosmos and Taxis
The achievement of human purposes is possible only because we recognise the world we live in as orderly. This order manifests itself in our ability to learn, from the (spatial or temporal) parts of the world we know, rules which enable us to form expectations about other parts. And we anticipate that these rules stand a good chance of being borne out by events. Without the knowledge of such an order of the world in which we live, purposive action would be impossible.
This applies as much to the social as to the physical environment. But while the order of the physical environment is given to us independently of human will, the order of our social environment is partly, but only partly, the result of human design. The temptation to regard it all as the intended product of human action is one of the main sources of error. The insight that not all order that results from the interplay of human actions is the result of design is indeed the beginning of social theory. Yet the anthropomorphic connotations of the term ‘order’ are apt to conceal the fundamental truth that all deliberate efforts to bring about a social order by arrangement or organisation (i.e. by assigning to particular elements specified functions or tasks) take place within a more comprehensive spontaneous order which is not the result of such design.
While we have the terms ‘arrangement’ or ‘organisation’ to describe a made order, we have no single distinctive word to describe an order which has formed spontaneously. The ancient Greeks were more fortunate in this respect. An arrangement produced by man deliberately putting the elements in their place or assigning them distinctive tasks they called taxis, while an order which existed or formed itself independent of any human will directed to that end they called cosmos. Though they generally confined the latter term to the order of nature, it seems equally appropriate for any spontaneous social order and has often, though never systematically, been used for that purpose. The advantage of possessing an unambiguous term to distinguish this kind of order from a made order should outweigh the hesitation we may feel about endowing a social order which we often do not like with a name which conveys the sense of admiration and awe with which man regards the cosmos of nature.
The same is in some measure true of the term ‘order’ itself. Though one of the oldest terms of political theory, it has been somewhat out of fashion for some time. But it is an indispensable term which, on the definition we have given it – a condition of affairs in which we can successfully form expectations and hypotheses about the future – refers to objective facts and not to values. Indeed, the first important difference between a spontaneous order or cosmos and an organisation (arrangement) or taxis is that, not having been deliberately made by men, a cosmos has no purpose. This does not mean that its existence may not be exceedingly serviceable in the pursuit of many purposes: the existence of such an order, not only in nature but also in society, is indeed indispensable for the pursuit of any aim. But the order of nature, and those aspects of the social order not deliberately created by men, cannot properly be said to have a purpose, though both can be used by men for many different, divergent and even conflicting purposes.
While a cosmos or spontaneous order has thus no purpose, every taxis (arrangement, organisation) presupposes a particular end, and men forming such an organisation must serve the same purposes. A cosmos will result from regularities of the behaviour of the elements which it comprises. It is in this sense endogenous, intrinsic or, as the cyberneticians say, a ‘self-regulating’ or ‘self-organising’ system. A taxis, on the other hand, is determined by an agency which stands outside the order and is in the same sense exogenous or imposed. Such an external factor may induce the formation of a spontaneous order also by imposing upon the elements such regularities in their responses to the facts of their environment that a spontaneous order will form itself. Such an indirect method of securing the formation of an order possesses important advantages over the direct method: it can be applied in circumstances where what is to affect the order is not known as a whole to anyone. Nor is it necessary that the rules of behaviour within the cosmos be deliberately created: they, too, may emerge as the product of spontaneous growth or of evolution.
It is therefore important to distinguish clearly between the spontaneity of the order and the spontaneous origin of the regularities in the behaviour of the elements determining it. A spontaneous order may rest in part on regularities which are not spontaneous but imposed. For policy there thus arises the alternative between securing the formation of an order by a strategy of indirect approach and directly assigning a place to each element and describing its function in detail.
Where we are concerned solely with the alternative social orders, the first important corollary of this distinction is that in a cosmos knowledge of the facts and purposes which will guide individual action will be those of the acting individuals, while in a taxis the knowledge and purposes of the organiser will determine the resulting order. The knowledge that can be utilised in such an organisation will therefore always be more limited than in a spontaneous order where all the knowledge possessed by the elements can be taken into account in forming the order without this knowledge first being transmitted to a central organiser. And while the complexity of activities which can be ordered as a taxis is necessarily limited to what can be known to the organiser, there is no similar limit in a spontaneous order.
While the deliberate use of spontaneous ordering forces (that is, of the rules of individual conduct which lead to the formation of a spontaneous general order) thus considerably extends the range and complexity of actions which can be integrated into a single order, it also reduces the power anyone can exercise over it without destroying the order. The regularities in the conduct of the elements in a cosmos determine merely its most general and abstract features. The detailed characteristics will be determined by the facts and aims which guide the actions of individual elements, though they are confined by the general rules within a certain permissible range. In consequence, the concrete content of such an order will always be unpredictable, though it may be the only method of achieving an order of wide scope. We must renounce the power of shaping its particular manifestations according to our desires. For example, the position which each individual will occupy in such an order will be largely determined by what to us must appear as accident. Though such a cosmos will serve all human purposes to some degree, it will not give anyone the power to determine whom it will favour more and whom less.
In an arrangement or taxis, on the other hand, the organiser can, within the restricted range achievable by this method, try to make the results conform to his preferences to any degree he likes. A taxis is necessarily designed for the achievement of particular ends or of a particular hierarchy of ends; and to the extent that the organiser can master the information about the available means, and effectively control their use, he may be able to make the arrangement correspond to his wishes in considerable detail. Since it will be his purposes that will govern the arrangement, he can attach any valuation to each element of the order and place it so as to make its position correspond to what he regards as its merits.
Where it is a question of using limited resources known to the organiser in the service of a unitary hierarchy of ends, an arrangement or organisation (taxis) will be the more effective method. But where the task involves using knowledge dispersed among and accessible only to thousands or millions of separate individuals, the use of spontaneous ordering forces (cosmos) will be superior. More importantly, people who have few or no ends in common, especially people who do not know one another or one another’s circumstances, will be able to form a mutually beneficial and peaceful spontaneous order by submitting to the same abstract rules, but they can form an organisation only by submitting to somebody’s concrete will. To form a common cosmos they need agree only on abstract rules, while to form an organisation they must either agree or be made to submit to a common hierarchy of ends. Only a cosmos can thus constitute an open society, while a political order conceived as an organisation must remain closed or tribal.
II. Nomos and Thesis
To cosmos and taxis there correspond two distinct kinds of rules or norms which the elements must obey in order that the corresponding kind of order be formed. Since here, too, modern European languages lack terms which express the required distinction clearly and unambiguously, and since we have come to use the word ‘law’ or its equivalents ambiguously for both, we shall again propose Greek terms which, at least in the classic usage of fifth- and fourth-century BC Athens, conveyed approximately the required distinction.
By nomos we shall describe a universal rule of just conduct applying to an unknown number of future instances and equally to all persons in the objective circumstances described by the rule, irrespective of the effects which observance of the rule will produce in a particular situation. Such rules demarcate protected individual domains by enabling each person or organised group to know which means they may employ in the pursuit of their purposes, and thus to prevent conflict between the actions of the different persons. Such rules are generally described as ‘abstract’ and are independent of individual ends. They lead to the formation of an equally abstract and end-independent spontaneous order or cosmos.
In contrast, we shall use thesis to mean any rule which is applicable only to particular people or in the service of the ends of rulers. Though such rules may still be general to various degrees and refer to a multiplicity of particular instances, they will shade imperceptibly from rules in the usual sense to particular commands. They are the necessary instrument of running an organisation or taxis.
The reason why an organisation must to some extent rely on rules and not be directed by particular commands only also explains why a spontaneous order can achieve results which organisations cannot. By restricting the actions of individuals only through general rules, an organisation enables its members to use information which the authority does not possess. The agencies to which the head of an organisation delegates functions can adapt to changing circumstances known only to them, and the commands of authority will therefore generally take the form of general instructions rather than of specific orders.
In two important respects, however, the rules governing the members of an organisation will necessarily differ from rules on which a spontaneous order rests: rules for an organisation presuppose the assignment of particular tasks, targets or functions to individual people by commands; and most of the rules of an organisation will apply only to the persons charged with particular responsibilities. The rules of organisation will therefore never be universal in intent or end-independent, but always subsidiary to the commands by which roles are assigned and tasks or aims prescribed. They do not serve the spontaneous formation of an abstract order in which each individual must find his place and is able to build up a protected domain. The purpose and general outline of the organisation or arrangement must be determined by the organiser.
This distinction between the nomoi as universal rules of conduct and the theseis as rules of organisation corresponds roughly to the familiar distinction between private (including criminal) and public (constitutional and administrative) law. There exists much confusion between these two kinds of rules of law. This confusion is fostered by the terms employed and by the misleading theories of legal positivism (in turn the consequence of the predominant role of public lawyers in the development of jurisprudence). Both represent the public law as in some sense primary and as alone serving the public interest; while private law is regarded, not only as secondary and derived from the former, but also as serving not general but individual interests. The opposite, however, would be nearer the truth. Public law is the law of organisation, of the superstructure of government originally erected only to ensure the enforcement of private law. It has been truly said that public law passes, but private law persists. Whatever the changing structure of government, the basic structure of society resting on the rules of conduct persists. Government therefore owes its authority and has a claim to the allegiance of the citizens only if it maintains the foundations of that spontaneous order on which the working of society’s everyday life rests.
The belief in the pre-eminence of public law is a result of the fact that it has indeed been deliberately created for particular purposes by acts of will, while private law is the result of an evolutionary process and has never been invented or designed as a whole by anybody. It was in the sphere of public law that law-making emerged, while, for millennia, development in the sphere of private law proceeded through a process of law-finding in which judges and jurists endeavoured to articulate the rules which had already for long periods governed action and the ‘sense of justice’.
Even though we must turn to public law to discover which rules of conduct an organisation will in practice enforce, it is not necessarily the public law to which the private law owes its authority. Insofar as there is a spontaneously ordered society, public law merely organises the apparatus required for the better functioning of that more comprehensive spontaneous order. It determines a sort of superstructure erected primarily to protect a pre-existing spontaneous order and to enforce the rules on which it rests.
It is instructive to remember that the conception of law in the sense of nomos (i.e. of an abstract rule not due to anybody’s concrete will, applicable in particular cases irrespective of the consequences, a law which could be ‘found’ and was not made for particular foreseeable purposes) has existed and been preserved together with the ideal of individual liberty only in countries such as ancient Rome and modern Britain, in which the development of private law was based on case law and not on statute law, that is, was in the hands of judges or jurists and not of legislators. Both the conception of law as nomos and the ideal of individual liberty have rapidly disappeared whenever the law came to be conceived as the instrument of a government’s own ends.
What is not generally understood in this connection is that, as a necessary consequence of case law procedure, law based on precedent must consist exclusively of end-independent abstract rules of conduct of universal intent which the judges and jurists attempt to distil from earlier decisions. There is no such built-in limitation to the norms established by a legislator, and he is therefore less likely to submit to such limitations, since the chief task which occupied him was long of another kind: for a long time before alterations in the nomos were seriously contemplated, legislators were almost exclusively concerned with laying down the rules of organisation which regulate the apparatus of government. The traditional conception of the law as nomos underlies ideals like those of the Rule of Law, a Government under the Law, and the Separation of Powers. In consequence, when representative bodies, initially concerned solely with matters of government proper, such as taxation, began to be regarded also as the sources of the nomos (the private law, or the universal rules of conduct), this traditional concept was soon replaced by the idea that law was whatever the will of the authorised legislator laid down on particular matters.
Few insights more clearly reveal the governing tendencies of our time than understanding that the progressive permeation and displacement of private by public law is part of the process of transformation of a free, spontaneous order of society into an organisation or taxis. This transformation is the result of two factors which have been governing development for more than a century: on the one hand, of the increasing replacement of rules of just individual conduct (guided by ‘commutative justice’) by conceptions of ‘social’ or ‘distributive’ justice, and on the other hand, of the placing of the power to lay down nomoi (i.e. rules of just conduct) in the hands of the body charged with the direction of government. It has been largely this fusion of these two essentially different tasks in the same ‘legislative’ assemblies which has almost wholly destroyed the distinction between law as a universal rule of conduct and law as an instruction to government on what to do in particular instances.
The socialist aim of a just distribution of incomes must lead to such a transformation of the spontaneous order into an organisation; for only in an organisation, directed towards a common hierarchy of ends, and in which the individuals have to perform assigned duties, can the conception of a ‘just’ reward be given meaning. In a spontaneous order nobody ‘allocates’, or can even foresee, the results which changes in circumstances will produce for particular individuals or groups, and such an order can know justice only as rules of just individual conduct, not as justice in results. Such a society certainly presupposes the belief that justice, in the sense of rules of just conduct, is not an empty word – but ‘social justice’ must remain an empty concept so long as the spontaneous order is not wholly transformed into a totalitarian organisation in which rewards are given by authority for merit earned in performing duties assigned by that authority. ‘Social’ or ‘distributive’ justice is the justice of organisation but meaningless in a spontaneous order.
III. A Digression on Articulated and Non-Articulated Rules
Though the distinction to be considered next is not quite on the same plane with the others examined here, it will be expedient to insert some remarks on the sense in which we are employing the term ‘rule’. As we have used it, it covers two distinct meanings, the difference between which is often confused with or concealed by the more familiar and closely related distinction between written and unwritten, or between customary and statute, law. The point to be emphasised is that a rule may effectively govern action in the sense that from knowing it we can predict how people will act, without it being known as a verbal formula to the actors. Men may ‘know how’ to act, and the manner of their action may be correctly described by an articulated rule, without their explicitly ‘knowing that’ the rule is such and such; that is, they need not be able to state the rule in words in order to be able to conform to it in their actions, or to recognise whether others have or have not done so.
There can be no doubt that, both in early society and since, many of the rules which manifest themselves in consistent judicial decisions are not known to anyone as verbal formulae, and that even the rules which are known in articulated form will often be merely imperfect efforts to express in words principles which guide action and are expressed in approval or disapproval of the actions of others. What we call the ‘sense of justice’ is nothing but that capacity to act in accordance with non-articulated rules, and what is described as finding or discovering justice consists in trying to express in words the yet unarticulated rules by which a particular decision is judged.
This capacity to act, and to recognise whether others act, in accordance with non-articulated rules probably always exists before attempts are made to articulate such rules; and most articulated rules are merely more or less successful attempts to put into words what has been acted upon before, and will continue to form the basis for judging the results of the application of the articulated rules.
Of course, once particular articulations of rules of conduct have become accepted, they will be one of the chief means of transmitting such rules; and the development of articulated and unarticulated rules will constantly interact. Yet it seems probable that no system of articulated rules can exist or be fully understood without a background of unarticulated rules which will be drawn upon when gaps are discovered in the system of articulated rules.
This governing influence of a background of unarticulated rules explains why the application of general rules to particular instances will rarely take the form of a syllogism, since only articulated rules can serve as explicit premises of such a syllogism. Conclusions derived from the articulated rules alone will not be tolerated if they conflict with the conclusions to which yet unarticulated rules lead. It is through this familiar process that equity develops by the side of the already fully articulated rules of strict law.
There is in this respect much less difference between the unwritten or customary law which is handed down in the form of articulated verbal rules and the written law, than there is between articulated and unarticulated rules. Much of the unwritten or customary law may already be articulated in orally transmitted verbal formulae. Yet, even when all law that can be said to be explicitly known has been articulated, this need not mean that the process of articulating the rules that in practice guide decisions has already been completed.
IV. Opinion and Will, Values and Ends
We come now to a pair of important distinctions for which the available terms are particularly inadequate and for which even classical Greek does not provide us with readily intelligible expressions. Yet the substitution by Rousseau, Hegel, and their followers down to T. H. Green, of the term ‘will’ for what older authors had described as ‘opinion’, and still earlier ones contrasted as ratio to voluntas, was probably the most fateful terminological innovation in the history of political thinking.
This substitution of the term ‘will’ for ‘opinion’ was the product of a constructivist rationalism which imagined that all laws were invented for a known purpose, rather than being articulations or improved formulations of practices that had prevailed because they produced a more viable order than those current in competing groups. The term ‘opinion’ at the same time became increasingly suspect, because it was contrasted with incontrovertible knowledge of cause and effect, and because of a growing tendency to discard all statements incapable of proof. ‘Mere opinion’ became one of the chief targets of rationalist critique; ‘will’ seemed to refer to rational purposive action, while ‘opinion’ came to be regarded as something typically uncertain and incapable of rational discussion.
Yet the order of an open society and all modern civilisation rests largely on opinions which have been effective in producing such an order long before people knew why they held them; and in a great measure it still rests on such beliefs. Even when people began to ask how the rules of conduct which they observed might be improved, the effects which they produced, and in the light of which they might be revised, were only dimly understood. The difficulty lay in the fact that any attempt to assess an action by its foreseeable results in the particular case is the very opposite of the function which opinions about the permissibility or non-permissibility of a kind of action play in the formation of an overall order.
Our insight into these circumstances is much obscured by the rationalistic prejudice that intelligent behaviour is governed exclusively by a knowledge of the relations between cause and effect, and by the associated belief that ‘reason’ manifests itself only in deductions derived from such knowledge. The only kind of rational action constructivist rationalism recognises is action guided by such considerations as ‘If I want X then I must do Y’. Human action, however, is in fact guided as much by rules which limit it to permissible kinds of actions – rules which generally preclude certain kinds of actions irrespective of their foreseeable particular results – as by knowledge of cause and effect. Our capacity to act successfully in our natural and social environment rests as much on such knowledge of what not to do (usually without awareness of the consequences which would follow if we did it) as on our knowledge of the particular effects of what we do. In fact, our positive knowledge serves us effectively only thanks to rules which confine our actions to the limited range within which we are able to foresee relevant consequences, and which prevent us from overstepping these limits. Fear of the unknown, and avoidance of actions with unforeseeable consequences, has as important a function to perform in making our actions ‘rational’ in the sense of successful as positive knowledge has. If the term ‘reason’ is confined to knowledge of positive facts and excludes knowledge of the ‘ought not’, a large part of the rules which guide human action so as to enable individuals or groups to persist in the environment in which they live is excluded from ‘reason’. Much of the accumulated experience of the human race would fall outside what is described as ‘reason’ if this concept is arbitrarily confined to positive knowledge of the rules of cause and effect which govern particular events in our environment.
Before the rationalist revolution of the 16th and 17th centuries, however, the term ‘reason’ included and even gave first place to the knowledge of appropriate rules of conduct. When ratio was contrasted with voluntas, the former referred pre-eminently to opinion about the permissibility or non-permissibility of the kinds of conduct which voluntas indicated as the most obvious means of achieving a particular result. What was described as reason was thus not so much knowledge that in particular circumstances particular actions would produce particular results, but a capacity to avoid actions of a kind whose foreseeable results seemed desirable, but which were likely to lead to the destruction of the order on which the achievements of the human race rested.
We are familiar with the crucial point that the general order of society into which individual actions are integrated results not from the concrete purposes which individuals pursue but from their observing rules which limit the range of their actions. It does not really matter for the formation of this order what are the concrete purposes pursued by the individuals; they may in many instances be wholly absurd, yet so long as the individuals pursue their purposes within the limits of those rules, they may in doing so contribute to the needs of others. It is not the purposive but the rule-governed aspect of individual actions which integrates them into the order on which civilisation rests.
To describe the content of a rule, or of a law defining just conduct, as the expression of a will (popular or other) is thus wholly misleading. Legislators approving the text of a statute articulating a rule of conduct, or legal draftsmen deciding the wording of such a bill, will be guided by a will aiming at a particular result; but the particular form of words is not the content of such a law. Will always refers to particular actions serving particular ends, and the will ceases when the action is taken and the end (terminus) reached. But nobody can have a will in this sense concerning what shall happen in an unknown number of future instances.
Opinions, on the other hand, have no purpose known to those who hold them – indeed, we should rightly suspect an opinion on matters of right and wrong if we found that it was held for a purpose. Most of the beneficial opinions held by individuals are held by them without their having any known reasons for them except that they are the traditions of the society in which they have grown up. Opinion about what is right and wrong has therefore nothing to do with will in the precise sense in which it is necessary to use the term if confusion is to be avoided. We all know only too well that our will may often be in conflict with what we think is right, and this applies no less to a group of people aiming at a common concrete purpose than to any individual.
While an act of will is always determined by a particular concrete end (terminus) and the state of willing ceases when the end is achieved, the manner in which the end is pursued does also depend on dispositions which are more or less permanent properties of the acting person. These dispositions are complexes of built-in rules which say either which kinds of actions will lead to a certain kind of result or which are generally to be avoided. This is not the place to enter into a discussion of the highly complex hierarchic structure of those systems of dispositions which govern our thinking and which include dispositions to change dispositions, etc., as well as those which govern all actions of a particular organism and others which are only evoked in particular circumstances.
What is of importance is that among the dispositions which will govern the manner of action of a particular organism there will always be, in addition to dispositions to the kind of actions likely to produce particular results, many negative dispositions which rule out some kinds of action. These inhibitions against types of actions likely to be harmful to the individual or the group are probably among the most important adaptations which all organisms, and especially all individuals living in groups, must possess to make life possible. ‘Taboos’ are as much a necessary basis of successful existence of a social animal as positive knowledge of what kind of action will produce a given result.
If we are systematically to distinguish the will directed to a particular end (terminus) and disappearing when that particular end has been reached, from the opinion in the sense of a lasting or permanent disposition towards (or against) kinds of conduct, it will be expedient to adopt also a distinct name for the generalised aims towards which opinions are directed. It is suggested that among the available terms the one which corresponds to opinion in the same way in which end corresponds to will is the term value. It is of course not currently used only in this narrow sense, and we are all apt to describe the importance of a particular concrete end as its value. Nevertheless, at least in its plural form values, the term seems as closely to approach the needed meaning as any other term available.
It is therefore expedient to describe as values what may guide a person’s actions throughout most of his life as distinct from the concrete ends which determine his actions at particular moments. Values in this sense, moreover, are largely culturally transmitted and will guide the action even of persons who are not consciously aware of them, while the end which will most of the time be the focus of conscious attention will normally be the result of the particular circumstances in which he finds himself at any moment. In the sense in which the term ‘value’ is most generally used it certainly does not refer to particular objects, persons, or events, but to attributes which many different objects, persons, or events may possess at different times and different places and which, if we endeavour to describe them, we will usually describe by stating a rule to which these objects, persons or actions conform. The importance of a value is related to the urgency of a need or of a particular end in the same manner in which the universal or abstract is related to the particular or concrete.
It should be noted that these more or less permanent dispositions which we describe as opinions about values are something very different from the emotions with which they are sometimes connected. Emotions, like needs, are evoked by and directed towards particular concrete objects and rapidly disappear with their disappearance. They are, unlike opinions and values, temporary dispositions which will guide actions with regard to particular things but not a framework which controls all actions. Like a particular end an emotion may overpower the restraints of opinion which refer not to the particular but to the abstract and general features of the situation. In this respect opinion, being abstract, is much more akin to knowledge of cause and effect and therefore deserves to be included with the latter as part of reason.
All moral problems, in the widest sense of the term, arise from a conflict between a knowledge that particular desirable results can be achieved in a given way and the rules which tell us that some kinds of actions are to be avoided. It is the extent of our ignorance which makes it necessary that in the use of knowledge we should be limited and refrain from many actions whose unpredictable consequences might place us outside the order within which alone the world is tolerably safe for us. It is only thanks to such restraints that our limited knowledge of positive facts serves us as a reliable guide in the sea of ignorance in which we move. The actions of a person who insisted on being guided only by calculable results and refused to respect opinions about what is prudent or permissible would soon prove unsuccessful and in this sense irrational to the highest degree.
The understanding of this distinction has been badly blurred by the words at our disposal. But it is of fundamental importance because the possibility of the required agreement, and therefore of a peaceful existence of the order of an Open Society, rests on it. Our thinking and our vocabulary are still determined largely by the problems and needs of the small group concerned with specific ends known to all its members. The confusion and harm caused by the application of these conceptions to the problems of the Open Society are immense. They have been preserved particularly through the dominance in moral philosophy of a Platonic tribalism which in modern times has received strong support from the preference of people engaged in empirical research for the problems of the observable and tangible small groups and from their distaste for the intangible, more comprehensive order of the social cosmos – an order which can be only mentally reconstructed but never intuitively perceived or observed as a whole.
The possibility of an Open Society rests on its members possessing common opinions, rules and values, and its existence becomes impossible if we insist that it must possess a common will issuing commands directing its members to particular ends. The larger the groups within which we hope to live in peace, the more the common values which are enforced must be confined to abstract and general rules of conduct. The members of an Open Society have and can have in common only opinions on values but not a will on concrete ends. In consequence the possibility of an order of peace based on agreement, especially in a democracy, rests on coercion being confined to the enforcement of abstract rules of just conduct.
V. Nomocracy and Teleocracy
The first two of the distinctions we have drawn (in Sections I and II) have been conveniently combined by Professor Michael Oakeshott into the two concepts of nomocracy and teleocracy, which need now hardly any further explanation. A nomocracy corresponds to our cosmos resting entirely on general rules or nomoi, while a teleocracy corresponds to a taxis (arrangement or organisation) directed towards particular ends or teloi. For the former the ‘public good’ or ‘general welfare’ consists solely in the preservation of that abstract and end-independent order which is secured by obedience to abstract rules of just conduct: that
‘public interest which is no other than common right and justice excluding all partiality or private interest [which may be] called the empire of laws and not of men’. 
For a teleocracy, on the other hand, the common good consists of the sum of the particular interests, that is, the sum of the concrete foreseeable results affecting particular people or groups. It was this latter conception which seemed more acceptable to the naïve constructivist rationalism whose criterion of rationality is a recognisable concrete order serving known particular purposes. Such a teleocratic order, however, is incompatible with the development of an Open Society comprising numerous people having no known concrete purposes in common; and the attempt to impose it on the grown order of a nomocracy leads back from the Open Society to the Tribal Society of the small group. And since all conceptions of the ‘merit’ according to which individuals should be ‘rewarded’ must derive from concrete and particular ends towards which the common efforts of a group are directed, all efforts towards a ‘distributive’ or ‘social’ justice must lead to the replacement of the nomocracy by a teleocracy, and thus to a return from the Open to the Tribal Society.
VI. Catallaxy and Economy
The instance in which the use of the same term for two different kinds of order has caused most confusion, and is still constantly misleading serious thinkers, is probably that of the use of the word ‘economy’ for both the deliberate arrangement or organisation of resources in the service of a unitary hierarchy of ends, such as a household, an enterprise, or any other organisation including government, and the structure of many inter-related economies of this kind which we call a social, or national, or world ‘economy’ and often also simply an ‘economy’. The ordered structure which the market produces is, however, not an organisation but a spontaneous order or cosmos, and is for this reason in many respects fundamentally different from that arrangement or organisation originally and properly called an economy.
The belief, largely due to this use of the same term for both, that the market order ought to be made to behave as if it were an economy proper, and that its performance can and ought to be judged by the same criteria, has become the source of so many errors and fallacies that it seems necessary to adopt a new technical term to describe the order of the market which spontaneously forms itself. By analogy with the term catallactics which has often been proposed as a replacement for the term ‘economics’ as the name for the theory of the market order, we could describe that order itself as a catallaxy. Both expressions are derived from the Greek verb katallattein (or katallassein) which significantly means not only ‘to exchange’ but also ‘to receive into the community’ and ‘to turn from enemy into friend’.
The chief aim of this neologism is to emphasise that a catallaxy neither ought nor can be made to serve a particular hierarchy of concrete ends, and that therefore its performance cannot be judged in terms of a sum of particular results. Yet all the aims of socialism, all attempts to enforce ‘social’ or ‘distributive’ justice, and the whole of so-called ‘welfare economics’, are directed towards turning the cosmos of the spontaneous order of the market into an arrangement or taxis, or the catallaxy into an economy proper. The belief that the catallaxy ought to be made to behave as if it were an economy is apparently so obvious and unquestionable to many economists that they never examine its validity. They treat it as the indisputable presupposition for rational examination of the desirability of any order, an assumption without which no judgement of the expediency or worth of alternative institutions is possible.
The belief that the efficiency of the market order can be judged only in terms of the degree of the achievement of a known hierarchy of particular ends is, however, wholly erroneous. Indeed, since these ends are in their totality not known to anybody, any discussion in such terms is necessarily empty. The discovery procedure which we call competition aims at the closest approach we can achieve by any means known to us to a somewhat more modest aim which is nevertheless highly important: namely a state of affairs in which all that is in fact produced is produced at the lowest possible costs. This means that of that particular combination of commodities and services which will be produced more will be made available than could be done by any other known means; and that in consequence, though the share in that product which the different individuals will get is left to be determined by circumstances nobody can foresee and in this sense to ‘accident’, each will get for the share he wins in the game (which is partly a game of skill and partly a game of chance) as large a real equivalent as can be secured. We allow the individual share to be determined partly by luck in order to make the total to be shared as large as possible.
The utilisation of the spontaneous ordering forces of the market to achieve this kind of optimum, and leaving the determination of the relative shares of the different individuals to what must appear as accident, are inseparable. Only because the market induces every individual to use his unique knowledge of particular opportunities and possibilities for his purposes can an overall order be achieved that uses in its totality the dispersed knowledge which is not accessible as a whole to anyone. The ‘maximisation’ of the total product in the above sense, and its distribution by the market, cannot be separated because it is through the determination of the prices of the factors of production that the overall order of the market is brought about. If incomes were not determined by this pricing of the factors of production, output could not be maximised relative to individual preferences.
This does not preclude, of course, that outside the market government may use distinct means placed at its disposal for the purpose of assisting people who, for one reason or another, cannot through the market earn a minimum income. A society relying on the market order for the efficient use of its resources is likely fairly soon to reach an overall level of wealth which makes it possible for this minimum to be at an adequate level. But it should not be achieved by manipulating the spontaneous order in such a manner as to make the income earned on the market conform to some ideal of ‘distributive justice’. Such efforts will reduce the total in which all can share.
VII. Demarchy and Democracy
This, unfortunately, does not exhaust the neologisms which seem necessary to escape the confusion which dominates current political thought. Another instance of the prevailing confusion of language is the almost universal use of the term ‘democracy’ for a special kind of democracy which is by no means a necessary consequence of the basic ideal originally described by that name. Indeed Aristotle questioned whether this form should even be called ‘democracy’. The appeal of the original ideal has been transferred to the particular form of democracy which now prevails everywhere, although this is very far from corresponding to what the original conception aimed at.
Initially the term ‘democracy’ meant no more than that whatever ultimate power there is should be in the hands of the majority of the people or their representatives. But it said nothing about the extent of that power. It is often mistakenly suggested that any ultimate power must be unlimited. From the demand that the opinion of the majority should prevail it by no means follows that their will on particular matters should be unlimited. Indeed the classical theory of the separation of powers presupposes that the ‘legislation’ which was to be in the hands of a representative assembly should be concerned only with the passing of ‘laws’ (which were presumed to be distinguishable from particular commands by some intrinsic property), and that particular decisions did not become laws (in the sense of nomoi) merely because they emanated from the ‘legislature’. Without this distinction the idea that a separation of powers involved the attribution of particular functions to distinct bodies would have been meaningless and indeed circular. 
If the legislature only can make new law and can do nothing else but make law, whether a particular resolution of that body is valid law must be determinable by a recognisable property of that resolution. Its source alone does not constitute a sufficient criterion of validity.
There can be no doubt that what the great theorists of representative government and of liberal constitutionalism meant by law when they demanded a separation of powers was what we have called nomos. That they spoiled their aim by entrusting to the same representative assemblies also the task of making laws in another sense, namely that of the rules of organisation determining the structure and conduct of government, is another story which we cannot further pursue here. Nor can we further consider the inevitable consequence of an institutional arrangement under which a legislature which is not confined to laying down universal rules of just conduct must be driven by organised interests to use its power of ‘legislation’ to serve particular private ends. All we are here concerned with is that it is not necessary that the supreme authority possesses this sort of power. To limit power does not require that there be another power to limit it. If all power rests on opinion, and opinion recognises no other ultimate power than one that proves its belief in the justice of its actions by committing itself to universal rules (the application of which to particular cases it cannot control), the supreme power loses its authority as soon as it oversteps these limits.
The supreme power thus need not be an unlimited power – it may be a power which loses the indispensable support of opinion as soon as it pronounces anything which does not possess the substantive character of nomos in the sense of a universal rule of just conduct. Just as the Pope is deemed to be infallible only dum ex cathedra loquitur, that is, so long as he lays down dogma and not in his decision of particular matters, so a legislature may be supreme only when it exercises the capacity of legislating in the strict sense of stating the valid nomos. And it can be so limited because there exist objective tests (however difficult they may be to apply in particular instances) by which independent and impartial courts, not concerned with any particular aims of government, can decide whether what the legislature resolves has the character of a nomos or not, and therefore also whether it is binding law. All that is needed is a court of justice which can say whether the acts of the legislature do or do not possess certain formal properties which every valid law must possess. But this court need possess no positive power to issue any commands.
The majority of a representative assembly may thus well be the supreme power and yet not possess unlimited power. If its power is limited to acting as (to revive another Greek term which appealed both to the 17th century English theorists of democracy and to John Stuart Mill) nomothetae, or as the setters of the nomos, without power to issue particular commands, no privilege or discrimination in favour of particular groups which it attempted to make law would have the force of law. This sort of power would simply not exist because whoever exercised supreme power would have to prove the legitimacy of its acts by committing itself to universal rules.
If we want democratic determination not only of the coercive rules which bind the private citizen as well as the government, but also of the administration of the government apparatus, we need some representative body to do the latter. But this body need not and should not be the same as that which lays down the nomos. It should itself be under the nomos laid down by another representative body, which would determine the limits of the power which this body could not alter. Such a governmental or directive (but in the strict sense not legislative) representative body would then indeed be concerned with matters of the will of the majority (i.e. with the achievement of a particular concrete purpose) for the pursuit of which it would employ governmental powers. It would not be concerned with questions of opinion about what was right and wrong. It would be devoted to the satisfaction of concrete foreseeable needs by the use of separate resources set aside for the purpose.
The fathers of liberal constitutionalism were surely right when they thought that in the supreme assemblies concerned with what they regarded as legislation proper, that is, with laying down the nomos, those coalitions of organised interests which they called factions and which we call parties should have no place. Parties are indeed concerned with matters of concrete will, the satisfaction of the particular interest of the people who combine to form them, but legislation proper should express opinion and therefore not be placed in the hands of representatives of particular interests but in the hands of a representative sample of the prevailing opinion, persons who should be secured against all pressure of particular interests.
I have elsewhere suggested a method of electing such a representative body that would make it independent of the organised parties, though they would still remain necessary for the effective democratic conduct of government proper. It requires the election of members for long periods, after which they would not be re-eligible. To make them nevertheless representative of current opinion a representation by age groups might be used: each generation electing once in their lives, say, in their fortieth year, representatives to serve for 15 years and thereafter assured of continued occupation as lay judges. The law-making assembly would then be composed of men and women between 40 and 55 (and thus probably of an average age somewhat lower than the existing representative assemblies!), elected by their contemporaries after they had had the opportunity to prove themselves in ordinary life, and required on election to abandon their private occupations for an honorific position for the rest of their active life.
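The arithmetic of this cohort scheme can be checked in a few lines. The following sketch is illustrative only – nothing in it comes from the text itself, and the function name, years, and parameters are hypothetical – but it confirms that with election at 40 and 15-year terms the assembly, once fully established, always seats the fifteen cohorts aged 40 to 54, one cohort entering and one retiring each year:

```python
# Illustrative sketch (hypothetical names and dates): the age-cohort
# election scheme described above. Each cohort elects in its fortieth
# year; each elected cohort serves a 15-year term.

ELECTION_AGE = 40
TERM_YEARS = 15

def assembly_ages(current_year, first_election_year):
    """Return the current ages of all cohorts still seated in the assembly."""
    ages = []
    for elected in range(first_election_year, current_year + 1):
        age_now = ELECTION_AGE + (current_year - elected)
        # A cohort sits only while its 15-year term is still running.
        if age_now < ELECTION_AGE + TERM_YEARS:
            ages.append(age_now)
    return sorted(ages)

# Once the scheme has run for at least 15 years, the composition is stable:
# cohorts aged 40 through 54, so the average age is 47.
ages = assembly_ages(current_year=2000, first_election_year=1980)
print(ages)
print(sum(ages) / len(ages))
```

The steady state is independent of the starting year: after the first fifteen elections the membership window simply slides forward, which is what makes the body a continuously refreshed sample of opinion rather than the product of any single electoral contest.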
Such a system of election by the contemporaries (who usually are the best judges of a person’s ability) would come nearer to producing that ideal of the political theorists, a senate of wise and honourable men, than any system yet tried. The restriction of the power of such a body to legislation proper would for the first time make possible that real separation of powers which has never yet existed, and with it a true government under the law and an effective rule of law. The governmental or directive assembly, on the other hand, subject to the law laid down by the former, and concerned with the provision of particular services, might well continue to be elected on established party lines.
Such a basic change in existing constitutional arrangements pre-supposes that we finally shed the illusion that the safeguards men once painfully devised to prevent abuse of government power are all unnecessary once that power is placed in the hands of the majority of the people. There is no reason whatever to expect that an omnipotent democratic government will always serve the general rather than particular interests. Democratic government free to benefit particular groups is bound to be dominated by coalitions of organised interests, rather than serve the general interest in the classical sense of ‘common right and justice, excluding all partial or private interests’.
It is greatly to be regretted that the word democracy should have become indissolubly connected with the conception of the unlimited power of the majority on particular matters. But if this is so we need a new word to denote the ideal which democracy originally expressed, the ideal of a rule of the popular opinion on what is just, but not of a popular will concerning whatever concrete measures seem desirable to the coalition of organised interests governing at the moment. If democracy and limited government have become irreconcilable conceptions, we must find a new word for what once might have been called limited democracy. We want the opinion of the demos to be the ultimate authority, but not to allow the naked power of the majority, its kratos, to do rule-less violence to individuals. The majority should then rule (archein) by ‘established standing laws, promulgated and known to the people, and not by extemporary decrees’. We might perhaps describe such a political order by linking demos with archein and call demarchy such a limited government in which the opinion but not the particular will of the people is the highest authority. The particular scheme considered above was meant to suggest one possible way to secure such a demarchy.
If it is insisted upon that democracy must be unlimited government, I do indeed not believe in democracy, but I am and shall remain a profoundly convinced demarchist in the sense indicated. If we can by such a change of the name free ourselves from the errors that have unfortunately come to be so closely associated with the conception of democracy, we might thereby succeed in avoiding the dangers which have plagued democracy from its very beginning and have again and again led to its destruction. It is the problem which arose in the memorable episode of which Xenophon tells us, when the Athenian Assembly wanted to vote the punishment of particular individuals and
‘the great numbers cried out that it was monstrous if the people were to be prevented from doing whatever they wished… Then the Prytanes, stricken with fear, agreed to put the question – all of them except Socrates, the son of Sophroniskus; and he said that in no case would he act except in accordance with the law’.
The passage from Giambattista Vico used as a motto is taken from Opere, ed. G. Ferrari, 2nd edition, Milan, 1854, Vol. V, p. 183.
Cf. my Studies in Philosophy, Politics, and Economics, London and Chicago, 1967, especially Chapters 4, 5 and 6, as well as my lecture ‘Dr Bernard Mandeville’ (The Proceedings of the British Academy, 1966, Vol. LII, London, 1967).
For example, J. A. Schumpeter, History of Economic Analysis, New York, 1954, p. 67, where he speaks of A. A. Cournot and H. von Thünen as the first two authors ‘to visualise the general inter-dependence of all economic quantities and the necessity of representing this cosmos by a system of equations’.
The only passage known to me in which the error, usually only implicit, that ‘order supposes an end’ is explicitly stated in these words occurs, significantly, in the writings of Jeremy Bentham: ‘An Essay on Political Tactics’, first published in Works, ed. Bowring, Vol. II, p. 399.
The idea of the formation of spontaneous or self-determining orders, like the connected idea of evolution, has been developed by the social sciences before it was adopted by the natural sciences and here developed as cybernetics. This is beginning to be seen by the biologists. For example, G. Hardin, Nature and Man’s Fate (1959), Mentor edn., New York, 1961, p. 54: ‘But long before [Claude Bernard, Clerk Maxwell, Walter B. Cannon or Norbert Wiener] Adam Smith had just as clearly used the idea [of cybernetics]. The “invisible hand” that regulates prices to a nicety is clearly this idea. In a free market, says Smith in effect, prices are regulated by negative feedback.’
Thesis must not be confused with thesmos, a Greek term for ‘law’ older than nomos but, at least in classical times, meaning rather the law laid down by a ruler than the impersonal rules of conduct. Thesis, by contrast, means the particular act of setting up an arrangement. It is significant that the ancient Greeks could never make up their minds whether the proper opposite to what was determined by nature (physei) was what was determined nomō or what was determined thesei. On this problem see Chapter 6 of the volume of essays and the lecture mentioned in footnote 2.
The end-independent character of rules of just conduct has been demonstrated clearly by David Hume and most systematically developed by Immanuel Kant. Cf. D. Hume, An Enquiry Concerning the Principles of Morals, in Essays, Moral, Political, and Literary, ed. T. H. Green and T. H. Grose, London, 1875, Vol. II, p. 273: ‘the benefit resulting from [the social virtues of justice and fidelity] is not the consequence of every individual single act; but arises from the whole scheme or system concurred in by the whole, or the greater part of society. General peace and order are the attendants of justice or a general abstinence from the possessions of others: But a particular regard to the particular right of one individual citizen may frequently, considered in itself, be productive of pernicious consequences. The result of the individual act is here, in many instances, directly opposite to that of the whole system of actions; and the former may be extremely hurtful, while the latter is, to the highest degree advantageous.’ See also his A Treatise of Human Nature (same edn.), Vol. II, p. 318: ‘It is evident, that if men were to regulate their conduct by the view of a particular interest, they would involve themselves in endless confusion.’ For I. Kant see the excellent exposition in Mary Gregor, Laws of Freedom, Oxford, 1963, especially pp. 38-42 and 81.
H. Huber, Recht, Staat, und Gesellschaft, Bern, 1954, p. 5: ‘Staatsrecht vergeht, Privatrecht besteht’.
A revealing description of the difference between the law with which the judge is concerned and the law of modern legislation is to be found in an essay by the distinguished American public lawyer P. A. Freund in R. B. Brandt (ed.), Social Justice, Spectrum Books, New York, 1962, p. 94: ‘The judge addresses himself to standards of consistency, equivalence, predictability, the legislator to fair shares, social utility, and equitable distribution’.
The term ‘opinion’ has been most consistently used in this sense by David Hume, particularly in Essays, loc. cit., Vol. I, p. 125: ‘It may be farther said that, though men be much governed by interest, yet even interest itself, and all human affairs, are entirely governed by opinion’; and ibid., p. 110: ‘As force is always on the side of the governed, the governors have nothing to support themselves but opinion. It is therefore on opinion only that government is founded; and this maxim extends to the most despotic military government as well as the most free and popular.’ It seems that this use of the term ‘opinion’ derives from the great political debates of the 17th century; this is at least suggested by the text of a broadside of 1641 with an engraving by Wenceslas Hollar (reproduced as frontispiece to Vol. I of William Haller (ed.), Tracts on Liberty in the Puritan Revolution 1638-1647, New York, 1934) which is headed ‘The World is Ruled and Governed by Opinion’.
The Cartesian foundations of Rousseau’s thinking in these respects are clearly brought out in Robert Derathé, Le rationalisme de J.-J. Rousseau, Paris, 1948.
The extension of knowledge is largely due to persons who transcended these limits, but of those who did many more probably perished or endangered their fellows than added to the common stock of positive knowledge.
John Locke, Essays on the Law of Nature (1676), ed. W. von Leyden, Oxford, 1954, p. 111: ‘By reason … I do not think is meant here that faculty of the understanding which forms trains of thought and deduces proofs, but certain definite principles of action from which spring all virtues and whatever is necessary for the proper moulding of morals … reason does not so much establish and pronounce this law of nature as search for it and discover it…. Neither is reason so much the maker of that law as its interpreter’.
The distinction between what we call here the ‘purposive’ and the ‘rule-governed’ aspects of action is probably the same as Max Weber’s distinction between what he calls zweckrational and wertrational. If this is so it should, however, be clear that hardly any action could be guided by only either the one or the other kind of consideration, but that considerations of the effectiveness of the means according to the rules of cause and effect will normally be combined with considerations of their appropriateness according to the normative rules about the permissibility of the means.
This is a confusion against which the ancient Greeks were protected by their language, since the only word they had to express what we describe as willing, bouleuomai, clearly referred only to particular concrete actions. (M. Pohlenz, Der Hellenische Mensch, Göttingen, 1946, p. 210.)
Cf. Chapter 3 of my Studies in Philosophy, Politics, and Economics, op. cit.
It is the basic mistake of particularistic utilitarianism to assume that rules of just conduct aim at particular concrete ends and must be judged by them. I know of no clearer expression of this fundamental error of constructivist rationalism than the statement by Hastings Rashdall (The Theory of Good and Evil, London, 1948, Vol. I, p. 148) that ‘all moral judgements are ultimately judgements as to the value of ends’. This is precisely what they are not. They do not refer to concrete ends but to kinds of action or, in other words, they are judgements about means based on a presumed probability that a kind of action will produce undesirable effects but are applicable in spite of our factual ignorance in most particular instances of whether they will do so or not.
Cf. W. Shakespeare, Troilus and Cressida, II, 2, 52:
‘But value dwells not in particular will;
It holds its estimate and dignity
As well wherein ‘tis precious of itself
As in the prizer.’
So far as I know these terms have been used by Professor Oakeshott only in his oral teaching but not in any published work. For reasons which will become clear in Section VII, I should have preferred to employ the term nomarchy rather than nomocracy, if the former were not too easily confused with ‘monarchy’.
James Harrington, The Prerogative of Popular Government (1658), in: The Oceana and His Other Works, ed. J. Toland, London, 1771, p. 224.
I now find somewhat misleading the definition of the science of economics as ‘the study of the disposal of scarce means towards the realisation of given ends’, which has been so effectively expounded by Lord Robbins and which I myself long defended. It seems to me appropriate only to that preliminary part of catallactics which consists in the study of what has sometimes been called ‘simple economies’ and to which also Aristotle’s Oeconomica is exclusively devoted: the study of the dispositions of a single household or firm, sometimes described as the economic calculus or the pure logic of choice. (What is now called economics but had better be described as catallactics Aristotle described as chrematistike or the science of wealth.) The reason why Robbins’ widely accepted definition now seems to me to be misleading is that the ends which a catallaxy serves are not given in their totality to anyone, that is, are not known either to any individual participant in the process or to the scientist studying it.
See H. G. Liddell and R. Scott, A Greek-English Lexicon, new edition, Oxford, 1940, s.v. Katallásso.
Aristotle, Politics, IV, iv, 4, 1292a, Loeb, ed. Rackham, Cambridge, Mass., and London, 1950, p. 303: ‘And it would seem a reasonable criticism to say that such a democracy is not a constitution at all; for where the laws do not govern there is no constitution, as the law ought to govern all things while the magistrates control particulars, and we ought to judge this to be constitutional government; if then democracy really is one of the forms of constitution, it is manifest that an organisation of this kind, in which all things are administered by resolutions of the assembly, is not even a democracy in the proper sense, for it is impossible for a voted resolution to be a universal rule’.
Cp. above what is said under ‘Nomos and Thesis’ on the difference between private and public law; and on what follows now also the important work by M. J. C. Vile, Constitutionalism and the Separation of Powers, Clarendon Press, Oxford, 1967.
Cf. Philip Hunton, A Treatise on Monarchy, London, 1643, p. 5, and John Stuart Mill, On Liberty and Considerations of Representative Government, ed. R. B. McCallum, Oxford, 1946, p. 171.
Most recently in ‘The Constitution of a Liberal State’, Il Politico, 1967.
Cf. R. Wollheim, ‘A Paradox in the Theory of Democracy’, in P. Laslett and W. G. Runciman (eds.), Philosophy, Politics, and Society, 2nd series, London, 1962, p. 72: ‘the modern conception of democracy is of a form of government in which no restriction is placed on the governing body’.
John Locke, Second Treatise of Government, sect. 131, ed. P. Laslett, Cambridge, 1960, p. 371.
Xenophon, Hellenica, I, vii, 15, Loeb ed. by C. L. Brownson, Cambridge, Mass., and London, 1918, p. 73.
The Synod of European bishops that took place in Rome last year took up a wide range of topics. Nevertheless, both the meetings and the press coverage of them kept returning to a single theme: the re-evangelization of European culture.
While some people find in this idea a fascinating plan of action, many others shudder at the thought. And little wonder, given the account of this effort offered by the journalists. Does the Pope—the suggestion is there—plan a Catholic reconquest of Europe? And will Europe then fall prey to a new Roman conspiracy? Who is pulling the strings? There are indeed rumors of certain recent Catholic movements, looming more powerful and blacker in popular fantasy than the Jesuits ever did. In The Communist Manifesto Marx declared that Communism was a specter haunting Europe. Now that it has become merely ghostly, many Europeans in search of conspiracy might be swapping Communism for a kind of old-new surrogate, i.e., popery. The question is: whether dream or nightmare, will Christianity, particularly in the guise of Roman Catholicism, actually conquer Europe?
Paradoxically, when it comes to this “new evangelization,” those who are sanguine about it and those who fear it are not neatly divided along lines of membership and non-membership in the Roman Catholic Church. In Eastern Europe, for example, among the victims of Communist rule, including not only non-Catholic Christians but even unbelievers, the values preached in the Gospel—truth, justice, etc.—remained down through the years a vision of possibility to cling to. At least thus far people of all stripes remember with gratitude the important part played by the churches, and especially by the Polish Pope, in the final overthrow of Communism. When Communism was the main enemy, the old rivalry among the various denominations was seen to be largely irrelevant, or at least a kind of side issue. And the Pope himself has never set his idea of a new evangelization in opposition to the Church’s effort at dialogue and reconciliation both among Christian denominations and between Christians and non-Christians.
I myself was privileged to witness some of all this at first hand in late October of 1991, when I was invited by the Pontifical Council for Culture to take part in a symposium of European intellectuals that was meant to offer the bishops of the Synod food for thought. Among the forty people present—almost all of them lay men and women—fewer than ten came from the countries of Western Europe. Most of those present were from the former Soviet Union or from Poland, Hungary, Czechoslovakia, etc. It is interesting for Western Europeans to note that the term “Eastern Europe” was deeply frowned upon by the participants from these latter countries, who insisted instead on “Central Europe.” The very idea of an Eastern Europe, they said, was one more lie of Soviet propaganda, used to justify the Red Army’s artificial division of Europe into East and West. The “Church of silence” could be heard again, and the first thing it had to tell us was that it had never been completely gagged and that we, Western Christians, had too often and for too long been a Church of deafness. And the Pope was clearly eager to see the reintegration of Europe’s Eastern and Central parts.
In addition, we Westerners were reminded of a few basic historic facts. We were reminded, for instance, that Prague is almost as “western” as Berlin, and more so than Vienna or Stockholm. We had to learn once again that it had been the capital of the Holy Roman Empire, and that it harbored Europe’s first university. We were invited to bear in mind that our intellectuals frequently helped to throw people living under Communist rule into despair by playing footsie with Marxist ideology while hallowing, one after another, each new earthly paradise it gave birth to. As a Frenchman, I experienced a vicarious embarrassment about my country’s role in the post-World War I creation of Czechoslovakia and, even worse, Yugoslavia. These completely artificial states, welding together peoples who had either never lived together or had not done so for centuries—people whose religions, languages, histories, and levels of economic and social development were widely disparate—are the result of the shortsightedness of politicians who were by and large my fellow countrymen.
Beyond this, it appeared that the Church may be the only place that exists at present where all European peoples—including even Serbs and Croats—can speak to one another. The most extraordinary thing about the dialogues that took place both in and out of official sessions was not their content, but who took part in them and how. People were exchanging private reminiscences, but at the same time whole peoples were sharing their respective memories. We too often restrict to the material sphere the commandment to share our goods. We have to share with others not only our wealth or technological know-how; we have to share our pasts as well.
What makes this task difficult is that the memories of European peoples are poisoned with the recollections of wrongs done to and suffered by one another. If some way is not found to heal these wounds, they will fester everywhere in Europe and in the former USSR and keep alive the longing for vengeance. An example of this, of course, is the late Yugoslavia, where allegedly “ex”-Communist leaders are reopening old wounds and flattering Serbian nationalism for their own purposes.
Forgiveness, then, turns out to be more than a theme for sermons. In Europe in any case, and likely even in the world at large, forgiving one another is by far the most real and concrete of all political programs. If we want peace, historic wounds must be healed. Can they be? In order to answer this question, we have to realize that forgiveness is basically a religious idea. It begins with faith: we first have to believe, in the teeth of all evidence to the contrary, that reconciliation is possible, that both we and our enemy can change our hearts.
As a Frenchman, once again, I was deeply surprised and at the same time touched by the way in which, for instance, Croatians and Slovaks considered the current friendship between my country and its eastern neighbor, Germany, as an example of what can be achieved between former “hereditary foes,” and as a ground for hope.
Now, it is a historic fact that the reconciliation among France, Germany, and Italy was the work of three Christian statesmen: Konrad Adenauer, Robert Schuman, and Alcide de Gasperi. Nor were these men people who simply happened to be Christians. Their policy, at least in the field of European cooperation, was a direct consequence of their Christian ethics. The first seed of the European Economic Community, i.e., the 1951 European Coal and Steel Community, was not sown for economic reasons only, but—and this was very much in the mind of its father, another Christian, Jean Monnet—in order to prevent competition over the possession of these basic industrial goods from becoming once again the cause of wars. What prevailed was not the will to economic power, but the desire for peace and its condition.
What the Pope and leaders of the Church like Cardinal Ratzinger or Cardinal Lustiger of Paris have in mind when they speak of a new evangelization are realities of this kind: a common healing of memories, reconciliation, and mutual help among European peoples—and certainly not some dark conspiracy aimed at wielding political power or influence.
On the other hand, just as many non-Christians gladly welcome the idea of giving more weight to the values preached in the Gospels, Roman Catholics strongly object to any interpretation of the idea of a “new evangelization” in terms of “conquest” or “reconquista.” The idea arises, rather, from accepting for oneself the obligation of remembrance and repentance—which in turn requires taking a fresh look at history. If there is to be a “new” evangelization, we first have to ask ourselves questions about the “old” one. Now, this first evangelization, in the early centuries of our era, took place slowly and peacefully. It proceeded against the political power of the Roman Empire. Thus referring to the idea of evangelization amounts from the outset to excluding the dream of seizing power. In calling for a new evangelization, the Pope chose as a model precisely what happened before Christianity became the official religion of the Roman Empire, in other words, before it could even be tempted to impose its creed by means of political coercion.
Moreover, evangelization does not mean Christianization. To evangelize, in good English, or, to be exact, in good Greek, amounts to proclaiming the Gospel. How people react is another story. As for the past, historians know very well that the Christianization of Europe was never totally completed. There never was such a thing as a “Christendom” that was coextensive with Europe. History has long given the lie to Novalis’ romantic vision. First, Jewish communities resisted and have continued to do so up to the present time; and among the Christians themselves, many areas of life, public and private, were never really Christianized and remained more or less heathen. The “nominal Christian” is not an invention of modern times.
As for the present, is the idea of new evangelization a sign of the totalitarian character of the Roman Church, if not of Christianity as a whole? Historical instances of totalitarian behavior among Christian believers and/or institutions cannot settle the matter, because such instances cannot be proven to reach to the core of the Christian message or necessarily to follow from it. Still, the question remains: is Christianity totalitarian in nature? The question cannot be brushed aside.
Facing it squarely, I think, would lead to the answer that, to some extent, it is true that even in the purest form we can conceive, Christianity is and remains a totalitarianism of sorts. But it is so in a most Pickwickian sense. It does, to be sure, claim to recapitulate the totality of human experience in time and space, and to pervade humankind in the totality of its dimensions. But at the same time Christianity is anything but a totalitarian power. For it does the contrary of what totalitarian rulers commonly do.
As a rule, totalitarianism endeavors to do away with the individual subject: his thought is replaced by ideology, his speech is replaced by some dialectical variety of Newspeak, his actions are replaced by the automatic development of Progress, History, or class struggle, supervised by the State or the Party—which, thanks to its “hundreds of eyes” (Brecht, Die Massnahme), is supposed to be wiser and more farsighted than any one of us. In this way, classical totalitarianism muffles moral conscience and makes the burden of individual responsibility lighter. And this may be pleasant, as Hitler once suggested to Rauschning.
Christianity, on the other hand, to the extent that it is totalitarian at all, is a totalitarianism of the moral subject. In this paradoxical totalitarianism, the responsibility of the subject is enhanced beyond all limits. No dimension of life can escape it. There is a Christian way of behaving, hence of doing everything. No dimension of my life can dodge the ethical claim, because I am always present in what I do or in what happens to me. I am totally myself: what I have to do should be done by myself and not by anybody else; what I must undergo will happen to me and to nobody else.
Yet there is no such thing as a Christian method, or code, or set of rules that would apply to the whole realm of human life in order to tell us at each step what is the proper way to do things. There is a Christian behavior towards oneself, but Christianity will not tell you whether you should sleep on your back or on your stomach, or with which hand you should bathe yourself. There is a Christian way of playing one’s part on the political scene, but there is no Christian politics, let alone a specifically Christian policy applicable to any given case. There is a Christian attitude towards man as a fellow citizen, but no Christian law.
This can be shown historically. Take an example from the law. In late Antiquity, Roman law was influenced by a few Christian ideas. As a matter of fact, the only point on which this influence can be assessed with some certainty was the legislation on slavery. Apart from that, Christianity left Roman law where it stood. As long as no basic ethical rule was broken, there was no reason for meddling. The same holds true for the world of today: as long as a system of law abides by the basic human rights that bear no exception, there is no reason for interference by the Church.
Such an attitude finds its roots in the founding texts of the New Testament. Everybody knows the celebrated passage distinguishing those things owed respectively to God and Caesar (Matthew 22:21 and parallels). But there is a proviso: Jesus does not in fact draw a precise line between the spheres of God and of Caesar; indeed, there is no such thing as a realm of Caesar, existing as an independent reality, for Caesar himself has to answer before God for what he does.
A second example is perhaps more significant. In the Gospel according to Luke (12:13-15), we are told that someone asked Jesus to compel his brother to share with him what their father had bequeathed to them. Jesus refuses to interfere: he was not sent to act as a justice of the peace. Instead of deciding for or against one of the two brothers, he warns people, in a general way, against cupidity. Jesus displaces the whole issue from the juridical to the moral level. Laying down rules, about inheritance or anything at all, is no part of Jesus’ business. This does not mean that rules should be discarded, or replaced by some enthusiastic effusion that would abolish private property. Technical rules must be kept and even improved and refined. This only means that all existing rules have to measure up to a moral standard. Furthermore, this moral standard can help us find better rules. The above little story nips in the bud the very possibility of a Christian religious law, of a Christian Shari’a. This is all the more striking as—what Luke could not foresee—the rules on inheritance were to become one of the trickiest and most developed areas of Islamic religious law, a nest of case studies for students of legal theory.
Therefore, the modern demand for a separation between the religious realm and the way in which societies organize themselves does scarcely more than remind us of one of the fundamental principles of Christianity. In the course of history, especially in post-Reformation times, Christians sometimes yielded to the temptation to forget this. Modern democratic societies had to refresh their memories by insisting on the separation, and, from time to time, they did so in opposition to the Church. Nevertheless, these societies thereby proved to be perfectly legitimate and faithful heirs of Christianity.
As for our present problem, a re-evangelization of European culture does not mean retracing one’s steps towards a new union of political and religious powers, for the simple reason that such a union never existed.
It would be well at this point to ask a few fundamental questions about the relationship between Christianity and Europe. Obviously Christianity has played, and may continue to play, an important part in European culture—an observation as much harped upon, for good and ill, as it is self-evident. At the very beginning of European history, Christianity was instrumental in the process of integrating newcomers to what was, and remained, the Roman world. Barbarian tribes were simultaneously baptized and settled: sharing a common faith furthered intermarriage and integration. Yet this bare historical fact says nothing about whether what happened was or was not legitimate. As Hume taught us centuries ago, no “ought” can be legitimately deduced from an “is.” So it is a matter of plain fact that Christianity molded what came to be called Europe (whose original name, after all, was “Christendom”), but to say so does not by itself tell us whether that shaping of European culture through the medium of Christian ideas was a good thing or a bad thing to begin with, let alone whether those ideas speak to us now.
Let us put the question in a different way. When we speak of Christianity, and of its importance to European cultural history, we commonly assume that Christianity belongs to European culture, that it is a part of that culture, an element among other elements: e.g., Jewish ethics, Greek democracy and philosophy, pagan sacrality, Roman law and organization, not to mention the customs of the Germanic, Slavic, and Hungarian tribes who were invaders. Why, then, we might ask, should the Christian element be given a place of honor? There might, after all, have been other cultural syntheses than the one that did in fact develop.
But this way of framing the issue is misleading. For while Christianity did contribute its mite to the formation of Europe, it did so in quite a peculiar way. The formative elements, or roots, of European thought are commonly referred to as Greek rationality on the one side and the faith of Israel on the other. For the most part, this story is a “tale of two cities”: Athens and Jerusalem. Is Christianity, then, to be thought of as a third root? Should a third city, Rome, be added to the tale? The answer is that we ought not to think of Rome, and of Christianity—which is far more deeply “Roman” than is commonly surmised—as being some third element in European culture. Nor should we think of it as the synthesis of Athens and Jerusalem in an encompassing whole. Christianity is in fact the common structure of our relationship to both sources.
Ponder for a moment the peculiar character of European cultural history. One way of describing it is as a long series of renaissances. The concept of renaissance came into use in connection with what was believed or expected to take place in late medieval Italy. The rediscovery of the classics was supposed by people like Petrarch to bring to a close what was called (by Petrarch himself) the “dark ages.” Now, the historians have destroyed the legend of the “dark ages” and shown that the renaissance—or as I have called it, renaissances—never actually began or ended. They first spoke of a “renaissance of the twelfth century” (C. H. Haskins), then discovered an earlier one in the ninth century. On the other hand, the series of Italian renaissances was furthered by the French and German classicist movement, and by the nostalgic passion for Greece among the German and English romantic poets, not to speak of the more recent attempts by Nietzsche, and into our time, Heidegger, Leo Strauss, and others.
What is a renaissance? Basically, a new way of looking at old texts or works of art, grounded in a sense of inferiority vis-à-vis what the Ancients achieved in science and the arts. Seeing oneself as a barbarian or decadent results in the determination to go back to the original masterpieces that lay buried under dust or rust, to remove what marred their radiance, and to live up to them. In most cases what was covering the original masterpieces were the remnants of former attempts at retrieving their original meaning. For instance, the Italian renaissance tried to recover the genuine Aristotle. And it did so through a criticism of scholasticism—that is, of a previous reading of Aristotle.
A claim to the legacy of Greece could be found in civilizations that did not view themselves as European or even as having anything in common with Europe, namely, Islam and Byzantium. A wide-ranging effort at translating Greek texts was made by the Arabic-speaking world in the ninth century. Byzantium, for its part, never completely cast off from its moorings in ancient Greece and always kept, at least in educated circles, the classical form of written language.
Yet there was something unique to Europe, which was that, strictly speaking, renaissances occurred there and nowhere else. In particular, there were no renaissances in the Byzantine or in the Islamicized worlds. Byzantium remained proud of still speaking the language of Homer and Plato. The Arabic-speaking world produced a great number of translations, but did not keep the original texts. Since Arabic, as the language of the Koran, therefore of God himself, was considered the most perfect language, it superseded and replaced other languages (Greek, Syriac, Coptic, etc.) that were spoken before the Muslim conquest. Consequently, the Arabic-speaking world did not receive what could not be translated and had to be read in the original, namely, poetry, epics, drama. Europe, unlike Byzantium, never thought of itself as the legitimate heir of classical culture. Its educated circles, who spoke Latin, never shed their feeling of estrangement vis-à-vis the Greeks: to them, ancient culture was irretrievably lost. And unlike the Islamicized world, Europe kept the original texts that testified to its inferiority and never tried to do without them.
Now, this corresponds to a structure that exists in the religious sphere as well. Christianity is grounded in the experience that the people of Israel had with God under the Old Covenant. It claims to be a fresh way of looking at this experience and of synthesizing it in the light of the life, teaching, passion, and resurrection of Jesus. The Christian Bible unites in an inseparable whole the New Testament and what thereby becomes the Old Testament. For Christians, "old" does not mean obsolete. On the contrary, the Old Testament retains its permanent value.
Such an attitude is difficult. It would be far easier simply to do away with the Old Covenant. This is precisely what the heresiarch Marcion, in the second century, proposed to do. The Church Fathers resisted this attempt to part with the Old Testament and chose the more difficult way: keeping it and interpreting it as more or less clearly pointing to its fulfillment in Jesus. Islam, on the other hand, chose to reject the texts of both the Old and New Testaments. According to the Koran, those texts were tampered with by Jews and Christians who were unwilling to admit that they announced the final coming of Muhammad as the seal of all prophecy. Their authentic content, fortunately, is to be read in clear Arabic in the Koran itself, so that Islam can dispense with reading the sacred books of the former revelations.
Thus, a very peculiar attitude towards the past underlies the way in which European culture relates to the sources from which it springs. The dominant pattern is the same for both the Jewish and the Greek sources. European culture always resisted the temptation to absorb into itself what it had inherited from either the Greeks or the Jews—to suck in the content and to throw away the empty husk. It always maintained a lively, even painful, consciousness of its being secondary vis-à-vis classical culture and the Old Covenant. And it could do so because accepting this secondarity stemmed from the deepest layer or, to change metaphors, the peak of its culture, i.e., its religion.
This is the reason why historic Christianity, in the long run, always granted a place to other cultural traditions: paganism survived in law; its mythology enjoyed a series of rebirths in art. Christianity cannot help doing so, for in its innermost structure it is rooted in something that it is not, i.e., Judaism. Thus, Christianity is not one element among others in European culture, but its very form, the form that enables it to remain open to whatever can come from the outside and enrich the hoard of its experiences with the human and the divine.
That is why my own guess for the future is that Christianity will be able to play a positive role on the European stage if, and only if, it gets rid of the temptation to repudiate its Jewish roots. It might be that the deepest issue at stake in the new evangelization has to do with the Church’s attempt at a reconciliation with the Jews.
The idea of a new evangelization, then, should be carefully distinguished from some dream of Christendom, the temptation to seek the conquest of European culture. An unbiased look at what the Catholic Church actually says and does through its official statements—to be distinguished from distorted and/or unauthorized reports—should suffice to dispel fears and resentment on that matter.
Moreover, the idea of such a conquest does not tally with what the Catholic Church has always considered to be its role. At least at the level of principle, a principle it has never eschewed, the Church has always acknowledged both the freedom of temporal affairs to be self-regulating and its own right to keep a critical outlook on them and to assess moral, political, and economic practice from the point of view of ethics.
Finally, endeavoring to make the Christian message more conspicuous is not parochial. We do not have to ask, if we are Christian, what place we can leave to other trends of culture or, if we are not, what place will be left for us. On the contrary, Christianity has enabled the other trends of European culture to remain what they are and to develop. If this culture wants to survive and to keep drawing on its two sources, it should take care to help Christians towards a better understanding of their own cultural mission.