Hillary Versus America: Part VIII — Individual Rights Against the Common Good

The United States of America was the first country in history to be founded, explicitly, on any philosophy, and is still the only country in history to have been founded on the principle of individual rights. It had taken thousands of years for a series of heroes, Aristotle foremost among them — aided by unlikely happenstance upon unlikely happenstance, the unintended consequences of misplaced passions, the printing press, and a mad monk — to wrest the idea of individual rights into being. At the peak of the Enlightenment, in the afterglow of Newton, when humanity was still beaming with new-found pride in its power to reason, when it still seemed possible to organize a society on rational principles, some few among the revolutionary generation of Americans sought to put the theory of individual rights into practice. This idea inspired a rag-tag conglomeration of Britain’s cast-offs to take up arms against the most powerful empire in the world, to fight it, persevere, and win. Yet today, if you were to ask a recent college graduate, one who had paid close attention in her classes at a top-tier school, a superb student, a model, a paragon: “In the interest of justice, before anything else, what should everyone understand about the founding of the United States of America?” she would unhesitatingly answer: “The founders kept slaves, didn’t recognize women as equals, and stole all their land from the first peoples.”

This profound superficiality, which is the rule among the up-and-coming “educated” classes in contemporary American society, is a large part of why, if something doesn’t change, if Hillary is elected president, we are likely fucked. If vacuous ignorance continues to align itself with oligarchic power, there will be no coming back. The tens-of-thousands-of-years-long slog through the muck of human misery, the millennia of the strong preying on the weak, of priests, nobles, and kings pressing their boots on the necks of the men and women they saw below them, like a dark tide that has been out too long, like an ocean sucked flat before a tsunami, will come rushing back, black and inexorable, to drown out all hope.

While today a variety of significant cultural, political, and institutional forces are aligned against individual rights, and the forces that contend to preserve rights are weak and few, the transition from freedom to enslavement is likely to be permanent for deeper reasons. The uniqueness of the American way in human history indicates a basic truth about us: that we do not yet know how to be free. Taken as a whole, humanity does not yet want to be free. Our ancient habit is to submit to hierarchies, to raise up chiefs and presidents, medicine men and priests, to anoint David with oils, to give glory to the emperor, to carry pictures of Chairman Mao, and to elect FDR to four consecutive terms in office. To be free, humanity must learn something new.

Even though they didn’t have the reach for it yet, the American revolutionary generation grasped at political freedom, at this something new. But what does political “freedom” mean, really? When a contemporary politician alludes to “our freedoms” or the like, it sounds like an empty platitude — because it is. Still, just because a word can be parroted meaninglessly doesn’t mean it’s meaningless. In a political context, freedom means: a social organization that secures individual rights. If this were broadly understood, there would be zero chance of Hillary Clinton’s election in November. If it were broadly understood, Donald Trump would never have won the Republican party’s nomination. If it were broadly understood, there would be riots in the streets tomorrow. These realities hint at why the ruling classes have worked so long, so hard, and so effectively, to make sure this isn’t broadly understood. They explain the profound superficiality of our exemplary college graduate.

In fact, the concept of ‘rights’ has been so muddled (people now believe that healthcare and Internet access are “rights” (!)) that it now communicates nothing to a general audience. This is a disaster in the making. Hillary Clinton’s election to the presidency would significantly advance an existential threat to Western civilization, but that threat would come in the form of a sneak attack on the very rights that, now, few Americans understand or appreciate. The greatest treasure of our civilization, the work of generations of geniuses, the inheritance that we are squandering, individual rights, may be stolen from us by a threadbare con.

Since it means less than nothing to a contemporary audience to warn them, “Hillary Clinton is a threat to individual rights!” let us consider the matter indirectly. If you cannot understand what Clinton is against, understand what she is for. And what Hillary Clinton is for (publicly) is: her vision of the “common good.”

What is the “common good”? The “common good” is a vision of what the left carefully calls the “distribution” of goods in a society — plus one additional element, which we will come to shortly. To understand the “common good,” one must first understand goods generally. Education, healthcare, housing, clothing, food, transportation, and entertainment are all goods. Romantic partnerships, friendships, and pets are goods. Goods are everything we spend our lives trying to get and to keep.

In the Western world, it is broadly agreed (for now) that people should be able to decide for themselves what goods to prioritize in their own lives. It is also understood that not every effort to get or to keep a good will be successful. For example, you might want to be a lawyer, but if your LSAT scores are too low, that good will be difficult or impossible for you to get. And people generally accept that not always getting the goods that we want is normal.

Despite the general agreement that goods are, largely, a personal matter, and that, to some degree, we won’t always get the goods we want, most politicians adhere to some vision of the “common good.” This “common good” is supposed both to override the private goods of individuals and to underpin all these private goods. What I mean by this is that politicians tend to believe that certain “distributions” of goods in society are necessary for the “health” of the society. But since goods don’t come from thin air, but have to be produced with thought and effort, and since politicians don’t produce goods themselves, this means that any who subscribe to an idea of the “common good” intend to steal goods from some individuals and hand them to others. This is their only means for having goods “distributed” in accordance with their visions. Thus a vision of the “common good” overrode my own personal good when my income was taxed to support the second Iraq war. I would never have volunteered to support it in any way. Yet, against my will, the government took my goods (my earnings), by force, and used them to buy a few rounds of depleted uranium, or something equally repugnant to me. The implicit justification for this was that the Iraq war was in the “national interest,” which is another way of saying “common good.” So President Bush had a vision of dead Iraqis that he thought was more important than my vision of, e.g., a new pair of cross-country skis, and his vision overrode mine, with the help of all the United States government’s guns.

This is how the “common good” overrides individuals’ private goods. But the “common good,” in theory, only has this overriding privilege because it benefits everyone. Supposedly, maiming and killing Iraqis — after pretending they had something to do with 9/11 — protected our “freedom.” And since individual Americans, like me, could hardly pursue our own private goods without the benefit of our “freedom” — well, you can see why all that maiming and killing had to be done.

Another example of how the “common good” is supposed to underpin each individual’s private good is found in the justification for public education. The theory there is that educated workers produce more goods, which enriches the entire society. Therefore, say proponents of the “common good,” taxing me to provide public education actually benefits me and everyone else; it underpins my successful pursuit of private goods, because it improves the economy and makes more goods available at less cost.

With these examples in mind, we can expand on the definition of “common good” above: The “common good” is a vision of the proper distribution of goods in a society, intended to be realized by force.

Because Hillary “It Takes a Village” Clinton is an enthusiastic proponent of the “common good,” she rejects freedom. A society can be guided by one principle, or by the other, but never by both. A free society must reject, as an organizing principle, any reference to a “common good.”

The necessary opposition between political freedom and any “common good” is not trivial to discover. Our paragon of contemporary college education — with someone’s vision of social justice misting her eyes — will not find it. For us to discover this opposition now, we must examine the founding principles of the United States.

Ceremonially, the United States came into being in July of 1776, with the Declaration of Independence. The Declaration proclaimed the following founding principles:

  • Equality: The Declaration repudiated the notion, central to all prior systems of government, that some men were, either by nature or by divine sanction, “set above” others. In the Declaration, there is no political hierarchy among men: we are all, politically, equal.
  • Unalienable Rights: Rights are the central concept of the Declaration, derived directly from Locke’s philosophy (and, by extension, from Aristotle’s). To “alienate” something, a possession for example, means to separate it from yourself. You “alienate” your dollars when you spend them. You “alienate” your house when you sell it. You “alienate” your old clothes when you give them to a charity. According to the Declaration, you cannot “alienate” a right: rights cannot be sold, traded, or given away. If they are taken away, the taking is always illegitimate and always criminal.
  • Natural Rights: In the Declaration, rights are not just inseparable from the individuals who hold them, they are aspects of human nature, endowed by God when he created humans and created human nature. This means that rights exist before governments, and outside and beyond all governments. Because human nature is the same at all times and in all places, rights are the same in all times and in all places. Rights do not “evolve,” develop, or change in any way. The number of rights cannot increase or decrease. No new rights will ever be discovered (although some rights that have existed all along might come to be recognized), and nothing that was ever a right can ever, later, turn out not to be a right. (It is vital to note here that the author of the Declaration, Thomas Jefferson, was a Deist. When a Deist writes the word “God,” he does not mean what typical churchgoers today mean by that word. Deists were the Enlightenment-era analog of what, in contemporary life, we call “atheists.” Deists believed only nominally in God, as a kind of abstract force that created the laws that govern the cosmos, then let the clockwork mechanism of the cosmos turn out its internal tensions for the rest of eternity. While it’s true that not all of the founders were Deists, and some (Sam Adams, for example) were fervent Christians, it’s also true that Enlightenment Deism was the animating philosophy behind the Declaration, and behind the United States as such. Far from being a religious republic, or a republic founded on or inspired by Christian belief, the United States were the most non-theistic political entity in human history.)
  • Unlimited Rights: The wording of the Declaration is subtle (by contemporary standards). It is easy to miss key implications by passing too breezily through its passages. Take this sentence, for example: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” The word “among” here implies that there are other rights than those listed. In fact, in natural-rights theory, individuals possess an infinite number of rights. Essentially, it holds that individuals have a right to do anything and everything they deem necessary to do in order to preserve and secure their lives. To put this in concrete terms: according to this theory, you have an absolute right to read She-Hulk comics (assuming you do not obtain them by force). You have this right, for example, because if it makes you happy to read She-Hulk, then that happiness contributes to the preservation of your life — if only by contributing to a pleasant evening, which makes it easier to relax and rest well, which makes it easier to get up on time, which makes it easier to get to work on time, which makes it easier to keep producing whatever it is you produce in order to provide for yourself. She-Hulk, by your own judgment, helps you to be productive in the service of your own life. Therefore, you have an absolute right to all the She-Hulk you can obtain through peaceful trade.
  • Rights-Based Government: According to the Declaration, the whole, entire, exhaustive purpose of government is to secure individuals’ rights. There is no legitimate purpose for government beyond this. (And here we begin to see how radically individual rights theory repudiates the notion of a “common good.”)
  • Consent of the Governed: In the Declaration, because the only (legitimate) purpose of government is to secure individuals’ rights, and because it is an observable fact that governments do not always limit themselves to this one legitimate function, there needs to be a way for the legitimacy of a government to be decided. The answer given in the Declaration is that governments are legitimate only as long as they keep the consent of the governed. This consent can be withdrawn at any time, which leads to the next principle of the Declaration.
  • The Right of Revolution: Since the people can withdraw their consent at any time, and since, in practice, governments tend to prefer not to be dissolved and replaced, the Declaration proclaims the people’s right to “alter or abolish” a government whenever they believe that government is failing to secure their rights. The clear implication is that the people have the right to do this violently, if necessary. As Jefferson wrote elsewhere:

    what country can preserve it’s liberties if their rulers are not warned from time to time that their people preserve the spirit of resistance? let them take arms. the remedy is to set them right as to facts, pardon & pacify them. what signify a few lives lost in a century or two? the tree of liberty must be refreshed from time to time with the blood of patriots & tyrants. it is it’s natural manure.

  • Popular Sovereignty: Consistent with what Locke had argued in his Second Treatise of Government, the Declaration’s view of governing power is that it is power derived from the people. In other words, whatever power and authority a government legitimately wields, it wields as a loan from the people. As such, it is the people (individuals in the aggregate) who are sovereign, not the government, not any king, legislature, or council. Popular sovereignty would be the principle underpinning the United States’ Constitution.

Taken all together, these principles of the Declaration construct government as the servant of the people, a diametric reversal of thousands of years of human history. Consider now: if the government is really the servant of the people (meaning, not merely in rhetoric, but in practice), what will it be permitted to do? Some people, for example, might want the government to subsidize corn production, because they truly believe these subsidies will benefit the country as a whole (it being a mere coincidence, of course, that those with this insight farm corn). But surely other people would rather keep their wealth than give it to corn farmers. Still others might want the same wealth that might be expropriated for corn subsidies to subsidize college tuition instead. If the corn crowd manages to capture the government’s power and turn it toward its vision of the “common good,” everyone else loses.

Since there are contingents for and against just about any conceivable use of government power, it is obviously incoherent to claim that government is the servant of the people as a whole, if it is going to be common practice for one contingent to get its wishes today, another tomorrow, and neither the day after. There are only three possible solutions to this dilemma: One, to give up the notion that government is, or ought to be, the servant of the people. Two, to school the people to assent to one set of common values (a universal “common good”), so that everyone assents to the forces brought to bear in service of those values. Three, to radically limit the use of government power, so that no “interest group” can ever use it to steal goods from some and hand them to others. (Some readers, probably hailing from what I have elsewhere called the “Blue team,” might want there to be a fourth option: to say that government serves the people, even if it takes goods from some needy people to give to others who are no more needy, as long as the winners and losers are decided upon democratically. Think about that for a bit. Realize it’s a dishonest dodge. Move on.)

The framers of the United States’ Constitution chose the third option. Their implementation of the Declaration’s principle of popular sovereignty was the constitutional precept of enumerated powers. This basic principle of the Constitution means that the federal government has only the powers that are explicitly granted to it (enumerated) in the text. If a power is not explicitly granted to the federal government, from the limitless pool of powers (rights) that each individual possesses by nature, then the federal government, in theory, has no right to exercise that power.

Consider the implications of this for the “common good.” If government is only granted a meager quiver of powers, just enough to secure individual rights and no more, where will it find the power to expropriate goods from some citizens and hand them to others? Nowhere. A limited government is too fine an instrument to be wielded for the “common good.” That requires a club, something blunt enough and broad enough to “get things done.” This indicates why freedom, individual rights, limited government, and prosperity like mankind had never before seen came up together in history. It also indicates why slavery, contingent and ever-changing “rights,” tyrannical government, and mass poverty characterized all previous social orders. This is an either-or choice. There is no way around it.

Now, no one could reasonably argue that the federal government has, in fact, kept within the size and scope suggested by the theory outlined above. (One could argue instead that the theory is inaccurate, but that would be a losing case.) Not only do we have corn subsidies, we have federal bureaucrats telling Americans what plants they can smoke, how many inches from the ground their hand-railings have to be, how prepared their restaurants must be to accommodate miniature horses, how much gasoline their trucks are allowed to burn, what wage they are allowed to work for, what parts they are allowed to have in their rifles, where they can keep their retirement savings, how much money they are allowed to put in a bank account (without being investigated), how much money they are allowed to withdraw from a bank account (without being investigated), what patterns of deposit and withdrawal they are allowed to follow (without being investigated), and, perhaps most importantly, what percentage of fat content is minimally permissible in a processed cheese spread.

But maybe it is all for the best? Maybe the blurring of the federal government’s once-narrow boundary lines has enabled us to pursue a “common good” that’s worthwhile? Maybe all this change has not been corruption, but progress?

No one really cares, in the abstract, whether the United States’ government adheres to the principles of its founding. What people care about, if they care at all, is whether the government does good — or, perhaps, allows good to be freely done. If the principles of the founding were good, right, and true, and they have been abandoned, that is a matter for concern, and for remedial action. But if they were no good in the first place, then their apparent abandonment is hardly worth noting.

The upcoming election is a referendum on these questions. Put directly: A vote for Hillary Clinton says: “Yes, these changes are for the best. Yes, there’s a common good worth pursuing. Yes, the changes so far have been progress.”

But the truth is the opposite: These changes are destroying individual rights. There is no such thing as a common good, because the notion is incoherent. What’s happened so far has not been progress, but a return to the norm, a regression to the mean of human history: slavery and tyranny, justified by mystical visions. If Hillary is elected, she will, with the full support of the ruling class, slyly prepare Americans to be ruled, cowed, enslaved, degraded, and impoverished. She will do this superbly well. She will do this in the name of a “common good,” and I think she will sincerely believe that she is making the world better. She will be wrong.

People reasonably fear Donald Trump as president. His principles are incoherent. He is brash, ignorant, and oblivious to his limitations. Although they are not, I think, representative of his support, racists and other savages do throng to him. Although I believe he does love America, in some hazy way, and all that it means and stands for, he will not advance the cause of freedom. He will not restore the promise of the Enlightenment. But Trump’s haphazard vision is cast perpendicular to the arrow of history. If elected, his notion of the common good will go nowhere, as institutions and inertia will resist him. This will give the few of us who have some sense a little time to act. Maybe then the death of Western civilization can be averted.

Hillary Versus America: Part VII — The West at Noon

Before we begin, please take a few minutes to put yourself into the mindset of a medieval monk. Listening to as much of this music as you can sit through is the best way I know of to do that on short notice:

https://www.youtube.com/watch?v=jGfLUXtARjM

When I hear this, I can imagine waking long before dawn to begin a day that would be just like the ten thousand days that came before it. I pray; read approved works, mostly the Bible; read some more; toil in a field; sing hymns; eat bland food; toil; pray; read; sing; sleep briefly, and begin again, in the dark.

The music is, I think, unquestionably beautiful: simple, and pure. But I can’t make it through more than a few minutes, unless as background music for something more active. It’s too monotonous. For some, simplicity, rigidity, and repetitiousness are the hallmarks, not only of beautiful music, but also of a good life, well lived. Not for me.

Although it’s not to my taste, this kind of life makes sense, as an ideal, if you can think like a medieval monk. Because monks Know the Truth. If you already know everything that’s important to know, there’s no need to explore, to learn, or to grow. Perfection is a cycle, orbiting around a central truth, like the sun orbiting around the earth, like the earth turning through its seasons.

In the last installment, that peculiar monk, Thomas Aquinas, had established an unstable symbiosis between Athens and Jerusalem, between reason and faith. His balance of the West’s antipodes would never come to rest. In some times and places, faith would rise. In others, reason would be resurgent. Although reason never, ever reached a zenith, it also never fell below the West’s horizon again.

At the outset, I said that the values distinctive to Western civilization, the values that determine Western civilization’s worth, are individual rights and reason. Aristotle brought the first systematic understanding of reason to the West. Aquinas later carved out a space for individuals to exercise their consciences, even against the wishes of “legitimate” authorities. But, hundreds of years after Aquinas’s death, even after the Renaissance explosion of cultural, economic, and intellectual productivity, the idea of individual rights had never been given voice. It would be a long and meandering path leading from Aristotle, through Aquinas, to America — but we can retrace it in outline.

For centuries after Aquinas lived, there was only one Christian church, headed in Rome by the Vicar of Christ. There were recurring power struggles between the papacy and the secular authorities, but the church held the high ground. This was because if any church-state conflict were to devolve from words to swords, the men fighting knew that loyalty to a king might win them treasures on earth, but loyalty to the church would earn them treasures in heaven. And, also, if a man is forced to choose between keeping his body together and keeping his soul together, he will tend to prefer the latter (providing, of course, he believes in souls).

If this was not enough, another way the church maintained its grip on spiritual authority was to establish itself as the intermediary between men and God. In the medieval church, priests might be able to read the word of God, but common men, and most nobles, were illiterate. Even if they had been literate in their native tongues, they would not have been able to read the Bible as it existed then: either in archives of the original Hebrew, Aramaic, and Greek, or in churches, where priests used excerpts from Latin translations.

In the 1450s, Johannes Gutenberg introduced the printing press to Europe, and began printing one of these Latin translations of the Bible. This was a technological revolution that would dovetail into a spiritual one. Before the introduction of the printing press, manuscripts were copied longhand, a painstaking process that put severe physical limits on the volume of texts that could be copied and spread throughout Europe.

Because copies of the Bible, and of all texts, were in such short supply before Gutenberg’s revolution, it was not only culturally impossible for anyone to challenge the Catholics’ priestly monopoly on the Bible, it was technically impossible as well. (I note that this is not the first time that the changing limits of technology had a powerful effect on the nature of human society. Consider that, before the invention of agriculture, for example, it was impossible for enough food to be set aside to allow large population centers to form and stabilize. And until large population centers could form and stabilize, it was impossible for humans to diversify their productive (or unproductive) activities through a division of labor. In other words: granaries make priests possible.)

But after Gutenberg’s revolution, all bets were off. For centuries, the Catholic church had been the undisputed moral authority for all of Europe. For centuries, the church had been teaching that the individual conscience was the highest authority in practical life. For centuries, this teaching, even though it concerned how everyday life should be practiced, was not practiced in everyday life. Aquinas’s teaching about the inviolability of the individual conscience presided over a church which presided over the Spanish Inquisition. Victims of the Inquisition, whose confessions of “heresy” had been prepared under torture, but who later refused to repent for their “heresies,” were burned alive, and conscious, at the stake. (Apparently, if you repented for your heresies, the Inquisition would do you the kindness of strangling you or breaking your neck before burning you at the stake.)

And then Martin Luther happened. In 1517, Luther published a list of disputes with Catholic church doctrine. This event is recognized as the beginning of the Reformation, the breaking-up of the Christian church from the one Catholic church, to many diverse congregations of believers, each promulgating their own interpretation of God’s will and word. A complex cultural interaction — among Gutenberg’s printing technology, Luther and other reformers’ doctrinal challenges to the Catholic monopoly, the expansion of literacy, Aquinas’s idea of the sovereignty of the individual conscience, the pre-existing political divisions in England and in Europe, and the continuing influence of Aristotle on learned men — fired an intellectual crucible. In the end, the molten mixture would solidify into the precursors of individualism as a political philosophy and experimental science as an expansion of the Aristotelian method.

Luther rejected the idea that a priesthood should control and mediate God’s relationship with individual men. For Luther, building upon the idea of the sovereign individual conscience, each man was to be his own priest, reading the word of God directly, and deciding for himself, through his conscience, what God’s word meant. There was no room for an authoritarian Church in this conception of man’s relationship with God. Churches existed to provide guidance and fellowship, not to step between man and his God. This Lutheran doctrine is known as sola scriptura, “by scripture alone,” and means that Christian doctrine should come only from the Bible, from each conscience-guided individual’s reading of the Bible, not from thick books of Catholic theology or the pronouncements of popes.

Luther’s ideas spread like dank memes through Twitter, and they carried world-changing implications. First, if every man would find his own, personal, relationship with God through the study of scripture, he needed to be able to read scripture. Protestantism thus melded with the results of Gutenberg’s printing technology to inspire and enable a massive increase in literacy. Second, if every man was supposed to develop his own relationship with God, then, by implication, every individual was worthy of having a personal relationship with the creator of the universe. Think about that. Under Catholic rule, God had One Order for the world, which he communicated through his vicars; everyone had a fixed place in that Order, assigned to them at birth. But if you are worthy to forge your own relationship with the author of all things, then who can tell you what your place is, but God himself? Protestantism thus prepared the way for individualism by honoring and sanctifying the individual in ways totally new and alien to prior belief. Third, increased literacy had secondary effects. It created general readers: readers of bills, of accounting books, of newspapers, of pamphlets; literacy enabled the beginnings of an information economy, and concomitant wide-reaching expansions of economic activity, of the division of labor, and, ultimately, of wealth.

The massive, disruptive, liberating, chaotic, and deadly change catalyzed by Protestantism broadcast this message to everyone in Europe who was prepared to hear it: “IDEAS ARE IMPORTANT!” And because of the beginnings of mass literacy, there were many, many more minds prepared to hear (or to read) this message than there ever could have been before Gutenberg and Luther. How could any educated person fail to notice the terrible and awesome power of ideas in a world scarred and broken by wars — over ideas?

The Hanging by Jacques Callot

The Thirty Years’ War, which began as a Protestant-Catholic conflict, would eventually kill 8 million people. One soldier in this war over ideas was the philosopher René Descartes, who later wrote in his Discourse on the Method that, during his tour of duty, he had begun to conceive of a new approach to knowledge. Descartes’ work would become the foundation of modern philosophy, supposedly rejecting Aristotle in favor of a fresh way of looking at the world and deriving knowledge from it. In fact, Descartes, and the most important philosophers who came after him and shared his project, did not so much reject Aristotle as they rejected Scholasticism, the blend of Aristotelian philosophy and Catholic doctrine that had been the foundation of higher education in Europe, by Descartes’ time, for hundreds of years. They saw Scholastic thought as stale, a dead end. But the proliferation of texts in the late 16th- and 17th-Centuries, the era of Shakespeare, Descartes, Hobbes, and Francis Bacon, had inaugurated an almost feverish exchange of ideas, and this happened against a backdrop of deadly ideological conflict that underscored just how powerful ideas could be. Ambitious men, men ambitious enough to want to change the Western world, perhaps men weary of war, began looking for new ways to think about the world and man’s place in it. It was time for Aquinas’s unstable balance to shift again, and for reason to rise from the very ashes of faith’s ravages.

The man who seems most singly responsible for this shift in the Western balance was Francis Bacon. Bacon was a wily one. Like many philosophers, he seems to have played at the game of esoteric writing, meaning he wrote with a double-meaning: one for his intended audience, and one for everyone else. This was necessary, or at least prudent, because what Bacon wanted to say would not have been pleasing to many Anglicans. What he wanted to say was: “Faith has held Europe in a stranglehold for more than a thousand years. We suffer, fight, starve, bleed, and die, all more than we need to, if indeed we need to at all. As long as men can claim to see different invisible truths, and then proceed to kill each other over these self-conjured ghosts, we will never have peace. The times are such that there are enough of us, now — who are well read, who understand our Aristotle, who know where knowledge really comes from — we can change the world. Let faith wane in influence as reason waxes. To make this possible, let’s take Aristotle’s work a step further: let’s make the method of reason even easier to follow. Let’s convince men to focus their intelligence on the natural world, not the spiritual. Let’s teach man science.”

Since he could not say this openly, and since he was a wily philosopher (but I repeat myself), what Bacon actually said was:

The greatest error of all the rest is the mistaking or misplacing of the last or farthest end of knowledge: for men have entered into a desire of learning and knowledge, sometimes upon a natural curiosity and inquisitive appetite; sometimes to entertain their minds with variety and delight; sometimes for ornament and reputation; and sometimes to enable them to victory of wit and contradiction; and most times for lucre and profession; and seldom sincerely to give a true account of their gift of reason, to the benefit and use of men: as if there were sought in knowledge a couch whereupon to rest a searching and restless spirit; or a tarrasse, for a wandering and variable mind to walk up and down with a fair prospect; or a tower of state, for a proud mind to raise itself upon; or a fort or commanding ground, for strife and contention; or a shop, for profit or sale; and not a rich storehouse, for the glory of the Creator and the relief of man’s estate.

This was masterful rhetoric, perfectly pitched to its audience: the authorities of the Christian West. Bacon wanted freedom of scientific inquiry, for experimentalists using the new methods he pioneered to be able to investigate nature without having to account, to a skeptical religious authority, for their every unsettling discovery. He knew that Christian authorities praised charity and service to the poor. He knew they condemned pride. He thus presents free scientific inquiry (here and elsewhere) not as a means of self-aggrandizement, but as a means to the “relief of man’s estate.” Whose “estate” needed relief? Kings’, lords’, and bishops’? No. The poor. They needed food, shelter, medicine, a means to a more humane existence. How could any Christian say no?

Bacon’s rhetoric worked. His method spread. Scientific inquiry proceeded in the Western world at an ever-quickening pace. As discovery piled upon discovery, as techniques of food production and the manufacture of goods consequently improved, life in Europe became progressively easier. Man’s estate began, incrementally, slowly, to be relieved. Literacy, numeracy, and the beginnings of a scientific understanding of the world dispersed from the small circle of philosophers and other educated elites to the wider culture. And the wider culture reflected on these developments, and slowly, Europe began to think: maybe the human mind is something noble after all? Maybe man is not merely a poor, miserable sinner: fallen, corrupt, pitiable, and worthless without God’s grace? Maybe whatever is meant by “divine” is something in man?

However far reason and science progressed, by the late 17th Century, the Christian ethos still dominated all of Europe. The relief of man’s estate was all well and good, but man was still corrupt and sinful, still needed the firm hand of God’s vicars and officers and kings to guide him toward right conduct. And there were mysteries that this new science would never unravel, gaps in man’s mortal understanding as wide as chasms. All one had to do was look up into the vastness of a night sky, where untouchable celestial bodies — bright, beautiful, cold, distant, and perfect — moved according to God’s immutable and impenetrable will.

And then Newton happened.

Contemporary man has to stretch herself until it hurts, and maybe even further than that, to even begin to understand how big a deal Newton’s law of universal gravitation was. When Newton published his findings in 1687, it was like some Herculean hero had taken the sky — from, not Atlas’s shoulders, but God’s hands — and put it in a book that any man could carry. In metaphor, in symbol, Newton had brought heaven to earth through the power of human reason. In metaphor, Newton had raised man up to stand eye-to-eye with God.

The poet Alexander Pope wrote this epitaph:

NATURE and Nature’s Laws lay hid in Night:
God said, “Let Newton be!” and all was light.

The intellectual world of Western civilization came alight, burning incandescently for the next hundred years. This was the Enlightenment. This was mankind’s high noon.

Almost simultaneously with Newton’s great discoveries, John Locke published his Second Treatise of Government. He argued that every individual was sovereign, and that governments, or communities of any kind, existed to serve individuals, to secure opportunities for individuals to pursue the goals that furthered their own lives.

And here is that music coming again. Locke sketched in the final note of the triad: Man’s epistemological independence from Aristotle, his (partial) moral independence from Aquinas, and, now, at long last, his political independence. Enlightenment man did not exist to serve the glory of the state or the church, he existed, for the first time, if only implicitly, for his own sake.

Now the West had reason and individual rights. In a few decades, the chord Locke had sketched would sound out for the first time, as a gunshot, heard ’round the world.

When Beethoven heard, not too long after the American Revolution, that Napoleon Bonaparte was bringing down the false authority of kings and nobles, he dedicated his work-in-progress Symphony No. 3 to him. Napoleon later betrayed human freedom by declaring himself Emperor, and so Beethoven scratched his name off of the dedication page of this symphony. But Beethoven’s Third still records, better than any gunshot, what it sounds like when man breaks tens of thousands of years of chains, finds his nobility, sees the horizon open up, and knows the heavens, even, are his. To my ears at least, no monk’s hymns can compare.

Hillary Versus America: Part VI — The West Rises

It’s been said that “[s]ome men are born posthumously.” Of no one is this more true than of Aristotle — who hasn’t been born even yet.

Aristotle’s logic, his gift-wrapping of the method of reason for all of humanity, was eventually reborn in Europe because it was uniquely useful. Its usefulness, and that of Aristotle’s broader philosophy, fascinated a series of scholars. The two most important were Ibn Rushd, an Islamic philosopher born in what is now Spain, and Thomas Aquinas, the greatest philosopher and theologian in the history of the Catholic church.

Rushd and other Islamic philosophers had to spend an inordinate amount of time and energy convincing the established authorities of their faith that philosophy was not heresy. They never quite succeeded. As a result, Islamic philosophy receded from its high tide with Rushd’s death, and the tide is still — far — out. Aquinas, for a complex of reasons, had more success. He had so much success, in fact, that his thinking became the standard philosophy and theology underpinning Catholicism.

In an earlier post, I said, “the history of Western civilization has sometimes been understood as the story of the conflict and cooperation between Athens (Greek philosophy) and Jerusalem (Christianity).” Thomas Aquinas is the single most important point of synthesis between Athens and Jerusalem.

But Aquinas was unoriginal. He habitually referred to Aristotle as “The Philosopher,” because for him, there was no one else: Aristotle was the beginning, middle, and end of philosophy. What Aquinas added to Western thought was not novel philosophy, but a synthesis of Aristotelian philosophy with Catholic belief.

Aquinas would almost certainly have failed in this task — as reason and faith go together like ice cream and ground-up glass — but for the fact that he was a world-shaking genius. Incredibly, almost absurdly good at what he did, Aquinas managed to find what many learned Catholics, to this day, consider a “golden mean” between reason and faith.

In essence, Aquinas argued that reason was supreme and that faith was secondary; in any (apparent) conflict between the two, reason must prevail. To me, this is a remarkable view for a lone scholar-monk to have promulgated in the faith-dominated milieu of 13th-Century Italy. The most striking example of Aquinas’s deference to reason (and, by extension, to Aristotle), which I have at hand, is his argument for the independence and inviolability of the individual conscience.

In his Summa Theologica, his most famous work of philosophy and theology, Aquinas considers the question of whether one is bound to follow one’s conscience, even in cases when, in fact, conscience is directing a wrong action. For example, suppose I believe I should vote for Hillary Clinton, because I believe that her stances on the issues would make for better policies than those of Trump. Further suppose that, due to factors I had not considered, my analysis is flawed, and Hillary Clinton’s policies would, in fact, lead to some disastrous and evil result. Aquinas argues that I should follow my conscience; in other words, I should follow my best reasoning and understanding of what action I should take, even though I am wrong. (Keeping in mind: at the time, I don’t know that I am wrong.) (The opposing view would be that if some authority (for example, Thomas Fuller, blogger) told me I should not vote for Hillary because she’s an apocalypse in a pantsuit, then if the authority were legitimate, it would be moral for me to follow the authority, and thereby act against the dictates of my own conscience.) What’s especially interesting about Aquinas’s argument is that it survived and thrived in an intellectual environment which placed great value on authority.

Catholics are very big on authority. Their claim to supremacy among Christian faiths is based on the notion that Peter was the first Bishop of Rome, that the popes have been his successors in this office, and that Peter was hand-picked by Jesus to be his vicar, or stand-in, on earth. To the Catholic mind, then, to reject the authority of the pope is substantively indistinguishable from rejecting the authority of God himself.

In effect, Aquinas declared: the supreme authority each of us must follow at all times is our own conscience. And for Aquinas, this meant: our own reasoning. Even if it turned out that a person’s conscience directed her to act against God’s will, it would make no difference. God, argued Aquinas, would prefer that she use the tool he gave her, a reasoning mind, rather than blindly follow authorities she cannot know are more right than her own judgment.

Aquinas’s synthesis of Aristotelian reason and Catholic faith marks the exact moment of Aristotle’s rebirth in the West. It is no coincidence at all that the Renaissance followed. And this is the moment when Western civilization, that unstable emulsion of reason and faith, truly came into its own. It was a remarkable moment in human history, not least because it was paradoxical.

On a surface reading, it seems that reason and faith must be locked in perpetual conflict, because both reason and faith claim to be means to knowledge. And if there are two distinct means to know, then there might come points where the conclusions arrived at by the former means conflict with the conclusions arrived at by the latter. And if the two means of knowledge conflict, if both are equally valid, how can the conflict ever be resolved? Aquinas resolves the paradox by giving supremacy to reason (in practice) and to faith (in theory). And since we live in practice, not in theory, this means: Aquinas gave supremacy to reason, full stop.

In most other systems of faith, I don’t believe this synthesis could have lasted. But Christianity had, from its inception, been focused on the salvation of the individual soul. Consequently, the theory — that, in practice, the individual conscience must be the supreme authority in every life — makes sense.

Now there is some music in this. Aristotle first argued that the senses were the root of knowledge, understanding, and wisdom. But where are the senses? Does the polis have eyes? I do not mean to ask figuratively, in metaphor or synecdoche, but literally: Does the polis have eyes? Does the polis have ears? Hands? A mind? Reason? No.

Individuals have senses; individuals have sense; and, Aquinas added, individuals have conscience.

The triad has two notes, now; it’s incomplete: epistemological independence from Aristotle, an unstable moral independence from Aquinas. In the next installment, or perhaps the one after, we will hear the chord sound for the first time in human history, if still imperfectly.


Hillary Versus America: Part V — The Dawn of the West

Imagine the recent Olympic Games had gone differently. In the 100-meter final, Usain Bolt notices a stranger at the starting blocks one lane over from his. He puzzles for a moment, shrugs it off, and takes position. The starting gun fires. Bolt runs superbly to a 9.81, but, to his astonishment, crosses the line behind the stranger. Not only that, but the stranger has managed to cross the finish line, complete a full lap around the track, and cross the finish line a second time, all in 9 seconds flat.

In the field of ideas, someone like Einstein is Usain Bolt — a stand-out superstar, someone whose achievements so far outstrip even elite competitors’ that there is hardly competition to be had at all. The stranger? He is someone like Aristotle.

Aristotle’s achievements are so outsized that he seems like a comic-book hero, one written by a teenager with no sense of proportion. To recount them plainly is to invite endless caviling. This is because no one such as Aristotle is allowed — in the contemporary mind, steeped in its characteristically egalitarian prejudices — to exist. (In my view, Aristotle has exactly two peers in the entire history of human genius: Homer (whose merits and accomplishments are beyond our scope here), and the genius-of-all geniuses, his or her name lost in pre-history, who invented language.)

If we refuse to get bogged down in academic trivialities, refuse to make much of distinctions that make no difference, and refuse to give in to the egalitarian prejudices of the day, an honest list of Aristotle’s more astounding accomplishments might go like this:

  • He invented physics.
  • He invented geology.
  • He invented biology.
  • He invented taxonomy.
  • He invented psychology.
  • He invented science itself.
  • He invented ethics.
  • He invented political philosophy.
  • He invented rhetoric.
  • He invented metaphysics.
  • He invented epistemology.
  • He invented logic.

Take a moment — take several, long moments — to let that sink in.

Of course it is true, for example, that the sophists were teaching rhetoric before Aristotle invented it, that Plato had grand, philosophical schemes for governing the polis before Aristotle invented political philosophy, and that Thales et al. were investigating the natural world before Aristotle invented science. But all of these observations — if they are offered to diminish Aristotle’s unique genius and unparalleled achievements — entirely miss their mark.

They miss because Aristotle’s innovations were, above all and in essence, innovations of method. While Aristotle does indeed seem to have been the first person to ask many important questions, it’s not so much what he asked as the way he asked it that distinguished his thought. With Aristotle, many important questions were asked, for the first time, in the only right way: with implicit or explicit reference to an explicitly defined and validated method.

The keys to Aristotle’s uniquely valuable method are in his metaphysics, epistemology, and logic. For Plato, there had been two worlds: the higher world of ideas and the lower world of the senses. Reasoning was a quasi-mystical rite that allowed one’s consciousness to ascend from the dark cave of sensory illusion to the open, sunlit horizons of the abstract True Good. For Aristotle, in contrast, reasoning was rooted in sensory experience; that which we reason about is the world we encounter by means of our eyes, ears, and hands. For Aristotle, then, there was just one world, a world individuals discovered with their senses and could come to understand through rigorous, methodical, logical reasoning.

For reasons that are unknown to history (or perhaps just unknown to me), most of Aristotle’s writing did not survive the collapse of Classical civilization in Europe. The monks who kept the flame of learning alight through Europe’s Dark and Middle ages had earlier and more extensive access to Plato’s philosophy, and the philosophies of his followers, than they did to Aristotle’s. (This surely suited them, as Platonism might as well have been tailor-made to serve as the official Classical philosophy of Christendom.) But although the precipitous decline in commerce which characterized medieval Europe included a precipitous decline in intellectual commerce, the fact that Europe was no longer trading in Aristotle’s philosophy did not mean that it was lost.

After the fall of Rome and Classical civilization, as the new Western civilization struggled in its infancy, Islamic civilization was just catching its stride. And, as civilizations on the upswing tend to do, Islamic civilization found itself reading Aristotle. By the late 1100s C.E., the best Aristotle scholarship in the world, perhaps the only Aristotle scholarship in the world, was being done in Arabic by men like Ibn Rushd.

Centuries after Aristotle’s time, Western civilization was in the midst of what would later be called its Renaissance. The word “Renaissance” means “rebirth.” This period, which began in the 1300s C.E., was a period of furious cultural transformation in Europe. But what was being reborn, and what was driving this transformation? Generally, it was the values and philosophy of Classical civilization that were being reborn, but, above all, and in essence, the Renaissance was the second coming of Aristotle.

Consider this painting, Raphael’s The School of Athens:

The School of Athens by Raphael (public domain, via vatican.va)

The central figures are Plato, on our left, and Aristotle, on our right. By 1509, when this painting was begun, the Renaissance was well underway. Raphael is recognizing and honoring the central figures of Classical civilization, giving them credit for what their thinking contributed to the rebirth and renewal that characterized Italy in the 1500s. By placing them as he does in this great painting, Raphael is saying: “Look around you at the wealth and power of our civilization. This wealth and power comes as a gift from these two men’s hands.”

And now look at those hands. Notice that Plato’s hand has one finger extended upward. This is Raphael recognizing that Plato found the root of reality in the otherworldly, in heaven above, in the “higher” realm of ideas. But where is Aristotle’s hand? It is spread out, open, taking in the world before him, and, not coincidentally, taking us in as well, as we stand in front of Aristotle, viewing the painting. Raphael knew it: Aristotle’s philosophy was a philosophy for living on earth, a philosophy of the senses, of the hands-on, the practical. (Raphael, who took ideas and realized them through rigorous manual labor, likely found a kindred spirit in Aristotle.)

What Aristotle did by inventing formal logic was to take the essence of valid thought, logic, and capture it in a concrete method; he made logic a practical art. He took all the formerly mysterious things that successful minds did whenever they succeeded in knowing reality, and distilled these down to a recipe that anyone could follow. He made logic — which in its most developed forms had been an aristocratic skill, practiced with difficulty by men like Plato, men with the wealth and leisure to refine their thinking through thousands of hours of impractical conversation — accessible to everyone.

If I were to pick just one picture to illustrate just how important Aristotle is to Western civilization, it would be The School of Athens. But I am going to pick two pictures. The second is this chart:

World population over time

Notice what happened to world population when the rebirth of Aristotle had had a few hundred years to settle in in the West. This is why I think a better name for Western civilization would be: Aristotelian civilization.