Hillary Versus America: Part VIII — Individual Rights Against the Common Good

The United States of America was the first country in history to be founded, explicitly, on any philosophy, and is still the only country in history to have been founded on the principle of individual rights. It had taken thousands of years for a series of heroes, Aristotle foremost among them — aided by unlikely happenstance upon unlikely happenstance, the unintended consequences of misplaced passions, the printing press, and a mad monk — to wrest the idea of individual rights into being. At the peak of the Enlightenment, in the afterglow of Newton, when humanity was still beaming with new-found pride in its power to reason, when it still seemed possible to organize a society on rational principles, some few among the revolutionary generation of Americans sought to put the theory of individual rights into practice. This idea inspired a rag-tag conglomeration of Britain’s cast-offs to take up arms against the most powerful empire in the world, to fight it, persevere, and win. Yet today, if you were to ask a recent college graduate, one who had paid close attention in her classes at a top-tier school, a superb student, a model, a paragon: “In the interest of justice, before anything else, what should everyone understand about the founding of the United States of America?” she would unhesitatingly answer: “The founders kept slaves, didn’t recognize women as equals, and stole all their land from the first peoples.”

This profound superficiality, which is the rule among the up-and-coming “educated” classes in contemporary American society, is a large part of why, if something doesn’t change, if Hillary is elected president, we are likely fucked. If vacuous ignorance continues to align itself with oligarchic power, there will be no coming back. The tens-of-thousands-of-years-long slog through the muck of human misery, the millennia of the strong preying on the weak, of priests, nobles, and kings pressing their boots on the necks of the men and women they saw below them, like a dark tide that has been out too long, like an ocean sucked flat before a tsunami, will come rushing back, black and inexorable, to drown out all hope.

While today a variety of significant cultural, political, and institutional forces are aligned against individual rights, and the forces that contend to preserve rights are weak and few, the transition from freedom to enslavement is likely to be permanent for deeper reasons. The uniqueness of the American way in human history indicates a basic truth about us: that we do not yet know how to be free. Taken as a whole, humanity does not yet want to be free. Our ancient habit is to submit to hierarchies, to raise up chiefs and presidents, medicine men and priests, to anoint David with oils, to give glory to the emperor, to carry pictures of Chairman Mao, and to elect FDR to four consecutive terms in office. To be free, humanity must learn something new.

Even though they didn’t have the reach for it yet, the American revolutionary generation grasped at political freedom, at this something new. But what does political “freedom” mean, really? When a contemporary politician alludes to “our freedoms” or the like, it sounds like an empty platitude — because it is. Still, just because a word can be parroted meaninglessly doesn’t mean it’s meaningless. In a political context, freedom means: a social organization that secures individual rights. If this were broadly understood, there would be zero chance of Hillary Clinton’s election in November. If it were broadly understood, Donald Trump would never have won the Republican party’s nomination. If it were broadly understood, there would be riots in the streets tomorrow. These realities hint at why the ruling classes have worked so long, so hard, and so effectively, to make sure this isn’t broadly understood. They explain the profound superficiality of our exemplary college graduate.

In fact, the concept of ‘rights’ has been so muddled (people now believe that healthcare and Internet access are “rights” (!)) that it now communicates nothing to a general audience. This is a disaster in the making. Hillary Clinton’s election to the presidency would significantly advance an existential threat to Western civilization, but that threat would come in the form of a sneak attack on the very rights that, now, few Americans understand or appreciate. The greatest treasure of our civilization, the work of generations of geniuses, the inheritance that we are squandering, individual rights, may be stolen from us by a threadbare con.

Since it means less than nothing to a contemporary audience to warn them, “Hillary Clinton is a threat to individual rights!” let us consider the matter indirectly. If you cannot understand what Clinton is against, understand what she is for. And what Hillary Clinton is for (publicly) is: her vision of the “common good.”

What is the “common good”? The “common good” is a vision of what the left carefully calls the “distribution” of goods in a society — plus one additional element, which we will come to shortly. To understand the “common good,” one must first understand goods generally. Education, healthcare, housing, clothing, food, transportation, and entertainment are all goods. Romantic partnerships, friendships, and pets are goods. Goods are everything we spend our lives trying to get and to keep.

In the Western world, it is broadly agreed (for now) that people should be able to decide for themselves what goods to prioritize in their own lives. It is also understood that not every effort to get or to keep a good will be successful. For example, you might want to be a lawyer, but if your LSAT scores are too low, that good will be difficult or impossible for you to get. And people generally accept that not always getting the goods that we want is normal.

Despite the general agreement that goods are, largely, a personal matter, and that, to some degree, we won’t always get the goods we want, most politicians adhere to some vision of the “common good.” This “common good” is supposed both to override the private goods of individuals and to underpin all these private goods. What I mean by this is that politicians tend to believe that certain “distributions” of goods in society are necessary for the “health” of the society. But since goods don’t come from thin air, but have to be produced with thought and effort, and since politicians don’t produce goods themselves, this means that any who subscribe to an idea of the “common good” intend to steal goods from some individuals and hand them to others. This is their only means for having goods “distributed” in accordance with their visions. Thus a vision of the “common good” overrode my own personal good when my income was taxed to support the second Iraq war. I would never have volunteered to support it in any way. Yet, against my will, the government took my goods (my earnings), by force, and used them to buy a few rounds of depleted uranium, or something equally repugnant to me. The implicit justification for this was that the Iraq war was in the “national interest,” which is another way of saying “common good.” So President Bush had a vision of dead Iraqis that he thought was more important than my vision of, e.g., a new pair of cross-country skis, and his vision overrode mine, with the help of all the United States government’s guns.

This is how the “common good” overrides individuals’ private goods. But the “common good,” in theory, only has this overriding privilege because it benefits everyone. Supposedly, maiming and killing Iraqis — after pretending they had something to do with 9/11 — protected our “freedom.” And since individual Americans, like me, could hardly pursue our own private goods without the benefit of our “freedom” — well, you can see why all that maiming and killing had to be done.

Another example of how the “common good” is supposed to underpin each individual’s private good is found in the justification for public education. The theory there is that educated workers produce more goods, which enriches the entire society. Therefore, say proponents of the “common good,” taxing me to provide public education actually benefits me and everyone else; it underpins my successful pursuit of private goods, because it improves the economy and makes more goods available at less cost.

With these examples in mind, we can expand on the definition of “common good” above: The “common good” is a vision of the proper distribution of goods in a society, intended to be realized by force.

Because Hillary “It Takes a Village” Clinton is an enthusiastic proponent of the “common good,” she rejects freedom. A society can be guided by one principle, or by the other, but never by both. A free society must reject, as an organizing principle, any reference to a “common good.”

The necessary opposition between political freedom and any “common good” is not trivial to discover. Our paragon of contemporary college education — with someone’s vision of social justice misting her eyes — will not find it. For us to discover this opposition now, we must examine the founding principles of the United States.

Ceremonially, the United States came into being in July of 1776, with the Declaration of Independence. The Declaration proclaimed the following founding principles:

  • Equality: The Declaration repudiated the notion, central to all prior systems of government, that some men were, either by nature or by divine sanction, “set above” others. In the Declaration, there is no political hierarchy among men: we are all, politically, equal.
  • Unalienable Rights: Rights are the central concept of the Declaration, derived directly from Locke’s philosophy (and, by extension, from Aristotle’s). To “alienate” something, a possession for example, means to separate it from yourself. You “alienate” your dollars when you spend them. You “alienate” your house when you sell it. You “alienate” your old clothes when you give them to a charity. According to the Declaration, you cannot “alienate” a right: rights cannot be sold, traded, or given away. If they are taken away, the taking is always illegitimate and always criminal.
  • Natural Rights: In the Declaration, rights are not just inseparable from the individuals who hold them; they are aspects of human nature, endowed by God when he created humans and created human nature. This means that rights exist before governments, and outside and beyond all governments. Because human nature is the same at all times and in all places, rights are the same in all times and in all places. Rights do not “evolve,” develop, or change in any way. The number of rights cannot increase or decrease. No new rights will ever be discovered (although some rights that have existed all along might come to be recognized), and nothing that was ever a right can ever, later, turn out not to be a right. (It is vital to note here that the author of the Declaration, Thomas Jefferson, was a Deist. When a Deist writes the word “God,” he does not mean what typical churchgoers today mean by that word. Deists were the Enlightenment-era analog of what, in contemporary life, we call “atheists.” Deists believed only nominally in God, as a kind of abstract force that created the laws that govern the cosmos, then let the clockwork mechanism of the cosmos work out its internal tensions for the rest of eternity. While it’s true that not all of the founders were Deists, and some (Sam Adams, for example) were fervent Christians, it’s also true that Enlightenment Deism was the animating philosophy behind the Declaration, and behind the United States as such. Far from being a religious republic, or a republic founded on or inspired by Christian belief, the United States were the most non-theistic political entity in human history.)
  • Unlimited Rights: The wording of the Declaration is subtle (by contemporary standards). It is easy to miss key implications by passing too breezily through its passages. Take this sentence, for example: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” The word “among” here implies that there are other rights than those listed. In fact, in natural-rights theory, individuals possess an infinite number of rights. Essentially, it holds that individuals have a right to do anything and everything they deem necessary to do in order to preserve and secure their lives. To put this in concrete terms: according to this theory, you have an absolute right to read She-Hulk comics (assuming you do not obtain them by force). You have this right, for example, because if it makes you happy to read She-Hulk, then that happiness contributes to the preservation of your life — if only by contributing to a pleasant evening, which makes it easier to relax and rest well, which makes it easier to get up on time, which makes it easier to get to work on time, which makes it easier to keep producing whatever it is you produce in order to provide for yourself. She-Hulk, by your own judgment, helps you to be productive in the service of your own life. Therefore, you have an absolute right to all the She-Hulk you can obtain through peaceful trade.
  • Rights-Based Government: According to the Declaration, the whole, entire, exhaustive purpose of government is to secure individuals’ rights. There is no legitimate purpose for government beyond this. (And here we begin to see how radically individual rights theory repudiates the notion of a “common good.”)
  • Consent of the Governed: In the Declaration, because the only (legitimate) purpose of government is to secure individuals’ rights, and because it is an observable fact that governments do not always limit themselves to this one legitimate function, there needs to be a way for the legitimacy of a government to be decided. The answer given in the Declaration is that governments are legitimate only as long as they keep the consent of the governed. This consent can be withdrawn at any time, which leads to the next principle of the Declaration.
  • The Right of Revolution: Since the people can withdraw their consent at any time, and since, in practice, governments tend to prefer not to be dissolved and replaced, the Declaration proclaims the people’s right to “alter or abolish” a government whenever they believe that government is failing to secure their rights. The clear implication is that the people have the right to do this violently, if necessary. As Jefferson wrote elsewhere:

    what country can preserve it’s liberties if their rulers are not warned from time to time that their people preserve the spirit of resistance? let them take arms. the remedy is to set them right as to facts, pardon & pacify them. what signify a few lives lost in a century or two? the tree of liberty must be refreshed from time to time with the blood of patriots & tyrants. it is it’s natural manure.

  • Popular Sovereignty: Consistent with what Locke had argued in his Second Treatise of Government, the Declaration’s view of governing power is that it is power derived from the people. In other words, whatever power and authority a government legitimately wields, it wields as a loan from the people. As such, it is the people (individuals in the aggregate) who are sovereign, not the government, not any king, legislature, or council. Popular sovereignty would be the principle underpinning the United States’ Constitution.

Taken all together, these principles of the Declaration construct government as the servant of the people, a diametric reversal of thousands of years of human history. Consider now: if the government is really the servant of the people (meaning, not merely in rhetoric, but in practice), what will it be permitted to do? Some people, for example, might want the government to subsidize corn production, because they truly believe these subsidies will benefit the country as a whole (it being a mere coincidence, of course, that those with this insight farm corn). But surely other people would rather keep their wealth than give it to corn farmers. Still others might want the same wealth that might be expropriated for corn subsidies to subsidize college tuition instead. If the corn crowd manages to capture the government’s power and turn it toward its vision of the “common good,” everyone else loses.

Since there are contingents for and against just about any conceivable use of government power, it is obviously incoherent to claim that government is the servant of the people as a whole, if it is going to be common practice for one contingent to get its wishes today, another tomorrow, and neither the day after. There are only three possible solutions to this dilemma: One, to give up the notion that government is, or ought to be, the servant of the people. Two, to school the people to assent to one set of common values (a universal “common good”), so that everyone assents to the forces brought to bear in service of those values. Three, to radically limit the use of government power, so that no “interest group” can ever use it to steal goods from some and hand them to others. (Some readers, probably hailing from what I have elsewhere called the “Blue team,” might want there to be a fourth option: to say that government serves the people, even if it takes goods from some needy people to give to others who are no more needy, as long as the winners and losers are decided upon democratically. Think about that for a bit. Realize it’s a dishonest dodge. Move on.)

The framers of the United States’ Constitution chose the third option. Their implementation of the Declaration’s principle of popular sovereignty was the constitutional precept of enumerated powers. This basic principle of the Constitution means that the federal government has only the powers that are explicitly granted to it (enumerated) in the text. If a power is not explicitly granted to the federal government, from the limitless pool of powers (rights) that each individual possesses by nature, then the federal government, in theory, has no right to exercise that power.

Consider the implications of this for the “common good.” If government is only granted a meager quiver of powers, just enough to secure individual rights and no more, where will it find the power to expropriate goods from some citizens and hand them to others? Nowhere. A limited government is too fine an instrument to be wielded for the “common good.” That requires a club, something blunt enough and broad enough to “get things done.” This indicates why freedom, individual rights, limited government, and prosperity like mankind had never before seen came up together in history. It also indicates why slavery, contingent and ever-changing “rights,” tyrannical government, and mass poverty characterized all previous social orders. This is an either-or choice. There is no way around it.

Now, no one could reasonably argue that the federal government has, in fact, kept within the size and scope suggested by the theory outlined above. (One could argue instead that the theory is inaccurate, but that would be a losing case.) Not only do we have corn subsidies, we have federal bureaucrats telling Americans what plants they can smoke, how many inches from the ground their hand-railings have to be, how prepared their restaurants must be to accommodate miniature horses, how much gasoline their trucks are allowed to burn, what wage they are allowed to work for, what parts they are allowed to have in their rifles, where they can keep their retirement savings, how much money they are allowed to put in a bank account (without being investigated), how much money they are allowed to withdraw from a bank account (without being investigated), what patterns of deposit and withdrawal they are allowed to follow (without being investigated), and, perhaps most importantly, what percentage of fat content is minimally permissible in a processed cheese spread.

But maybe it is all for the best? Maybe the blurring of the federal government’s once-narrow boundary lines has enabled us to pursue a “common good” that’s worthwhile? Maybe all this change has not been corruption, but progress?

No one really cares, in the abstract, whether the United States’ government adheres to the principles of its founding. What people care about, if they care at all, is whether the government does good — or, perhaps, allows good to be freely done. If the principles of the founding were good, right, and true, and they have been abandoned, that is a matter for concern, and for remedial action. But if they were no good in the first place, then their apparent abandonment is hardly worth noting.

The upcoming election is a referendum on these questions. Put directly: A vote for Hillary Clinton says: “Yes, these changes are for the best. Yes, there’s a common good worth pursuing. Yes, the changes so far have been progress.”

But the truth is the opposite: These changes are destroying individual rights. There is no such thing as a common good, because the notion is incoherent. What’s happened so far has not been progress, but a return to the norm, a regression to the mean of human history: slavery and tyranny, justified by mystical visions. If Hillary is elected, she will, with the full support of the ruling class, slyly prepare Americans to be ruled, cowed, enslaved, degraded, and impoverished. She will do this superbly well. She will do this in the name of a “common good,” and I think she will sincerely believe that she is making the world better. She will be wrong.

People reasonably fear Donald Trump as president. His principles are incoherent. He is brash, ignorant, and oblivious to his limitations. Although they are not, I think, representative of his support, racists and other savages do throng to him. Although I believe he does love America, in some hazy way, and all that it means and stands for, he will not advance the cause of freedom. He will not restore the promise of the Enlightenment. But Trump’s haphazard vision is cast perpendicular to the arrow of history. If elected, his notion of the common good will go nowhere, as institutions and inertia will resist him. This will give the few of us who have some sense a little time to act. Maybe then the death of Western civilization can be averted.

Hillary Versus America: Part VII — The West at Noon

Before we begin, please take a few minutes to put yourself into the mindset of a medieval monk. Listening to as much of this music as you can sit through is the best way I know of to do that on short notice:

https://www.youtube.com/watch?v=jGfLUXtARjM

When I hear this, I can imagine waking long before dawn to begin a day that would be just like the ten thousand days that came before it. I pray; read approved works, mostly the Bible; read some more; toil in a field; sing hymns; eat bland food; toil; pray; read; sing; sleep briefly, and begin again, in the dark.

The music is, I think, unquestionably beautiful: simple, and pure. But I can’t make it through more than a few minutes, unless as background music for something more active. It’s too monotonous. For some, simplicity, rigidity, and repetitiousness are the hallmarks, not only of beautiful music, but also of a good life, well lived. Not for me.

Although it’s not to my taste, this kind of life makes sense, as an ideal, if you can think like a medieval monk. Because monks Know the Truth. If you already know everything that’s important to know, there’s no need to explore, to learn, or to grow. Perfection is a cycle, orbiting around a central truth, like the sun orbiting around the earth, like the earth turning through its seasons.

In the last installment, that peculiar monk, Thomas Aquinas, had established an unstable symbiosis between Athens and Jerusalem, between reason and faith. His balance of the West’s antipodes would never come to rest. In some times and places, faith would rise. In others, reason would be resurgent. Although reason never, ever reached a zenith, it also never fell below the West’s horizon again.

At the outset, I said that the values distinctive to Western civilization, the values that determine Western civilization’s worth, are individual rights and reason. Aristotle brought the first systematic understanding of reason to the West. Aquinas later carved out a space for individuals to exercise their consciences, even against the wishes of “legitimate” authorities. But, hundreds of years after Aquinas’s death, even after the Renaissance explosion of cultural, economic, and intellectual productivity, the idea of individual rights had never been given voice. It would be a long and meandering path leading from Aristotle, through Aquinas, to America — but we can retrace it in outline.

For centuries after Aquinas lived, there was only one Christian church, headed in Rome by the Vicar of Christ. There were recurring power struggles between the papacy and the secular authorities, but the church held the high ground. This was because if any church-state conflict were to devolve from words to swords, the men fighting knew that loyalty to a king might win them treasures on earth, but loyalty to the church would earn them treasures in heaven. And, also, if a man is forced to choose between keeping his body together and keeping his soul together, he will tend to prefer the latter (providing, of course, he believes in souls).

If this was not enough, another way the church maintained its grip on spiritual authority was to establish itself as the intermediary between men and God. In the medieval church, priests might be able to read the word of God, but common men, and most nobles, were illiterate. Even if they had been literate in their native tongues, they would not have been able to read the Bible as it existed then: either in archives of the original Hebrew, Aramaic, and Greek, or in churches, where priests used excerpts from Latin translations.

In the 1450s, Johannes Gutenberg introduced the printing press to Europe, and began printing one of these Latin translations of the Bible. This was a technological revolution that would dovetail into a spiritual one. Before the introduction of the printing press, manuscripts were copied longhand, a painstaking process that put severe physical limits on the volume of texts that could be copied and spread throughout Europe.

Because copies of the Bible, and of all texts, were in such short supply before Gutenberg’s revolution, it was not only culturally impossible for anyone to challenge the Catholics’ priestly monopoly on the Bible, it was technically impossible as well. (I note that this is not the first time that the changing limits of technology had a powerful effect on the nature of human society. Consider that, before the invention of agriculture, for example, it was impossible for enough food to be set aside to allow large population centers to form and stabilize. And until large population centers could form and stabilize, it was impossible for humans to diversify their productive (or unproductive) activities through a division of labor. In other words: granaries make priests possible.)

But after Gutenberg’s revolution, all bets were off. For centuries, the Catholic church had been the undisputed moral authority for all of Europe. For centuries, the church had been teaching that the individual conscience was the highest authority in practical life. For centuries, this teaching, even though it concerned how everyday life should be practiced, was not practiced in everyday life. Aquinas’s teaching about the inviolability of the individual conscience presided over a church which presided over the Spanish Inquisition. Victims of the Inquisition, whose confessions of “heresy” had been prepared under torture, but who later refused to repent for their “heresies,” were burned alive, and conscious, at the stake. (Apparently, if you repented for your heresies, the Inquisition would do you the kindness of strangling you or breaking your neck before burning you at the stake.)

And then Martin Luther happened. In 1517, Luther published a list of disputes with Catholic church doctrine. This event is recognized as the beginning of the Reformation, the breaking-up of the Christian church from the one Catholic church, to many diverse congregations of believers, each promulgating their own interpretation of God’s will and word. A complex cultural interaction — among Gutenberg’s printing technology, Luther and other reformers’ doctrinal challenges to the Catholic monopoly, the expansion of literacy, Aquinas’s idea of the sovereignty of the individual conscience, the pre-existing political divisions in England and in Europe, and the continuing influence of Aristotle on learned men — fired an intellectual crucible. In the end, the molten mixture would solidify into the precursors of individualism as a political philosophy and experimental science as an expansion of the Aristotelian method.

Luther rejected the idea that a priesthood should control and mediate God’s relationship with individual men. For Luther, building upon the idea of the sovereign individual conscience, each man was to be his own priest, reading the word of God directly, and deciding for himself, through his conscience, what God’s word meant. There was no room for an authoritarian Church in this conception of man’s relationship with God. Churches existed to provide guidance and fellowship, not to step between man and his God. This Lutheran doctrine is known as sola scriptura, “by scripture alone,” and means that Christian doctrine should come only from the Bible, from each conscience-guided individual’s reading of the Bible, not from thick books of Catholic theology or the pronouncements of popes.

Luther’s ideas spread like dank memes through Twitter, and they carried world-changing implications. First, if every man would find his own, personal, relationship with God through the study of scripture, he needed to be able to read scripture. Protestantism thus melded with the results of Gutenberg’s printing technology to inspire and enable a massive increase in literacy. Second, if every man was supposed to develop his own relationship with God, then, by implication, every individual was worthy of having a personal relationship with the creator of the universe. Think about that. Under Catholic rule, God had One Order for the world, which he communicated through his vicars; everyone had a fixed place in that Order, assigned to them at birth. But if you are worthy to forge your own relationship with the author of all things, then who can tell you what your place is, but God himself? Protestantism thus prepared the way for individualism by honoring and sanctifying the individual in ways totally new and alien to prior belief. Third, increased literacy had secondary effects. It created general readers: readers of bills, of accounting books, of newspapers, of pamphlets; literacy enabled the beginnings of an information economy, and concomitant wide-reaching expansions of economic activity, of the division of labor, and, ultimately, of wealth.

The massive, disruptive, liberating, chaotic, and deadly change catalyzed by Protestantism broadcast this message to everyone in Europe who was prepared to hear it: “IDEAS ARE IMPORTANT!” And because of the beginnings of mass literacy, there were many, many more minds prepared to hear (or to read) this message than there ever could have been before Gutenberg and Luther. How could any educated person fail to notice the terrible and awesome power of ideas in a world scarred and broken by wars — over ideas?

The Hanging by Jacques Callot

The Thirty Years’ War, which began as a Protestant-Catholic conflict, would eventually kill 8 million people. One soldier in this war over ideas was the philosopher René Descartes, who later wrote in his Discourse on the Method that, during his tour of duty, he had begun to conceive of a new approach to knowledge. Descartes’ work would become the foundation of modern philosophy, supposedly rejecting Aristotle in favor of a fresh way of looking at the world and deriving knowledge from it. In fact, Descartes, and the most important philosophers who came after him and shared his project, did not so much reject Aristotle as they rejected Scholasticism, the blend of Aristotelian philosophy and Catholic doctrine that had been the foundation of higher education in Europe, by Descartes’ time, for hundreds of years. They saw Scholastic thought as stale, a dead end. But the proliferation of texts in the late 16th and 17th Centuries, the era of Shakespeare, Descartes, Hobbes, and Francis Bacon, had inaugurated an almost feverish exchange of ideas, and this happened against a backdrop of deadly ideological conflict that underscored just how powerful ideas could be. Ambitious men, men ambitious enough to want to change the Western world, perhaps men weary of war, began looking for new ways to think about the world and man’s place in it. It was time for Aquinas’s unstable balance to shift again, and for reason to rise from the very ashes of faith’s ravages.

The man who seems most singly responsible for this shift in the Western balance was Francis Bacon. Bacon was a wily one. Like many philosophers, he seems to have played at the game of esoteric writing, meaning he wrote with a double-meaning: one for his intended audience, and one for everyone else. This was necessary, or at least prudent, because what Bacon wanted to say would not have been pleasing to many Anglicans. What he wanted to say was: “Faith has held Europe in a stranglehold for more than a thousand years. We suffer, fight, starve, bleed, and die, all more than we need to, if indeed we need to at all. As long as men can claim to see different invisible truths, and then proceed to kill each other over these self-conjured ghosts, we will never have peace. The times are such that there are enough of us, now — who are well read, who understand our Aristotle, who know where knowledge really comes from — we can change the world. Let faith wane in influence as reason waxes. To make this possible, let’s take Aristotle’s work a step further: let’s make the method of reason even easier to follow. Let’s convince men to focus their intelligence on the natural world, not the spiritual. Let’s teach man science.”

Since he could not say this openly, and since he was a wily philosopher (but I repeat myself), what Bacon actually said was:

The greatest error of all the rest is the mistaking or misplacing of the last or farthest end of knowledge: for men have entered into a desire of learning and knowledge, sometimes upon a natural curiosity and inquisitive appetite; sometimes to entertain their minds with variety and delight; sometimes for ornament and reputation; and sometimes to enable them to victory of wit and contradiction; and most times for lucre and profession; and seldom sincerely to give a true account of their gift of reason, to the benefit and use of men: as if there were sought in knowledge a couch whereupon to rest a searching and restless spirit; or a tarrasse, for a wandering and variable mind to walk up and down with a fair prospect; or a tower of state, for a proud mind to raise itself upon; or a fort or commanding ground, for strife and contention; or a shop, for profit or sale; and not a rich storehouse, for the glory of the Creator and the relief of man’s estate.

This was masterful rhetoric, perfectly pitched to its audience: the authorities of the Christian West. Bacon wanted freedom of scientific inquiry, for experimentalists using the new methods he pioneered to be able to investigate nature without having to account, to a skeptical religious authority, for their every unsettling discovery. He knew that Christian authorities praised charity and service to the poor. He knew they condemned pride. He thus presents free scientific inquiry (here and elsewhere) not as a means of self-aggrandizement, but as a means to the “relief of man’s estate.” Whose “estate” needed relief? Kings’, lords’, and bishops’? No. The poor. They needed food, shelter, medicine, a means to a more humane existence. How could any Christian say no?

Bacon’s rhetoric worked. His method spread. Scientific inquiry proceeded in the Western world at an ever-quickening pace. As discovery piled upon discovery, as techniques of food production and the manufacture of goods consequently improved, life in Europe became progressively easier. Man’s estate began, incrementally, slowly, to be relieved. Literacy, numeracy, and the beginnings of a scientific understanding of the world dispersed from the small circle of philosophers and other educated elites to the wider culture. And the wider culture reflected on these developments, and slowly, Europe began to think: maybe the human mind is something noble after all? Maybe man is not merely a poor, miserable sinner: fallen, corrupt, pitiable, and worthless without God’s grace? Maybe whatever is meant by “divine” is something in man?

However far reason and science progressed, by the late 17th Century, the Christian ethos still dominated all of Europe. The relief of man’s estate was all well and good, but man was still corrupt and sinful, still needed the firm hand of God’s vicars and officers and kings to guide him toward right conduct. And there were mysteries that this new science would never unravel, gaps in man’s mortal understanding as wide as chasms. All one had to do was look up into the vastness of a night sky, where untouchable celestial bodies — bright, beautiful, cold, distant, and perfect — moved according to God’s immutable and impenetrable will.

And then Newton happened.

Contemporary man has to stretch himself until it hurts, and maybe even further than that, to even begin to understand how big a deal Newton’s law of universal gravitation was. When Newton published his findings in 1687, it was like some Herculean hero had taken the sky — not from Atlas’s shoulders, but from God’s hands — and put it in a book that any man could carry. In metaphor, in symbol, Newton had brought heaven to earth through the power of human reason. In metaphor, Newton had raised man up to stand eye-to-eye with God.

The poet Alexander Pope wrote this epitaph:

NATURE and Nature’s Laws lay hid in Night:
God said, “Let Newton be!” and all was light.

The intellectual world of Western civilization came alight, burning incandescently for the next hundred years. This was the Enlightenment. This was mankind’s high noon.

Almost simultaneously with Newton’s great discoveries, John Locke published his Second Treatise of Government. He argued that every individual was sovereign, and that governments, or communities of any kind, existed to serve individuals, to secure opportunities for individuals to pursue the goals that furthered their own lives.

And here is that music coming again. Locke sketched in the final note of the triad: Man’s epistemological independence from Aristotle, his (partial) moral independence from Aquinas, and, now, at long last, his political independence. Enlightenment man did not exist to serve the glory of the state or the church, he existed, for the first time, if only implicitly, for his own sake.

Now the West had reason and individual rights. In a few decades, the chord Locke had sketched would sound out for the first time, as a gunshot, heard ’round the world.

When Beethoven heard, not too long after the American Revolution, that Napoleon Bonaparte was bringing down the false authority of kings and nobles, he dedicated his work-in-progress Symphony No. 3 to him. Napoleon later betrayed human freedom by declaring himself Emperor, and so Beethoven scratched his name off of the dedication page of this symphony. But Beethoven’s Third still records, better than any gunshot, what it sounds like when man breaks tens of thousands of years of chains, finds his nobility, sees the horizon open up, and knows the heavens, even, are his. To my ears at least, no monk’s hymns can compare:

Hillary Versus America: Part VI — The West Rises

It’s been said that “[s]ome men are born posthumously.” Of no one is this more true than of Aristotle — who hasn’t even been born yet.

Aristotle’s logic, his gift-wrapping of the method of reason for all of humanity, was eventually reborn in Europe because it was uniquely useful. Its usefulness, and that of Aristotle’s broader philosophy, fascinated a series of scholars. The two most important were Ibn Rushd, an Islamic philosopher born in what is now Spain, and Thomas Aquinas, the greatest philosopher and theologian in the history of the Catholic church.

Ibn Rushd and other Islamic philosophers had to spend an inordinate amount of time and energy convincing the established authorities of their faith that philosophy was not heresy. They never quite succeeded. As a result, Islamic philosophy receded from its high tide with Ibn Rushd’s death, and the tide is still — far — out. Aquinas, for a complex of reasons, had more success. He had so much success, in fact, that his thinking became the standard philosophy and theology underpinning Catholicism.

In an earlier post, I said, “the history of Western civilization has sometimes been understood as the story of the conflict and cooperation between Athens (Greek philosophy) and Jerusalem (Christianity).” Thomas Aquinas is the single most important point of synthesis between Athens and Jerusalem.

But Aquinas was unoriginal. He habitually referred to Aristotle as “The Philosopher,” because for him, there was no one else: Aristotle was the beginning, middle, and end of philosophy. What Aquinas added to Western thought was not novel philosophy, but a synthesis of Aristotelian philosophy with Catholic belief.

Aquinas would almost certainly have failed in this task — as reason and faith go together like ice cream and ground-up glass — but for the fact that he was a world-shaking genius. Incredibly, almost absurdly good at what he did, Aquinas managed to find what many learned Catholics, to this day, consider a “golden mean” between reason and faith.

In essence, Aquinas argued that reason was supreme and that faith was secondary; in any (apparent) conflict between the two, reason must prevail. To me, this is a remarkable view for a lone scholar-monk to have promulgated in the faith-dominated milieu of 13th-Century Italy. The most striking example of Aquinas’s deference to reason (and, by extension, to Aristotle), which I have at hand, is his argument for the independence and inviolability of the individual conscience.

In his Summa Theologica, his most famous work of philosophy and theology, Aquinas considers the question of whether one is bound to follow one’s conscience, even in cases when, in fact, conscience is directing a wrong action. For example, suppose I believe I should vote for Hillary Clinton, because I believe that her stances on the issues would make for better policies than those of Trump. Further suppose that, due to factors I had not considered, my analysis is flawed, and Hillary Clinton’s policies would, in fact, lead to some disastrous and evil result. Aquinas argues that I should follow my conscience; in other words, I should follow my best reasoning and understanding of what action I should take, even though I am wrong. (Keeping in mind: at the time, I don’t know that I am wrong.) (The opposing view would be that if some authority (for example, Thomas Fuller, blogger) told me I should not vote for Hillary because she’s an apocalypse in a pantsuit, then if the authority were legitimate, it would be moral for me to follow the authority, and thereby act against the dictates of my own conscience.) What’s especially interesting about Aquinas’s argument is that it survived and thrived in an intellectual environment which placed great value on authority.

Catholics are very big on authority. Their claim to supremacy among Christian faiths is based on the notion that Peter was the first Bishop of Rome, that the popes have been his successors in this office, and that Peter was hand-picked by Jesus to be his vicar, or stand-in, on earth. To the Catholic mind, then, to reject the authority of the pope is substantively indistinguishable from rejecting the authority of God himself.

In effect, Aquinas declared: the supreme authority each of us must follow at all times is our own conscience. And for Aquinas, this meant: our own reasoning. Even if it turned out that a person’s conscience directed her to act against God’s will, it would make no difference. God, argued Aquinas, would prefer that she use the tool he gave her, a reasoning mind, rather than blindly follow authorities she cannot know are more right than her own judgment.

Aquinas’s synthesis of Aristotelian reason and Catholic faith marks the exact moment of Aristotle’s rebirth in the West. It is no coincidence at all that the Renaissance followed. And this is the moment when Western civilization, that unstable emulsion of reason and faith, truly came into its own. It was a remarkable moment in human history, not least because it was paradoxical.

On a surface reading, it seems that reason and faith must be locked in perpetual conflict, because both reason and faith claim to be means to knowledge. And if there are two distinct means to know, then there might come points where the conclusions arrived at by the former means conflict with the conclusions arrived at by the latter. And if the two means of knowledge conflict, if both are equally valid, how can the conflict ever be resolved? Aquinas resolves the paradox by giving supremacy to reason (in practice) and to faith (in theory). And since we live in practice, not in theory, this means: Aquinas gave supremacy to reason, full stop.

In most other systems of faith, I don’t believe this synthesis could have lasted. But Christianity had, from its inception, been focused on the salvation of the individual soul. Consequently, the theory — that, in practice, the individual conscience must be the supreme authority in every life — makes sense.

Now there is some music in this. Aristotle first argued that the senses were the root of knowledge, understanding, and wisdom. But where are the senses? Does the polis have eyes? I do not mean to ask figuratively, in metaphor or synecdoche, but literally: Does the polis have eyes? Does the polis have ears? Hands? A mind? Reason? No.

Individuals have senses; individuals have sense; and, Aquinas added, individuals have conscience.

The triad has two notes, now; it’s incomplete: epistemological independence from Aristotle, an unstable moral independence from Aquinas. In the next installment, or perhaps the one after, we will hear the chord sound for the first time in human history, if still imperfectly.

 

Hillary Versus America: Part V — The Dawn of the West

Imagine the recent Olympic Games had gone differently. In the 100-meter final, Usain Bolt notices a stranger at the starting blocks one lane over from his. He puzzles for a moment, shrugs it off, and takes position. The starting gun fires. Bolt runs superbly to a 9.81, but, to his astonishment, crosses the line behind the stranger. Not only that, but the stranger has managed to cross the finish line, complete a full lap around the track, and cross the finish line a second time, all in 9 seconds flat.

In the field of ideas, someone like Einstein is Usain Bolt — a stand-out superstar, someone whose achievements so far outstrip even elite competitors’ that there is hardly competition to be had at all. The stranger? He is someone like Aristotle.

Aristotle’s achievements are so outsized that he seems like a comic-book hero, one written by a teenager with no sense of proportion. To recount them plainly is to invite endless caviling. This is because no one such as Aristotle is allowed — in the contemporary mind, steeped in its characteristically egalitarian prejudices — to exist. (In my view, Aristotle has exactly two peers in the entire history of human genius: Homer (whose merits and accomplishments are beyond our scope here), and the genius-of-all-geniuses, his or her name lost in pre-history, who invented language.)

If we refuse to get bogged down in academic trivialities, refuse to make much of distinctions that make no difference, and refuse to give in to the egalitarian prejudices of the day, an honest list of Aristotle’s more astounding accomplishments might go like this:

  • He invented physics.
  • He invented geology.
  • He invented biology.
  • He invented taxonomy.
  • He invented psychology.
  • He invented science itself.
  • He invented ethics.
  • He invented political philosophy.
  • He invented rhetoric.
  • He invented metaphysics.
  • He invented epistemology.
  • He invented logic.

Take a moment — take several, long moments — to let that sink in.

Of course it is true, for example, that the sophists were teaching rhetoric before Aristotle invented it, that Plato had grand, philosophical schemes for governing the polis before Aristotle invented political philosophy, and that Thales et al. were investigating the natural world before Aristotle invented science. But all of these observations — if they are offered to diminish Aristotle’s unique genius and unparalleled achievements — entirely miss their mark.

They miss because Aristotle’s innovations were, above all and in essence, innovations of method. While Aristotle does indeed seem to have been the first person to ask many important questions, it’s not so much what he asked as the way he asked it that distinguished his thought. With Aristotle, many important questions were asked, for the first time, in the only right way: with implicit or explicit reference to an explicitly defined and validated method.

The keys to Aristotle’s uniquely valuable method are in his metaphysics, epistemology, and logic. For Plato, there had been two worlds: the higher world of ideas and the lower world of the senses. Reasoning was a quasi-mystical rite that allowed one’s consciousness to ascend from the dark cave of sensory illusion to the open, sunlit horizons of the abstract True Good. For Aristotle, in contrast, reasoning was rooted in sensory experience; that which we reason about is the world we encounter by means of our eyes, ears, and hands. For Aristotle, then, there was just one world, a world individuals discovered with their senses and could come to understand through rigorous, methodical, logical reasoning.

For reasons that are unknown to history (or perhaps just unknown to me), most of Aristotle’s writing did not survive the collapse of Classical civilization in Europe. The monks who kept the flame of learning alight through Europe’s Dark and Middle Ages had earlier and more extensive access to Plato’s philosophy, and the philosophies of his followers, than they did to Aristotle’s. (This surely suited them, as Platonism might as well have been tailor-made to serve as the official Classical philosophy of Christendom.) But although the precipitous decline in commerce which characterized medieval Europe included a precipitous decline in intellectual commerce, just because Europe was no longer trading in Aristotle’s philosophy did not mean that it was lost.

After the fall of Rome and Classical civilization, as the new Western civilization struggled in its infancy, Islamic civilization was just catching its stride. And, as civilizations on the upswing tend to do, Islamic civilization found itself reading Aristotle. By the late 1100s C.E., the best Aristotle scholarship in the world, perhaps the only Aristotle scholarship in the world, was being done in Arabic by men like Ibn Rushd.

Centuries after Aristotle’s time, Western civilization was in the midst of what would later be called its Renaissance. The word “Renaissance” means “rebirth.” This period, which began in the 1300s C.E., was a period of furious cultural transformation in Europe. But what was being reborn, and what was driving this transformation? Generally, it was the values and philosophy of Classical civilization that were being reborn, but, above all and in essence, the Renaissance was the second coming of Aristotle.

Consider this painting, Raphael’s The School of Athens:

 

 

By Raphael – stitched together from vatican.va, Public Domain

 

The central figures are Plato, on our left, and Aristotle, on our right. By 1509, when this painting was begun, the Renaissance was well underway. Raphael is recognizing and honoring the central figures of Classical civilization, giving them credit for what their thinking contributed to the rebirth and renewal that characterized Italy in the 1500s. By placing them as he does in this great painting, Raphael is saying: “Look around you at the wealth and power of our civilization. This wealth and power comes as a gift from these two men’s hands.”

And now look at those hands. Notice that Plato’s hand has one finger extended upward. This is Raphael recognizing that Plato found the root of reality in the otherworldly, in heaven above, in the “higher” realm of ideas. But where is Aristotle’s hand? It is spread out, open, taking in the world before him, and, not coincidentally, taking us in as well, as we stand in front of Aristotle, viewing the painting. Raphael knew it: Aristotle’s philosophy was a philosophy for living on earth, a philosophy of the senses, of the hands-on, the practical. (Raphael, who took ideas and realized them through rigorous manual labor, likely found a kindred spirit in Aristotle.)

What Aristotle did by inventing formal logic was to take the essence of valid thought, logic, and capture it in a concrete method; he made logic a practical art. He took all the formerly mysterious things that successful minds did whenever they succeeded in knowing reality, and distilled these down to a recipe that anyone could follow. He made logic — which in its most developed forms had been an aristocratic skill, practiced with difficulty by men like Plato, men with the wealth and leisure to refine their thinking through thousands of hours of impractical conversation — accessible to everyone.

If I were to pick just one picture to illustrate just how important Aristotle is to Western civilization, it would be The School of Athens. But I am going to pick two pictures. The second is this chart:

[Chart: world population over time]

Notice what happened to world population when the rebirth of Aristotle had had a few hundred years to take hold in the West. This is why I think a better name for Western civilization would be: Aristotelian civilization.

 

Hillary Versus America: Part IV

Western civilization had begun toying with something entirely new when Thales and those who followed him began the reasoned investigation of nature. But these efforts were not systematic enough or sustained enough to distinguish Western civilization’s inchoate version of science from other civilizations’ similar efforts. More importantly, science itself is not the essential or distinctive feature of Western civilization, because its development was an effect of a more fundamental cause. To tell the story of that cause’s emergence requires this installment to detour into the political and cultural history of one particular city in Greece: Athens.

In the Greek-speaking world of Classical civilization, the prominent form of political organization was the polis. In English, polis is translated as “city-state,” and is the antecedent of our words “politics” and “police.” A city-state is an independent country that consists of a single city and the surrounding area. Athens was the foremost city-state of the time (rivaled principally by another city-state, Sparta), and it commanded great wealth and military power, eventually developing into an empire. (This last development, through confrontation with Sparta, would lead to its downfall.)

Athens began experimenting with democracy in earnest after a particularly unpopular ruler, Draco (from whose name we get the adjective “draconian”), left upper-class Athenians wanting more control over their own fates within the polis. There were fits and starts, but by the beginning of the 5th Century, democracy had solidified its grip on Athens, and Greece began the era of furious cultural productivity that is now sometimes referred to as its Golden Age. Socrates, Plato, Diogenes of Sinope, Aristotle, Herodotus, Thucydides, Aeschylus, Sophocles, Euripides, Euclid, Pericles, and Alexander the Great were all figures of this era.

Although early Greek philosophy, following Thales, had tended to focus on the natural world, the great wealth and power of Athens perhaps made its upper-class inhabitants less concerned about mastering nature; they had all the goods they needed, and plenty of leisure time in which to enjoy them. Greek philosophy took a turn, in the democratic 5th Century, away from Thales’ interests, what would later be called “natural philosophy” or “science,” toward ethics and political philosophy. Put differently, Greek philosophy took a turn away from nature and a turn toward people, toward human values and human modes of social life. The man who initiated this turn, his hand unwittingly on the helm of world history, was Socrates.

In a dictatorship, there is no point in debating justice. Justice is whatever draconian nonsense Stalin says it is. In a democracy, though, competing notions of justice jostle for position. Athenian democracy was not like contemporary American governance. There was no separation of powers among different branches of government. The Athenian assembly was judge, jury, and executioner. It made laws, judged cases, and enforced judgments. There were no limits whatsoever on the kinds of laws that could be made, judgments handed down, or actions executed. Your neighbor could take you before the assembly and “sue” you for having a funny face, and demand that all of your property be handed over to him in recompense. He could have you tortured to death in the town square because you ate too many lamb kabobs at his last barbecue. All he would have to do was convince enough of the assembly to see things his way. Socrates lived in this climate, and it would eventually kill him.

Socrates would eventually be executed at the assembly’s order, but long before he would be found guilty on charges of “impiety,” “corrupting the youth,” and “making the worse argument appear better,” other aristocratic Athenians realized that the whims of the assembly were a threat, not primarily to their lives, but to their livelihoods. If the assembly were to decide a case the wrong way, a fortune could be lost in a day. So when traveling teachers of rhetoric began offering instruction to aristocratic Athenians, it caused something of a craze. These teachers, called “sophists,” offered instruction in the art of persuasion, for a fee. (“Sophist” comes from “sophia,” the Greek word for “wisdom”; it is the antecedent of “sophisticated,” and half the antecedent of “sophomore.”) Aristocratic Greeks considered it a sort of public duty to share any wisdom one happened to possess, so this practice of charging fees for teaching was considered a bit unsavory, possibly blasphemous, even criminal. Perhaps because trying to convince an assembly of random Athenians to punish a sophist for sophistry would have been a little like challenging ’87-vintage Mike Tyson to an underground boxing match, the general grumbling against sophists went nowhere. They taught their techniques, collected their fees, rubbed elbows with aristocrats, offended Greek sensibilities, and got away with it.

As Plato tells the story (and we have to rely largely on Plato — who was a student and ardent admirer of Socrates — because Socrates left no writing of his own), many Athenians seemed to have thought Socrates was, himself, a sophist. As Plato portrays it, though, Socrates was their opposite. First, he never charged fees for teaching. Second, he claimed to be totally incapable of teaching anything. Indeed, Socrates claimed that he could teach nothing, because he knew nothing.

The story is told in Plato’s dialogue, Apology, which recounts (although none can say how accurately) what happened when Socrates went before the assembly to defend himself (“apology,” in this context, means “defense”) from the aforementioned charges. Socrates claims before the assembly that a friend of his, Chaerephon, asked the mystic oracle at Delphi whether there was anyone wiser than Socrates. The answer was “No.” When Chaerephon recounted this to Socrates, Socrates was gobsmacked. On the one hand, as a pious Greek, he could not doubt the oracle. On the other hand, Socrates believed he knew nothing. He decided to puzzle out what the oracle could have meant. His method? Socrates went around asking questions of people who had a reputation for wisdom. Every single time, it would turn out that they weren’t particularly wise after all. For example, a reciter of poetry (“rhapsode”) might be “wise” in performing Homer’s epics, and though he thought his wisdom in this area meant he was wise in other areas too, his blustering answers to Socrates’s questions would invariably prove otherwise. Socrates eventually concluded that the gods had called him wisest of all because he, at least, understood that he knew nothing, whereas everyone else knew nothing, but failed to understand this basic truth about themselves.

Apparently Athens’ elite didn’t take kindly to Socrates’ probing questions. Although the details of Socrates’ trial and execution might not have been just as they are portrayed in Plato’s dialogues, certainly Socrates was hauled before the assembly, certainly he was found guilty of offending the leading citizens’ sensibilities, and certainly he died as a result. Unfortunately for the reputations of all the Athenians involved in this trial and execution, Plato, who witnessed this miscarriage of justice, and seems to have all but worshiped his mentor, turned out to be both one of the greatest writers and one of the greatest philosophers of all time.

Having the full attention of Plato’s world-class mind did wonders for the longevity of Greek philosophy. While a mere pamphlet’s worth of writing survives from the nature-focused school of Thales and the other pre-Socratic philosophers, a dictionary-thick stack of Plato’s dialogues survives to this day. What made his writing last, I think, was not just that it was extraordinarily well crafted, but that it dealt with subjects that are of great interest to people who have power and position, and want to retain or enhance them. For Plato realized — if democracy meant that matters of justice, of wealth and power, and of life and death, could be decided by persuasive speeches, if, in other words, the power of the polis could be governed by wise words (either sophistry or philosophy or a mixture of both) — why couldn’t the basic nature of the polis be governed this way? If the assembly could be convinced to disband itself, to take a very simple example, democracy could destroy democracy. If you were an Athenian aristocrat, bitter over your relative loss of position in the newly democratic polis, you might find Plato’s arguments so interesting that you would have a scribe make a copy of them, and a courier hand-deliver them to your friend in another city, where perhaps the rabble were making noises in favor of a democratic revolution. Perhaps copies of copies of copies, all this copying paid for by the aristocrats who could afford it, spread this way, letting Plato’s writing survive the centuries. But however it happened, and for whatever reasons, Plato’s dialogues did endure, and not as mere curiosities. The ideas he puts forth in his dialogues are the dominant ideas in Western civilization even today, although they are not the most distinctively Western ideas. In fact, many of Plato’s ideas are distinctively anti-Western:

  • The world we encounter through our senses is a low-quality shadow of a high-quality world that exists beyond our senses.
  • Ideas, which come from the higher world, are more real than things, which are mere shadows.
  • The soul is something like an idea.
  • All souls existed in the higher world of ideas before coming to the lower world of the senses.
  • When we die, our souls return to the higher world.
  • Souls are immortal.
  • When immortal souls return to the higher world, they reconnect with the benevolent ruling power of the universe: the Good Itself.
  • There is a natural hierarchy of men. Each person is suited to fill a particular role in society, from governors to warriors; from craftsmen to slaves.
  • The most productive and just society will have a place for everyone, and everyone will be in their place.
  • Those who have the highest understanding should rule over those with lesser understanding.
  • Philosophers are those with the highest understanding, because they can, by developing their superior nature, come to understand the One True Good that transcends all apparent good things.
  • Therefore, philosophers should rule the polis.
  • Common men, since they cannot “see” the transcendent ideas upon which philosophers base their just rulership of the polis, can never understand the true reasons for the ruling order of the polis.
  • Therefore, for their own good, and for the good of the polis, the common men should be told lies.
  • Because these lies serve the One True Good, they are Noble Lies. The rulers of the city should not feel bad about telling them.

If you have any familiarity with Christian theology at all, the fact that Plato lived hundreds of years before Christ should, if this is your first exposure to his ideas, raise an eyebrow. It is not without reason that academics are fond of calling Christianity “Platonism for the people.”

If the development of Classical philosophy had ended with Plato, Western civilization as we know it today would likely never have come into being. But Golden-Age Athens would become the stomping ground for at least one more world-shaking genius: Plato’s student, friend, and rival — a foreigner named Aristotle.

[Edit 9/15/16: Added a comma.]

Hillary Versus America: Part III

Picture a practical joke, played by an older brother on his younger brother. The little brother is having cereal for breakfast. He pours out a serving from the cereal box, which is nearly full, and so has some small heft. The big brother distracts him, “Look behind you!” As the little brother looks away, the big brother swaps boxes. The replacement is nearly empty, but looks the same. A bit later, the little brother goes to refill his bowl, but he jerks the lighter box up way too quickly, then over-corrects, sending bits of cereal flying up, then falling down in a shower into his hair, over the kitchen table, and onto the floor. Big brother has a good laugh.

The little brother’s senses might seem, at first, to have told him that the cereal box was both heavy and light at the same time and in the same way. But then he checks his premises. Things in the world don’t normally behave like this.

Of the two values most distinctive and essential to Western civilization, individual rights and reason, reason is the more fundamental. What is reason? I know of no better definition than that offered by the philosopher Ayn Rand: “Reason is the faculty that identifies and integrates the material provided by man’s senses.” Reason operates by a particular method: logic. “Logic is the art of non-contradictory identification.” What does this mean in practice?

To use reason is to investigate or examine the world by means of the senses and then to check the various (provisional) conclusions one comes to about the world against each other. If there is a conflict discovered among these provisional conclusions, then logic is applied to sort it out. Contradictions do not exist in reality; a thing cannot be both heavy and light, for example, at the same time and in the same way.

A reasonable little brother might realize this. If the cereal box that had been nearly full was suddenly nearly empty, perhaps it wasn’t the same box after all? Or perhaps someone had emptied it? And since the little brother knows he had neither emptied nor switched it himself … it would suggest that big brother should get a bowl of milk and cereal bits dumped in his lap, right away, if not sooner.

Man is the “rational animal.” Human beings reason; this has been true for as long as man has existed, in every culture and in every civilization. If we did not try to make an ordered, logical sense out of our sensory experience of the world, but instead treated every phenomenon as entirely unique, entirely unrelated to every other phenomenon, we would not have lasted as a species. It would have been impossible even to feed ourselves, since we would have had no means of concluding, for example, that because the first rock we bit wasn’t food, the next one wouldn’t be food either.

While people in civilizations other than Classical civilization (which is Western civilization’s antecedent in this context) used reason to develop sophisticated life-improving technologies and enduring socio-political structures, only in Western civilization would reason itself become a core value, and only in Western civilization would systematic reasoning about the natural world become a prominent area of inquiry. The historical record is sparse, but this process seems to have started with a man called Thales, who lived in the 6th Century BCE, in what is now Turkey.

Thales seems to have been concerned with finding what Aristotle would later call the archē, or ruling principle, behind matter. He wanted to understand what made all of the physical stuff in the world — rocks, trees, animals, men, etc. — exist as it existed, and change as it changed. Note that it is precisely this line of inquiry that recently led to the discovery of the Higgs boson. In other words, modern science derives directly from the attempts to answer the questions that Thales and others in Classical civilization started asking. Thales is often recognized (more because it is convenient to pin the movement to a name we know, and less because we actually know that Thales started the movement) as the founder of Western philosophy.

Fashions come and go, and probably the fashion for this style of philosophy around the islands and coasts of the 5th-Century BCE Mediterranean would have died out eventually, or it would have been absorbed into other cultural traditions, such as local religions or schools of engineering and other practical arts, if the Greek-speaking people of that area had not become enormously wealthy (by the standards of the time), and if they had not developed a new social technology: democracy. With enormous wealth came both the desire, on the part of the rich, to keep as much of it as possible, and the leisure time to come up with novel strategies for doing so. With democracy came arguments, lots of arguments, and often these arguments were about wealth, who should have it, and what to do with it. When wealth, leisure, democracy, and the fashion for philosophy came together in Classical-age Athens, the most prominent city-state in the Greek-speaking world, a cultural crucible began to heat. Its molten mixture, when it was poured out, would be forged into weapons and tools unlike anything the world had ever seen. Many centuries later, with these weapons and tools, the West would conquer the world.

Hillary Versus America: Part II

I have said that Western civilization faces a crucial choice: elect Donald Trump and struggle a little longer for life, or elect Hillary Clinton and resign itself to suicide. But what is Western civilization? What does it matter if it lives or dies?

Western civilization is the civilization “west of Greece,” the civilization that emerged from the ashes of the Classical civilization of Greece and Rome. It might also be called “European civilization” (although I will not be calling it this, for reasons that will become clear). Beginning more than 1500 years ago, as the Roman variant of Classical civilization declined and fell, a number of elements mixed: the cultures of the invading groups (Goths, Visigoths, Vandals, Huns, etc.); the emerging religion of Christianity, which bore with it cultural elements from Judaism; and the remnants of Classical culture and civilization. Of crucial importance to our concerns here, the remnants of Classical civilization connected the emerging Western civilization with the culture of Homer, Euclid, and Aristotle. This connected Western civilization to a tradition of reason (especially Greek philosophy), which would develop in tandem with the Judaeo-Christian tradition of faith. Because both Christianity and Greek philosophy have shaped the history of Western civilization at every turn, and because these two cultural forces have had a complicated and contentious relationship, the history of Western civilization has sometimes been understood as the story of the conflict and cooperation between Athens (Greek philosophy) and Jerusalem (Christianity). In sum: Western civilization is the civilization that developed in Europe after the fall of the Roman empire, mixing Classical reason and Judaeo-Christian faith, and developing from these twin foundations.

(Western civilization has been uniquely successful, and is now the dominant civilization on the planet. The United States of America, it must be noted here, is an outpost and an outgrowth of this civilization. Among other things, this means that if Western civilization were to fall, it would entail the end of the United States, either through its dissolution as a sovereign state, or by being absorbed into another civilization, or both.)

Having a definition of Western civilization does not yet tell us whether it should continue to exist, or why it matters. To answer those questions, we must consider civilizations in general, and we must have some standard for evaluating them. How, in general, would someone determine whether any civilization deserves to exist?

A popular answer these days might be that all civilizations deserve to exist. If people generally believed this, you would see organized attempts to preserve any civilizations that were in decline, much like environmentalists engage in activism to protect endangered species. And, in fact, we do see some evidence of this kind of civilization-preserving activity, although, partly because civilizations are very big, and (usually) take a long, long time to die out, what we mostly see is activism directed at preserving smaller cultural groups. We see activists and linguists trying to preserve dying languages, for example. We see activists and anthropologists trying to preserve indigenous ways of life as they come into contact with Western civilization. But, although there is certainly value and beauty and irreplaceable uniqueness among the people of every civilization that exists, or that has ever existed, the popular answer is wrong. Not every civilization deserves to exist. For example, Soviet civilization has died, and we should all be glad that it is dead, and hope that it stays that way.

If the popular answer, that all civilizations deserve to exist, is wrong, we are still left with our previous question: How, in general, would someone determine whether any particular civilization deserves to exist? First, one would need to determine what any given civilization is, i.e., what is essential to that civilization or what is distinctive about it. Second, one would need to determine what values that civilization would nurture and what values it would weed out. Third, finally, and most importantly, one would need to have some way of evaluating these values: would it be good or bad that a civilization advances its particular values and holds back or destroys other values?

In my view, if it were well understood among Americans what Western civilization actually is, Hillary Clinton would have no chance of being elected. Therefore, I will focus in the next installment on identifying the essence of Western civilization and on identifying the values which would be preserved along with it; I will not attempt, at this time, to fully demonstrate why these particular values are worth preserving and advancing.

So what is distinctive about Western civilization? What makes it different from all other civilizations? In brief preview, the values distinctive to Western civilization are, in order of ascending importance: individual rights and reason. These are the values a second Clinton presidency will work to torture to death. In the next installment, I will begin to explain why these values should be recognized as distinctive to Western civilization.

Hillary Versus America: Part I

Western civilization is not prepared to face the choice between Donald Trump and Hillary Clinton. But it is the nature of the most important tests that they do not come at the time of our choosing, but in their own time. Ready or not, willing or not, able or not, the test has begun. The West must not fail. Hillary Clinton must not take office as president.

The stakes are these: if Donald Trump wins, the West earns what looks likely to be its last reprieve. If Hillary Clinton wins, the West will return to business-as-usual. At present, business-as-usual means: suicide. If Hillary Clinton takes office, she will bring the West its draft of hemlock, and, like Socrates, even though it could easily refuse, it will drink. Socrates drank because he thought (or said) that life was a disease, and that service to the state was the most noble end. But life is not a disease. And there are better ends than service to a mob.

It has been wisely said that “[t]he weight of evidence for an extraordinary claim must be proportioned to its strangeness.” The long-range consequences of human action (unless, say, they involve climate change) are, as a rule, not considered in contemporary politics. In fact, such considerations are implicitly forbidden. (Forbidden how, and by whom? — you should ask.) To even suggest that there could be a connection between something as mundane as a presidential election and something as grand as the fate of a civilization is considered bizarre, outré, even gauche. Since there is no way for me to escape these prejudices born of ignorance, I shall seek, in what follows, to overwhelm them with a mass of evidence.

As the evidence mounts, I will pursue two aims. First, to convince you to see Hillary as I do: as a Trojan horse stuffed, not with Greeks, but with Unitarians; a rough beast slouching toward Washington, a Fabian chimera of Nurse Ratched and Lisa Simpson, an apocalypse in a pantsuit. Second, to convince you that the evaluative tools that mass education and mass media have provided are no tools at all, that it is impossible to make a responsible and informed decision about Hillary’s candidacy if one is armed merely with (synecdochically) a college degree and a copy of The New York Times.

As we proceed, please keep the following in mind:

  • That Hillary is apocalyptically bad does not mean that Trump is good.
  • Nothing I write here is to be taken as an endorsement of ordinary participation in the political process. I hold voting in contempt, because ballots are, ordinarily now, assault weapons wielded in gang savagery. But I do not hold self-defense in contempt, and voting against Hillary can be just that.