My Kantian-Kierkegaardian Christianity

In accordance with my standing as an outsider, I’ve been increasingly inclined over the years to make up my own mind about matters of philosophical theology, attendance at religious ceremonies, and matters of faith. I am unimpressed by the tenured faculty member who proudly proclaims his atheism, unless he or she can show evidence of having actually read a few lines of the New Testament, or has an explanation for those curious events many of us have experienced that don’t quite add up on the materialist view of the universe.

These days I think of myself as a Kantian-Kierkegaardian Christian. What does that mean, and how does it differ from some kind of fundamentalism?

First, I’ve long accepted the idea that Western civilization is the scene of a long cold war between two worldviews. One is that of Christendom in a broad sense, the worldview of Christianity. The second is materialism, or materialist naturalism. There are other worldviews, but they are not major players, at least not at the moment. And there are several variations on both Christianity and materialist naturalism. There are the many Christian denominations, for one, leading to disputes over how to identify a person as a Christian. And there is Marxist materialism, which differs considerably from the varieties of materialism found in the so-called capitalist world.

I am more interested in what they have in common than in where they differ. What all forms of Christianity have in common is the existence and centrality of the Christian God, and the believer’s fealty to His dictates, including how Christian salvation occurs (Jesus Christ). Whether God’s existence can be proven is one of the longest-standing debates in philosophical theology, and in Western philosophy generally. The upshot of over 2,000 years of conversation is: probably not. The three arguments for God’s existence most widely studied by philosophers, the ontological, cosmological, and teleological arguments, all face well-known and daunting criticisms. The really interesting question is: where do we go from here? We can, of course, simply become atheists. But atheism does not follow logically from the failures of the arguments. How about agnosticism? Agnosticism might seem to be on safer ground epistemologically, but not existentially. Sooner or later, you must choose. Do I believe, or do I disbelieve? You must commit to one or the other, and be willing to accept the consequences. To refuse to talk about it, and to live a life under the assumption that belief in God has no part to play in it, is to commit to God’s nonexistence. Are there grounds for belief, absent a decisive argument for God’s existence? Is an argument for the existence of a Being such as God even a good idea? A case can be made that it is not: that such arguments really are a misuse of whatever rational faculties we have, however construed.

From the German philosopher Immanuel Kant (1724 – 1804) we obtain the idea that the human mind is structured to operate in a world of three dimensions plus time. He developed this idea in great detail in the Critique of Pure Reason. To put it as simply as possible, and hopefully without oversimplifying too much: Kant held that the world of experience is a kind of construction of consciousness, via forms of intuition (space, time) and categories of the understanding. Reality apart from the constituting human consciousness is an unknowable Ding-an-Sich (thing-in-itself). To make this a tad more accessible, all we need to say is that we cannot step outside our humanness to see what reality looks like from a totally neutral vantage point, outside the structure our conscious intellects bring to experience. You cannot cease being human. You don’t have a “God’s eye point of view.” If God exists, only He has a “God’s eye point of view,” standing outside of space and time. Subtract the theistic element here, and we have what has been a common theme of much subsequent philosophy, explaining the increasing importance assigned to such matters as historicity, culture-centeredness, and so on. For if it is true that we cannot cease being human, neither can we cease being members of particular cultures. How to construe these apparently genuine limits on our cognition remains a problem, because if they are pushed to extremes, they lead to various forms of self-stultifying relativism and subjectivism.

A realist, non-constructionist version of Kant’s ideas is possible that would enable us to avoid a variety of problems. It would hold that our world of experience, of physical nature, is real as far as it goes, as indicated by our ability to act effectively in it. The objects of experience, that is, are real, but our experience is only of certain elements of them. I have a visual experience of the table my computer is sitting on as I type this, but not of the atoms and molecules which comprise the table; given the limitations of my visual perception resulting from the way my eye, optic nerve, and the visual center in my brain are put together, this is to be expected. Physical nature extends beyond the senses, in other words, and this is the testimony of modern theoretical physics.

Physical nature, too, hardly needs to exhaust reality (as materialism asserts). Theoretical physics supports this idea as well. Theoretical physics is based on higher mathematics, not experience; higher mathematics is, in turn, based on necessity. It is not arbitrary, even if we can find many cultures that make no use of it. The mathematics of, e.g., superstring theory appears to require higher dimensions: nine spatial dimensions (ten counting time), eleven in the M-theory extension. The details are unimportant for our purposes. All that is required is that we be realists about mathematical objects and their ontological implications, as it is unclear what our being something else (nominalists?) would amount to. We find ourselves in a transcendental reality vastly different from the world we experience, something that was evident over a hundred years ago when Albert Einstein (1879 – 1955) was writing.

The most advanced modern science, in this case, is at variance with the requirements of materialism but compatible with Christianity. Note carefully: compatibility is a logical relation. Two propositions are compatible if both can be true in the same possible universe. Physical science may not prove or even offer direct evidence for the existence of any deity, or set out to do so. It, like experience, begins with the assumption of a physical universe of three spatial dimensions plus time, at least until reasons appear for questioning that assumption. Its presuppositions, that this universe is both ordered (not chaotic) and that its order is comprehensible to the human mind, are suggestive. But are these presuppositions true? And what does it mean to ask this?
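The notion of compatibility invoked here can be made precise in possible-worlds terms, a standard device in modal logic, offered here only as an illustration:

```latex
% Two propositions p and q are compatible (mutually consistent)
% just in case some possible world makes both true:
\mathrm{Compatible}(p, q) \;\iff\; \exists w \,\bigl( w \models p \;\wedge\; w \models q \bigr)
% Equivalently, in modal notation: \Diamond (p \wedge q).
% Note that compatibility says nothing about whether either
% proposition is true in the actual world; it says only that
% their conjunction is not contradictory.
```

This is why compatibility is such a weak relation: establishing that science and Christianity are compatible claims nothing about the truth of either.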

All of which brings us to the Danish philosopher Søren Kierkegaard (1813 – 1855). Some historians of ideas label Kierkegaard’s philosophy “Christian existentialism.” He himself had no label for it, and would have rejected labels. And he would have been suspicious of the above presuppositions, which are just too easy — thrown into doubt by what we would today call “black swan” events.

Kierkegaard emphasized that God’s existence is impossible to prove. He singled out the design argument, popular in his day and still popular now among many Christian theists under the banner of Intelligent Design, even though Intelligent Design does not entail that the Designer is the Christian God. His point was that perceived disruptions in the design would cause the perceiver to throw out the whole thing (cf. Philosophical Fragments, ch. 3), quite apart from the absence of any logical entailment from “Intelligent Design” to Christianity’s God. An agnostic friend once reasoned to me, “There’s no proof.” It dawned on me later that day: that’s true, and it’s the key! Christianity is based on faith, on a Kierkegaardian “leap” (his term), and cannot be based on anything else.

Nor can any other worldview be based on anything else. Materialist naturalism seems to me refuted by recent findings of theoretical physics, by the many experiences we have that do not align with it, and by careful attention to what goes on with language and our understanding of it (not a material process). I know that any materialists reading this will complain that I don’t understand materialism. It is always possible to reinterpret experience so that it fits a favored point of view, usually by ignoring or discounting those elements that “don’t fit.” The point I would make is that materialism is not a finding of any science. Nor does any scientific theory entail it, not even the ones on which there is large-scale consensus, like Darwinian evolution. Those who believe otherwise need only lay out the logic leading from specific findings of these sciences to the general thesis that materialism is a true account of reality. I submit that this cannot be done. Materialism is, if anything, a presupposition of a certain way of looking at physical reality alone through science. Empirical science, therefore, has not refuted Christianity, nor could it do so. Much modern philosophy of science, positivism and its immediate descendants, begins by placing science on an epistemic pedestal (a few writers such as Paul Feyerabend excepted). Remove science from this pedestal, and all this becomes abundantly clear.

We can say all this before we even get to ethics, or political philosophy, or legal philosophy, with all the practical problems they lead to. I submit: in all three of these areas, if our goal is to base our conclusions on reason instead of obedience to secular authority, materialism leaves us completely at sea. You can see this by googling the essay by the British philosopher Bertrand Russell (1872 – 1970), “A Free Man’s Worship,” and reading it carefully. It is a classic statement of the secularist’s stance based on purported findings of modern science, and of the secularist’s dilemma. In different ways, the Russian novelist Fyodor Dostoevsky (1821 – 1881) and the German philosopher Friedrich Nietzsche (1844 – 1900) each gave far more dramatic and compelling accounts of the actual consequences of “God [being] dead.”

But this should be sufficient for now. Normative matters require a far different discussion than we’ve provided here. I will discuss them in a future post. Just as a teaser: while it is possible to quarrel with Christian ethics, or claim that there are many points where the Christian worldview is unclear, all I can say at this point is that Christian ethics is no worse off than any of the available secular theories, all of which fail miserably if the idea is to establish them with something more than, “This is where I make my stand.” As for legal philosophy, I will invoke legal positivism: the idea that the law is whatever those in power say it is. Legal positivism stands unnoticed in the background behind social issues ranging from abortion to Kim Davis’s refusal to put her name on marriage certificates for gays, and behind Judge Bunning’s contention that natural law would set a terrible precedent. It would at that, because it would destroy the entire edifice of legal positivism! But … I get ahead of myself. More on a future occasion. Where we will end up: there are propositions worth believing … even if you cannot prove them true! This ought to be compatible enough with all the modern and postmodern tendencies to be of some interest!

Posted in Christian Worldview, Philosophy | 1 Comment

Theses on Political Economy (Pre-Meltdown of 2015?)

A borderline panic has gripped U.S. markets, which experienced their worst two-day drop since 2008 (roughly 890 points). This may be just a prelude to things to come. Before going on: I sincerely hope this analysis is wrong! It may not be, and if you live in a big city in the U.S., my advice would be to pack up and get out!

In my book Four Cardinal Errors I spoke of the Meltdown of 2008, which ended with the infamous bailout of the very financial institutions whose reckless behavior caused the meltdown in the first place. But I could cite writers the rest of the day, many of them outsiders like myself who see the forest because we aren’t blinded by the trees, who believe the real financial holocaust is still ahead. With epicenters in China’s bubble economy, in banker-dominated Europe (following the resignation of Alexis Tsipras in Greece), and in the U.S., this holocaust could be about to begin.

We must revisit first premises in political economy. A philosopher is obligated to do this. Anyone who believes any of these theses is wrong may challenge me. Show me where!

1. Real wealth (as if there is any other kind) must be produced. It cannot be conjured out of thin air by, e.g., a central bank. The latter idea is magical rather than rational thinking. There are premises here which could have been earlier theses, but they go outside political economy. The physical universe is indifferent to our fate. Nor is God going to “drop us a sandwich” from on high. Hence the necessity of human action. Mises had that right. Actions — the conscious employment of specific means to achieve specific ends — are directed at solving problems, the first of which is ensuring our ability to eat and maintain shelter over our heads.

2. People increase their real wealth legitimately by producing more than they consume, within their communities. This involves cooperation as well as competition. Indeed, people must cooperate on many things if they are to live in a community.

3. A sustainable advancing civilization must therefore emphasize production and not just consumption. It must also encourage voluntary cooperation and trust. Voluntary cooperation is best, but as Hayek observed in The Constitution of Liberty (1960, ch. 4), coercion is avoidable only to the extent people act responsibly and predictably.

4. A sustainable advancing civilization must be able to pass its sustaining principles on to ensuing generations. This means taking education seriously, encouraging a variety of enterprises and endeavors to further it.

5. One’s moral compass should recognize the fundamental benevolence of production; it should also recognize both that “freedom is not free” and that freedom is not an absolute license to do anything one wants, because both ideas and actions have consequences, some of them unintended. Such ideas should be built into all education. And we should be able to revisit our goals and change course if actions to achieve those goals are causing harm we did not foresee. In other words, a moral view of the human world is a necessity, as are flexibility and a certain pragmatism. Morality should recognize the intrinsic value of persons, with all that ensues from this. A person is a genetically human being, with not just a right to life but a right to live as fully as possible without infringing on the similar rights of others. To this extent, the libertarians have the right idea. What they get wrong is that people do suffer setbacks not of their own making, or begin with disadvantages not caused by themselves. And they get too old to work, and must be taken care of, if not by family then by someone. To this extent, limited welfarism and safety nets are understandable.

6. There is a minority in any population that appears to lack a moral compass, however we understand it. This sociopathic minority adopts a fundamentally criminal mentality that plunders rather than produces.

7. A sizeable fraction of this minority is fascinated with power, and is driven to obtain power … and may well produce for a time (think Rockefeller in the late 1800s), intending to use its wealth not for the benefit of civilization but to buy favors from others, and eventually a political class of surrogate plunderers. This minority has also grown very skilled, almost instinctively, over time, at exploiting every secular materialist and ethical relativist philosophy, and every group-derived source of division and distraction, to sow confusion and disorder wherever possible, pursuing a “divide and conquer” strategy. Issues such as gay marriage, the so-called campus rape culture, and “black lives matter” fall into this category today.

8. Our civilization is failing in large part because we failed to identify this minority, much less understand its mindset or place checks on it … originally the infamous “robber barons” whose minions went on to create the Federal Reserve System in the U.S., followed by European central banks and the Bank for International Settlements. They understood a remark made by the early John Maynard Keynes: that the system would begin the process of consolidating wealth and power in the hands of the few while discouraging saving and other responsible actions on the part of the many, doing so “in a manner which not one man in a million is able to diagnose.” They saw the masses as sheep who responded to conditioning, and designed educational philosophies and systems accordingly. (Think: John Dewey, who was discovered at the Rockefeller-endowed University of Chicago and received his initial funding from the Rockefeller Foundation.)

9. It is over a century since the Federal Reserve System was created, and the descendants of this minority now have the financial systems of all the major economies of the world in a stranglehold. They are what I call the superelite (a term I use to avoid confusion with visible national political elites, most of whom are bought). They have discouraged production in advanced nations in favor of financialization. They have encouraged those working for lower wages to continue spending as if there were no tomorrow, going into debt to maintain the same standard of living. They have created a single globalized network, whereby if, e.g., China drops into a massive recession, the ensuing economic effects will ripple across the rest of the world. They operate primarily through central banks and the corporations that have grown up around them. Although these institutions are technically private corporations, they do not really produce anything. They do not add value to the world. They merely manipulate paper (or its electronic equivalent). They engage in magical acts of invention, creating complex instruments such as derivatives, whose combined paper value exceeds the real wealth on the planet.

10. It became conventional quite some time ago to blame “capitalism” for this situation … although the merging of private and public we see in the real world (as opposed to the fictitious worlds of ideologues of various stripes) is more characteristic of fascism. Since historically fascism has always been openly totalitarian and nationalistic, while the system we see is at most covertly totalitarian and globalist, the terms corporatism, global corporatocracy, or technofeudalism are surely better. But clearly, casting this issue as a clash between capitalism and socialism greatly oversimplifies the matter at hand.

11. The proper strategy is to pursue decentralization … a devolving of power away from the center, away from central banks, leviathan corporations, and bloated governments whose bureaucracies enact the will of the superelite and bought-and-paid-for political classes. This strategy emphasizes the small over the large: small businesses, small and highly mobile educational enterprises of a wide variety, etc. This is the only way of ever recovering a world where we produce more than we consume, generate real wealth, and can also be at peace with each other and with the environment as a whole. See Leopold Kohr for some good ideas. His book The Breakdown of Nations, published in 1957 and completely ignored by mainstream academia, should be required reading.

12. The only way to pursue this strategy realistically is through a reawakening of skills-building and critical thought at all levels. The skills must include agricultural (growing things) as well as manufacturing (making things) and technological (coding, etc.). Critical thinking is an absolute must, as a means of achieving sufficient mental independence and grasping the logical structure of one’s surroundings. The result is that one can plan one’s actions rationally (i.e., recognizing that ideas and actions have consequences, some of them unseen). Part of the reason we are in our present predicament is the gutting of critical thinking as part of the gutting of liberal arts learning generally … to create the kind of obedient sheep fit to live in the covert (or inverted – to use Sheldon Wolin’s term) totalitarianism of the globalized order the superelite have been gradually constructing.

This last is the foremost political-economic reality of our time. If it is dismissed mindlessly as a “conspiracy theory” and not recognized and acted upon, none of our problems will be solved. Many people, moreover, will continue to pursue ideological fantasies or create political heroes of the day (Donald Trump is the best present example, despite his getting some things right), and our civilization will continue its present decline. Under those conditions, those not connected to the superelite as functionaries or to the political class as bureaucrats will find themselves living in third-world conditions, having to begin again from scratch, and with far fewer resources than are available now! Wouldn’t it be better to address the problems now, before an unprecedented global financial emergency takes the world to that point?

Posted in Political Economy, Where is Civilization Going? | 1 Comment

Worldviews and Christianity: What the Loss of the “Culture War” Means

It is Sunday, but I didn’t make it to church this morning. When we lived in Las Condes, our church, San Marcos, was just a few bus stops down the road. Now, it’s over an hour away, and the public transportation is far less convenient (not to mention the six block walk from the metro station at the far end).

And my wife has been a bit under the weather, so I’m staying here. Some of it might be the pollution. Libertarians won’t like the idea, but Chile is likely to need something akin to a Clean Air Act before long. Or the city of Santiago, at least.

Went to church last week. Left with enough to think about for two weeks.

It was in high school that I became conscious of the reality of worldviews, as we philosophers sometimes call them: comprehensive theories or accounts of reality, incompatible in some areas and outright incommensurable in others. (Incommensurable means: unable to be translated into, or brought under, a single vocabulary and/or set of standards and/or values that will resolve core conflicts to the satisfaction of all who disagree. The term originates in mathematics: the Greek discovery of incommensurable magnitudes, such as the side and diagonal of a square, which admit no common unit of measure. Incommensurability in scientific vocabularies was my dissertation topic.)
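The mathematical case just mentioned can be stated precisely. The side and diagonal of a square share no common measure, which is equivalent to the irrationality of $\sqrt{2}$; a standard sketch of the classic argument:

```latex
% Claim: the diagonal d and side s of a square are incommensurable,
% i.e., d/s = \sqrt{2} is not a ratio of whole numbers.
\text{Suppose } \sqrt{2} = p/q \text{ with integers } p, q \text{ and } \gcd(p, q) = 1. \\
\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p \text{ is even: } p = 2k. \\
\text{Substituting: } 4k^2 = 2q^2 \implies q^2 = 2k^2, \text{ so } q \text{ is even as well,} \\
\text{contradicting } \gcd(p, q) = 1. \text{ Hence no common unit measures both side and diagonal.}
```

The analogy to worldviews is only partial, of course: incommensurable magnitudes still inhabit one geometry, while incommensurable worldviews share no common standard at all.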

Worldviews. The idea simply sat in my brain for a very long time.

The two clearest and closest-to-home examples of worldviews in conflict are Christianity and materialism, leaving aside for now the variations within each. The first finds its ground of reality, knowledge, and valuation in a transcendent God as Creator: a God on whom all reality ontologically depends, who is inaccessible to the human senses (unless He chooses to make Himself accessible), and whose reality and nature are only partially graspable by the finite human mind. The second limits reality to the physical universe in space and time. Only science and reason, materialists insist, provide reliable knowledge of reality (reliability includes continuous revisability). Materialism, they contend, is the key to progress; Christianity is not.

Christianity offers a transcendent ground of moral values intended to supervene over other sorts of values: aesthetic, epistemic, and economic. Materialism reduces all valuation to the status of cultural artifact: human animals evolved morality in culture because it has survival value. Western civilization as it developed was fundamentally (although not exclusively) Christian. Some writers credit Christianity for the very idea that the universe is both ordered, not random or chaotic, and intelligible to the human mind: ideas that do not exist everywhere. Without them, there is no motivation to do science. Gradually, beginning in the 1700s, continuing through the 1800s, and culminating in the 1900s, various forms of Christianity were replaced by various forms of materialism. Materialism is essentially the guiding philosophy of Western civilization at present, and I see its prevalence as one of the main reasons U.S. civilization is failing. It is not the only reason, but it is the most important one: it has precipitated an utter vacuum in morality and in culture, for materialism cannot produce a moral system with teeth in it: one in which there are actual rewards for virtuous conduct and decisive penalties for heinous conduct. There are only legal controls, which are easily circumvented with money and personal contacts. Ask the Clintons. Do not expect an honest reply, however! Ordinary people can circumvent what they perceive as inconvenient rules. The 11th commandment: thou shalt not get caught!

Thus the materialist worldview enables, and encourages, an ethic of cynical opportunism, especially in one’s business or financial and personal affairs; and also especially in government. Both are about power and control over resources — and over people. What does all this have to do with the now-lost “culture war”? Plenty, as it turns out!

The “culture war” is essentially a conflict between the idea that there are universal standards of merit, quality, etc., in literature, philosophy, and so on, and the idea that all standards are local and history-specific, or group-specific. Since at least some standards are history-specific, there is just enough truth in the latter idea that materialism has enabled it to catch on and rise to dominance in academia. To those who defend the latter, the former idea, of universal standards, is a delusion: an objectification of the local or group-specific (the group being straight white Christian males, of course). Just yesterday I encountered the strange idea that reason itself is a straight-white-Christian-male construct. The “philosopher” being interviewed apparently doesn’t realize the racist, sexist, homophobic, etc., implications of his pronouncement (he’s a white male): that other groups are incapable of using reason and so are fundamentally nonrational or irrational (take your pick: the position is not stated clearly enough to decide).

All this is somewhat beside the point. As far as the “culture war” goes, it is clear that Christianity has largely missed the boat. It missed the boat long before the ongoing clash began to be called that. Christians sat on the sidelines during the early civil rights movement, the early women’s movement, issues involving poverty, those involving the environment, and so on. Those arose because of a genuine sense of injustice, or in the case of the last, that commercial products (e.g., pesticides) were, as a matter of objective fact, damaging other species and ecosystems themselves in the name of profitability.

Christians at one level, what we might call “Christianity of the masses,” were more worried about whether it was sinful to have a beer, go to dances, etc. “Life application” ministries, which dominated in many churches, trivialized Christian teachings. At another level were those who read Hal Lindsey’s The Late, Great Planet Earth, became obsessed with the “End Times,” and expected they would be Raptured any day now! They wouldn’t have to worry about it as Planet Earth went through seven years of Tribulation! (A real eye-opener on this subject for me was Gary DeMar’s Myths, Lies & Half-Truths: How Misreading the Bible Neutralizes Christians (2004), although I’d dissected the insidious influence of Scofieldism on Christianity several years before.)

Secularists thus did all the heavy lifting, as it were, trying to make this world a better place, and naturally every one of those movements eventually went off course. Their dark sides are now on display, just as the dark side of capitalism is on display for all to see. Whether your politics are “liberal” or “conservative,” whether your economics are capitalist or something else, if you combine them with materialism and its failed attempt at a secular ethic, whether based on utility or simply on money, what you get will eventually self-destruct.

The victors in the “culture war” are causing the various cultures that make up America to self-destruct. A few of us outsiders have figured this out. Others will, too, probably when it is too late. Christians, I predict, will not be “raptured.” But they are already being persecuted. They are even being physically attacked.

We saw the beginnings prior to the Supreme Court’s gay marriage decision in June, with business-destroying legal declarations and then pronouncements that openly violate Christians’ rights under the First Amendment. I don’t need to rehearse all these details. But one upshot of last week’s thoughts: there is a sense in which the Christian churches had this coming, as the long-term consequence of having dropped the ball long ago. Some of us were warning over twenty years ago that this day was coming. We were either ignored or dismissed as paranoid.

In case you haven’t figured it out, in the dispute between Christians and materialists, I stand squarely on the side of the Christians. If you’ve read this far and you can’t tolerate that, I am sorry for you — but I would suggest that you revisit your conception of tolerance. If you are honest, you might find it wanting.

But we are wanting as well, and I would never say otherwise. Today, sadly, we Christians are widely seen as narrow-minded, bigoted, hateful, intolerant, anti-intellectual, etc.

Secularists of whatever stripe see themselves as informed, open-minded, tolerant, progressive, etc. — even when behaving destructively. It is okay, in their view, to demonize and destroy those perceived as “intolerant.”

For my own part, I do believe the current state of affairs indicates that Christians need to get their own house in order, and be sure of themselves. If one reads the letters of the Apostle Paul and thinks in terms of context, who is he writing to? Is he writing to the Emperor of Rome, or to Roman authorities generally? No. He is addressing Christians. He says, for example, in I Corinthians 10: 6 – 12: “Now these things became our examples, to the intent that we should not lust after evil things as they also lusted. And do not become idolaters as were some of them. As it is written, The people sat down to eat and drink, and rose up to play. Nor let us commit sexual immorality, as some of them did, and in one day 23,000 fell; nor let us tempt Christ, as some of them also tempted, and were destroyed by serpents; nor complain, as some of them also complained, and were destroyed by the destroyer. Now all these things happened to them as examples, and they were written for our admonition, upon whom the ends of the ages have come. Therefore let him who thinks he stands take heed lest he fall.”

Who is Paul talking to? Christians! Who is he referring to, who met with all these unfortunate consequences of bad behavior? Those who fell away, and acted no differently than unbelievers. One of the most striking things is how little Paul has to say directly to unbelievers, or about their practices. There was, after all, plenty of heinous behavior in the Roman Empire besides mere persecution of Christians: suppression of local peoples under the Romans’ heels, violent gladiatorial contests, and plenty of sexual misconduct. I Corinthians 5: 12 – 13: “It isn’t my responsibility to judge outsiders, but it is certainly your job to judge those inside the church who are sinning in these ways. God will judge those on the outside …”

This is not to say Christians shouldn’t engage the world. But they should do so with full awareness of their worldview and what it requires of them: loving others; accepting responsibility, including taking seriously the idea that when God gave us “dominion” over the world (Genesis 1:26) he meant respecting and caring for it, not despoiling and destroying it; and actually retaining the life Christ lived as a regulative ideal. There are themes Christians can glean from the progressives, many of whom are well-intentioned even if the fundamentals of their worldview were bound to steer them off course. Many progressives have a sense of community and of responsibility that is lacking in other ideologies, ideologies often just as thoughtlessly steeped in materialism as progressivism itself.

It is too late, I think, for Christians to avoid open persecution in U.S. society, where the combination of progressivism, pseudo-tolerance, and hedonism now prevails. But it is not too late for us Christians to get our own houses in order, so that we do not live or act in ways Scripture condemns when trying to engage this world.

Posted in Christian Worldview, Culture, Where is Civilization Going?

Modern Moral Philosophy (Part Two—Is the Libertarian Non-Aggression Principle Adequate As the Foundation for a Systematic Morality?)

In Part One (two weeks ago) we surveyed such questions as: are there such things as knowable moral facts, or is morality a cultural artifact? I argued that the former claim makes better sense of what we use ethical language to do, what our usages and living debates actually presuppose. That moral principles are not mere matters of opinion — individual, collective, or those of moral authorities — seems a reasonable working presupposition of any ensuing discussion.

I will assume that most libertarians are cognitivists. Their invocation of the non-aggression principle (NAP) is not intended as a mere sentiment, or opinion. To be sure we have the NAP right, I will state it as I understand it: no person may rightly act aggressively towards others, whether this means initiating force or coercion against them, defrauding them, or appealing to the state or some other surrogate power to initiate aggression on one’s behalf.

Libertarians (be they minarchists or anarcho-capitalists), as I understand them, intend this as a prime candidate for basic moral truth, or fact, from which everything else they have to say logically follows. They do not want to impose the NAP on you. That would violate the NAP. There is no logical entailment from the truth of a moral principle to the idea that it should be put into place by force. Libertarians therefore rightly argue in its defense, and try to live lives consistent with its application. The question I now want to consider is: can one do the latter in all circumstances? Is the NAP either a necessary or a sufficient condition for a functional as well as moral society?

Please note: when asking the question, I am not saying that the NAP will not do for the majority of situations where human beings interact. Clearly it will. I am interested in whether there are counterexamples: circumstances where it either does not apply, or would give bad advice and should be overridden, e.g., by principles favoring the preservation of a life. The situation may be analogous to the relationship between Newton’s physics and that of Einstein. When Einsteinian relativity became the dominant paradigm in physics, we did not stop using Newtonian mechanics to put up buildings and bridges. What we showed is that Newton’s ideas are limited to the world of middle-sized objects and situations. Likewise, the NAP may be perfect for our workaday world of business interactions, and a very good justification for keeping government as small as possible (assuming that isn’t still too big). But are there life-and-death circumstances that (dare I use the word?) force us to set it aside because the results of not doing so could be disastrous?

A few years ago I found myself in a quandary. My elderly parents had been in declining health for a number of years, and were reaching the point of near-helplessness. My mom, a stroke patient since 1999, had needed round-the-clock care ever since due to partial paralysis on her right side and additional internal physical problems. My father had been able to take care of her, but was himself having physical problems and showing signs of dementia, possibly Alzheimer’s disease. These began sometime during 2007. At first they were minor, such as forgetting a password or whether the cats were inside or outside. Then one day he returned from the grocery store, having forgotten what he went there for. He began to misplace things, including money. Gradually problems worsened, as on a couple of occasions he’d lost track of where he was while out driving. My mom was scared to death he’d cause an accident. She also feared how long it would be before he put something on the stove or in the oven and forgot about it, the result being a fire that trapped them both inside a poorly designed house (imho) with just one convenient exit to a carport (another to a back porch with no stairwell down and a third from a basement inaccessible to my mom). It was clear: something had to be done, and I was the one who had to do it, since I was closest geographically.

I urged them to sell the house and go into assisted living. When my father did not want to go voluntarily, I basically forced the issue. He’d been injured in a fall, which made matters easier than they otherwise would have been. He’d wanted to return home following a couple of weeks of rehabilitative therapy. I wasn’t about to let him. Instead, I basically used force to get him into assisted living, along with my mom. Another fall, a few weeks later, fractured his hip and put him in nursing care. Had he fallen in or outside the house, with no one but my mom around, he might have lain in one place unattended unless he could attract a neighbor’s attention. That would have been a long shot, as their neighborhood was remote, woodsy, and with houses spread well apart. Neither of my parents had kept up with technology. Neither had obtained cell phones they could use to call me.

My use of force on my father turned out to be the right thing to do, even if unauthorized by the NAP, and I have to ponder whether force is also justified in many parallel circumstances by the need for certain results, such as the general need to care for elderly people who are no longer mentally able to make their own decisions or aware of their own best interests — even when they don’t want these decisions made (and my father had fought my decision tooth and nail!).

At the time I wasn’t thinking about the NAP. It dawned on me much later that it would have allowed me to do nothing for my parents. They would have either eventually starved in their house, or my dad would have caused a fire or some other calamity. The NAP gave me no advice in the matter. I could have kept hands off, done nothing, without having violated it. You’ve not aggressed against someone by leaving him or her to rot. On the other hand, a libertarian could argue that I did violate the NAP when I used coercion to get my father into a facility where he could be properly cared for, and I would have to agree. It slowly became clear to me, however long after the fact, that the NAP doesn’t help us here, but rather gives us very bad advice: if you have no obligations to others except to “leave them alone,” then you have no obligations to elderly family members in grave danger of injuring themselves or of being left to rot. Never initiate coercion against others, the NAP says. Not even to save their lives when they are unable, physically or mentally, to take responsibility for themselves?

One specific point where the libertarian argument can go wrong is in the assumption that everyone has privileged access to his/her own best interests. In the case of children, this is trivially false, of course. Even John Stuart Mill did not apply his classical liberal “harm principle” to children. In ethical deliberations we usually (not always!) restrict our considerations to adults. But not all adults know what is in their own best interests all the time. The Alzheimer’s sufferer clearly does not. It is likely that anyone suffering from an addiction does not. Research indicates that addictions are physical, not merely mental. They involve changes in the brain. Expressed in familiar language, the addict wants more of whatever he/she is addicted to, and will sometimes do anything, including harm others, to obtain it. Can you use force to prevent the addict from harming others? Should you use force, if doing so will have a greater likelihood than not of getting the addiction under control? It seems to me you can make a case for a Yes answer.

We could multiply, it seems to me, counterexamples to the NAP: circumstances in which it allows us to do nothing, when whatever moral intuitions we have are telling us otherwise! Or circumstances in which we have to violate it to fulfill a moral obligation.

I believe I have shown that we have obligations to family members that, other things being equal, override the NAP. The need to preserve the lives of one’s own can justify the use of coercion to do it; the value of life itself defeats eschewing force if force is what is needed. Our legal system, which includes components devoted to elder law, has been built up to recognize this, along with as many checks as are necessary to minimize abuse of the system. It seems right, too, that local government can penalize someone who fails to exercise commonsense judgment on such matters, whether with children or with infirm elderly people who no longer possess their full cognitive faculties.

Were all human beings responsible moral actors, such legal machinery would not be necessary. There is an adage: you cannot legislate stupidity out of existence; you can only attempt to minimize its effects on (often uninvolved) others.

Are there other circumstances, related not to family but to markets themselves, where the NAP can be overridden? This is a truly dangerous question for a libertarian. Markets would seem to be the place where the NAP is of most importance, to be upheld as the closest we are likely to come to a moral absolute.

It should be clear that unimpeded markets, to the extent they exist, simply give the masses what they want: no more, no less. Supply something the masses want in large quantities, or can be persuaded to buy in large quantities, and you will get rich. Simple as that. This is how Bill Gates did it, and it is how celebrities do it. The masses like technology, and they like entertainment. Subjective valuation and all that. Economic values, according to this argument, exist in the minds of individuals; they do not exist in things, or in acts of production (labor). Hence the rejection of the “labor theory of value.”

We do, however, live in an objective universe. That’s a controversial claim, philosophically; I understand that. But our experience in the world confirms it pragmatically. Real causes have real effects. If you touch a hot stove burner, you’ll get burned. If you drink four glasses of wine, you’ll get stinking drunk. Three more, and you’ll probably get sick. With more complex states of affairs, determining the exact cluster of causes may be difficult, but it is a reasonable assumption that there are causes. In this sense, cigarette smoking has a propensity to lead to lung cancer. Why someone got lung cancer even though she never smoked a cigarette in her life might be tough to explain, but we never assume it happened for no reason at all. (Kant, a transcendental idealist, contended that we look for causes because that is how our rational faculties work. A realist responds: our rational faculties work this way successfully because that is how physical reality works.)

Now suppose the masses want something we have discovered is destructive, in large enough quantities as to do general health damage, or some other kind of damage within communities. Call this something C. Because population P wants item C, does it follow logically that, despite the problems C will cause, I ought to supply C, knowing that if I supply C I will get rich because P wants to buy C and give me their money?

What is the exact logical entailment here? Again, we are dealing with a range of circumstances, not all circumstances where people are buying products in markets. There are many products for which there is no reason whatever why I shouldn’t supply them if I’m able and can make money doing it. But if C causes harm…. Harm to whom is then the question. Mill famously argued in On Liberty that persons’ free choices should not be interfered with to prevent them from harming themselves. I tried to show above that there are familial obligations that can override this judgment — in circumstances Mill couldn’t have known about. But on a larger scale, how many actions can we take that risk harming only ourselves? If I smoke cigarettes in public in your vicinity, I am not merely harming myself; I am polluting the air you breathe and thus placing you in danger of harm. Cigarette smoking thus conceivably falls outside of Mill’s argument and can be legitimately discouraged. Of course, if there were a way of isolating all smokers in hermetically sealed rooms so that their actions could harm only themselves, then Mill’s argument might apply. But in the actual world, there appears to be no ideal way of isolating smokers — no sealed rooms where they could contaminate one another’s lungs exclusively.

Most of our actions affect others. That is the problem. And in a complex, advanced civilization in which strangers are constantly coming within one another’s reach (unlike the world Locke and Mill inhabited), “your rights end where my nose begins” becomes much less straightforward. Back in 2000 in South Carolina an issue arose with privately owned video poker machines in private clubs or other facilities. Free marketers defended the companies that distributed them and profited from them, the rights of bar and night club owners to have them on the premises, and finally the rights of people to make the choice to play video poker. Problems had emerged, however. There was a case of a woman who left a very young daughter in a hot car in the middle of July while she went into a bar to play video poker. The child died. Other cases emerged of children asleep on floors in bars while one or both parents played video poker. While the parents may have made a “free choice” to play video poker, their children did not.

In addition, both neighboring businesses and nearby residents could reasonably complain that the clubs and bars sporting video poker machines drew troublemakers, or simply people who irresponsibly threw trash everywhere and sullied the area. These sorts of behaviors eventually drive customers away from neighboring businesses, harming them, and drive down property values for both those businesses’ landlords (or the businesses themselves, if they owned the property) and residents — additional species of harm. (A partial solution to such dilemmas is zoning, which violates the NAP on a large scale but which is taken for granted in cities and suburbs.)

Finally, scientific studies have shown that activities such as video poker are actually accompanied by a release of endorphins into the brain’s pleasure centers. They are, in other words, another species of addiction. Some forms of addictive behavior can be explained this way, which throws doubt on the idea that these choices are “free” (hence the scare quotes). While somewhere in here is the longstanding “free will” versus “determinism” controversy, the exploration of which would throw us off track, addictive behaviors, by definition, are not “free” in any reasonable sense of that term, and so they, too, fall outside the scope of Mill’s harm argument.

We may designate as D such products, which harm not just oneself but threaten harm to others in one’s vicinity, many of them against their will, and then ask: If some population P wants to buy D, and I can make money by supplying them D, am I morally allowed to supply D? If the use of D is against the will of a larger population outside of P, have I initiated force against them by allowing P to have D?

Where do the rights of those in the larger population begin?

We have arrived at the third party problem: if A transacts with B voluntarily, and their transactions negatively affect a population P against their will, sometimes without their knowledge, then should the transaction between A and B be prevented? Note that P did not sign off on whatever agreement was made between A and B. Were they coerced, or possibly defrauded (promised benefits that never materialized)? It is hard to say they were coerced, but they were clearly negatively affected. Real world example: the millions of people P who lost manufacturing jobs following NAFTA and GATT II, signed off on by corporate elites who had governments bought and paid for, and by media shills who rationalized their decisions for the befuddled masses by telling them “free trade” would create jobs when foreign customers wanted to buy new American products. Those foreign customers never materialized, of course; a system built on cheap labor does not generate a population with disposable income. The promised American jobs thus never materialized. Thus if A and B are multibillionaires, their voluntary decisions may affect millions of people involuntarily. This militates in favor of legal restrictions on voluntary decisions in very large systems with such large-scale impact: what most economists disfavor and call protectionism.

There is, of course, a huge problem of logistics here — when the multibillionaires have the governments that sign these trade deals bought and paid for. But that’s another post. Here, we are just considering what moral reasoning tells us. Moral reasoning is not economic reasoning, which is limited to material considerations of extraction and production, supply and demand, profit and loss. Much modern economic reasoning makes an assumption I have come to believe is false: civilization consists of individuals running around making autonomous, self-interested decisions. It also places material considerations ahead of all others, and thus inclines us towards materialism as a worldview. Civilization consists of systems of a variety of sorts, ranging from personal habits and inclinations to familial systems to institutional arrangements and larger community practices, in which flesh-and-blood human beings are embedded, and in which they interact with a variety of motivations, some self-interested, some not. We need to choose: are we defending the interests of an abstraction, homo economicus (the individual as exclusively economic actor, always self-interested, considered apart from the systems in which he or she is embedded), or actual flesh-and-blood persons as they are found: “the world as we find it”? This includes the information available to them, and whether they have a right to information about the potential harm involved in using a particular product. (In the interests of full disclosure, I have never been tempted by cigarette smoking; and I believe I have a right to know what is in the food I consume, and to refuse to eat certain foods if they contain certain ingredients, such as MSG and GMOs.)

In sum: the NAP does not give us a complete moral perspective for a complex, advanced civilization. It works under the arguably false assumption that all persons can assume full responsibility for their lives all of the time, and can (should?) suffer the consequences when they do not. It assumes that we act atomistically, and that we can disregard the effects our actions have on others. It is not rich enough to handle the third party problem: if A transacts with B voluntarily and the results of that transaction negatively impact on person or population P or perhaps cause other externalities (pollution or other damage to the environment being one obvious possibility), then we have a problem since obviously P did not sign off in full knowledge or voluntarily on the transaction between A and B. I would be happy to see a response to this that does not boil down to a question-begging, “Tough luck if you can’t or didn’t assume responsibility; or if you’re in population P. The free market should nevertheless trump these other considerations because of its efficiency.”

Now it is true that the trade agreements I criticized were hardly actual free trade agreements a libertarian could support; they were, to use the libertarian vocabulary, statist through and through. But herein lies the problem: given the realities of today’s world, how would we create actual free trade agreements that were meaningful, or that had any force?

What realities are we talking about? The realities of power. Libertarians appear not to understand (or believe it unimportant) where the locus of power really is. They believe government is the root of all evil, because it represents the power of the sword: the power to coerce and thus to systematically violate the NAP. As a non-producer, they argue, it can survive no other way. The actual locus of power, however, is, and has been for many decades now (if not centuries), global corporations. As economic actors they represent the power of the purse: the power to buy governments, control markets, and dictate terms of employment to the masses by determining what skills are wanted, etc. They have never wanted free market competition, and thus have always worked out ways of circumscribing or constraining actual markets. Their prevalence is not compatible with either a democracy or a republic. They represent plutocratic oligarchy, the system that has arguably taken hold across all of Western civilization — as a recent study has shown in great detail.

I would argue that the separate disciplines of economics and political science should be abolished. We should go back to the political economy that was eliminated in the late 1800s by the Rockefeller-endowed University of Chicago to hide the political actions taken by ostensibly economic entities, often to thwart the aspirations of competitors and create protections for themselves while continuing to use the language of free markets.

How does one fight plutocratic oligarchy? One cannot do so by refusing to name it, or by misunderstanding where actual power lies!

We cannot create real free trade agreements, as we do not live in the kind of world where such agreements are possible.

So where do we go from here?

There probably are no systemic solutions other than to allow the plutocratic oligarchic era to run its course and eventually collapse, as empires inevitably do. In the meantime, we can arrange our lives to minimize our contact with it, while agreeing to deal with one another peacefully and voluntarily, and take the moral responsibility to ensure, as much as possible, that our dealings do not harm others. And to fulfill our perceived obligations to others, be they family or stranger. That, I submit, would be easier done in a world of smaller systems, organizations, and communities. We touched on this idea in the material a few weeks ago on Leopold Kohr, who had some of the most interesting ideas along these lines. Remembering that small is better wouldn’t solve all the problems, many of which are products of human nature itself (our default setting is neither automatic rationality nor morality), but it would make them easier to manage.

But for now, it seems that those who would try to reconcile such abstract principles as the NAP with the realities of real human beings living in a complex, advanced civilization where some cannot take full responsibility for their lives, where strangers’ interactions have the potential to harm others on a daily basis, and where massive irresponsibility and outright stupidity are sometimes (often?) the norm, have their work cut out for them.

Posted in Libertarianism, Political Economy

Modern Moral Philosophy (Part I, Being a Longwinded Inquiry into Whether There Are Any Such Things As Moral Facts, Preliminary to Our Investigation of the Libertarian Non-Aggression Principle (NAP))

Introduction: this post was inspired by a thread on Facebook in which I participated. It raised issues that couldn’t be handled without more depth than is possible in one or two already-lengthy comments. The main topic was the viability and defensibility of the libertarian non-aggression principle (NAP), also accepted by anarcho-capitalists, as well as the broader question of whether morality is a matter of knowledge, opinion, or something else. The former seems to me to presuppose that morality is a matter of knowledge, but others might disagree. To a philosopher (lost generation or otherwise), the question of whether morality is revealed, discovered, or invented is one of the largest in the history of Western philosophy. Professional philosophers will criticize the longwindedness and say this discussion breaks no new ground. For them it might not. I am unaware of new treatments of the subject since the turn of the millennium, which might be a product of my status as an outsider. Be that as it may, I hold no gun to anyone’s head as I offer what I hope is a reasonably lucid discussion of the main topics. Because of how this material grew in size I finally had to divide it into two parts to make it manageable. Part One provides the conceptual and theoretical background. Part Two looks at the NAP and related applications.

“Can virtue be taught?” asked Meno of Socrates in the Platonic dialogue bearing his name. Plato thought virtue was recollected from the soul’s pre-mortal existence. But whether recollected or gained by some other means, this assumes there is something definitive to recollect or gain — something standing above “mere opinion” and cultural consensus. Today, many people, including scholars of undeniable intelligence and insight, hold that morality is a matter of mere opinion — or, at most, a cultural artifact. The most sophisticated versions of this idea appear to hold that morality’s purpose is to increase a culture’s chances for survival, a product of evolution with no higher significance. In this post we will explore some of the consequences of the idea that there are no moral truths akin to scientific truths, aside from the higher-order claim that morality is a cultural artifact. Academic opinion is divided, but were we to seek a semblance of consensus, I am reasonably sure it would be on the side of morality as cultural artifact. Since most academic philosophers are agnostics or atheists, they aren’t going to see it as revealed; and the idea of its being discovered issues in a sense of permanence, which lends too much comfort to views standing in the way of the radical changes progressives seek. What are the progressives afraid of? They look back at the past and see moral authority as puritanism emanating from the Church: a source of repression and moral authoritarianism rather than enlightenment. Moral absolutism, in that case, is to be viewed with suspicion as a cover for authoritarians who rule through fear and naturally want to maintain their dominance.

Let us work out the background. Modern Moral Philosophy (from which this post derives its title) was the name of a slim but significant text by W.D. Hudson (1970), written back in the day when textbooks were readable and relatively inexpensive. It began with a distinction that is useful despite somewhat unfortunate terminology: moralism, according to Hudson, tries to tell us what to do, i.e., ethics in the traditional sense. Traditional ethical theories include Christian ethics, Aristotelian virtue theory, Kant’s deontology, Mill’s utilitarianism, and so on. Fans of Ayn Rand will want to add her brand of ethical egoism. Moral philosophy, also called meta-ethics in this context, is the province of analytic philosophers: its purpose is to describe what we are doing when we talk about what we ought to do. Philosophers, in other words, are not moralists despite their past; they should confine themselves to (what else?) moral philosophy. Their questions are not, What should we do? but rather, What do key moral terms such as good refer to (if anything); are the propositions of ethics true or false; can ethical conclusions be rationally justified on the basis of evidence? I will refer to Hudson’s moralism as first-order ethical discourse and his moral philosophy as second-order ethical discourse. I happen to believe both are important. For well over 2,000 years, philosophers have been interested in what we ought to do, whether the source of what we ought to do is a supernatural agency such as the Christian God or whether it has some other source, including the idea that living virtuously is an end in itself. Hence the potential for lively first-order ethical discourse. Traditional systematic philosophers also wanted rational justification for their conclusions; hence the importance of second-order questions, especially if we believe our ethical pronouncements can be rationally justified. Both Plato and Aristotle would have agreed, by the way.
When teaching about ethics, I used to distinguish ethics, the discipline, from morality, its subject matter. Ethics, in this case, is the branch of axiological philosophy concerned with the theory of morality (conceived as covering both first- and second-order discourse).

Meta-ethics, emphasizing language and reasoning about morality, tends to divide into cognitivism and noncognitivism. Cognitivists conclude that the propositions of ethics do refer, and can be true or false: that doing ethics is a matter of reasoning, and that moral conclusions are therefore more than matters of opinion. Cognitivism therefore stands at the foundation, so to speak, of most first-order ethical discourse. Noncognitivism rejects this. Beginning with “Hume’s fork” (the division of language into is-statements and ought-statements, the latter not logically derivable from the former), noncognitivists conclude that morality does not consist of matters of fact that can be debated rationally. It is more a matter of passion or sentiment (Hume used both terms), or emotion, or persuasion. Hume was far from believing moral discourse was not useful in society; its primary purpose was social utility. But as philosophers writing two centuries later construed it, this meant that where first-order ethical discourse is concerned, philosophical analysis has to give way to psychology and sociology, because there are no such things as moral facts about which we can reason, period. Moral fact is, indeed, a contradiction in terms, a failure to grasp the meaning of “Hume’s fork.”

Logical positivists, relying on Kant’s analytic-synthetic dichotomy, regarded all cognitively meaningful propositions as either factual (synthetic), or formal and definitional (analytic). The former are true or false by empirical fact, shown as such by natural science. (To keep this discussion somewhat manageable I am bypassing theories of truth.) The latter are true because of the meanings of the signs they contain, and are not factually informative. Ethical propositions, they noticed, didn’t seem to fall into either category. Statements of the form, X is good, or, I ought to do A, could not be verified, tested, confirmed, falsified, etc., by any natural science. Nor were they true by definition. Logical positivists concluded that they had emotive instead of cognitive meaning: ethical emotivism, according to A.J. Ayer, who defended the position forcefully in his Language, Truth and Logic (1936). The idea seemed to make little sense of our actual moral lives and so didn’t last long (even Ayer soon abandoned it). So for noncognitivists, what do ethical propositions do? Hume had the basic idea right. They do involve passions, but accomplish something in society, and this is all we need to clarify. This was easier said than done, however. Another possibility held that the utterance of an ethical proposition is an attempt to prescribe for others (and, presumably, oneself): prescriptivism. A variation on this theme held that such propositions are efforts to propagandize and control, establishing moral authority by psychological means. A final variation, a “self-prescriptivism,” might hold that the adoption of a set of ethical propositions by oneself constitutes an attempt to legislate for oneself without making the assumption that one can presume to legislate for others: a kind of ethical subjectivism.

In the last analysis, though, noncognitivists cannot really “do” ethics. They do not see ethical propositions as true or false. They see them as primarily about influencing behavior. They do not believe there is any such thing as a single morality that applies to all persons. The most they can say, to be consistent, is, “This is where I stand.” The most they can assert or imply (as, e.g., prescriptivists) is, “You ought to stand here, too.” They will say that these are the consequences of rejecting absolutism, the old-fangled view that “morality” is something “out there” somewhere, whatever this is supposed to mean.

Such results are closely tied to cultural relativity, and to ethical relativism. Cultural relativity, which was popular among cultural anthropologists for a time, observes — correctly — that different cultures have different moral belief systems. Ethical relativism infers from this that morality is a cultural artifact. (Ethical absolutism, in this case, is the thesis that morality is not a cultural artifact but is independent of culture, whether revealed or discovered.) Ethical relativism does not maintain that morality is a matter of personal opinion. This is a common misunderstanding. Ethical relativism is compatible with the idea that a common morality is necessary to hold a culture together, and so resurrects the idea of moral authority. The moral authorities are said to “know” something of “morality,” or have special “insight” not available to the unwashed masses. But what is this special “knowledge” or “insight”? The moral authorities will rarely if ever stand exclusively on their own feet. They will appeal to God or the gods, or — in modern times — to Progress (perhaps — the grip of that one may be slipping). One suspects that they have, indeed, gained command over a vocabulary and skillfully given it emotive force. This will generally lead to social sanction against disapproved-of ethical opinions, e.g., those today deemed politically incorrect. Appeal to the majority, however, has never satisfied logical minds; so when such ploys fail, all that truly distinguishes moral authorities is that they can employ majoritarian numbers or sometimes even police power to enforce their opinions. A relativist society is very likely not a free society. Moreover, those asking for the rational grounding of the opinions of the moral authorities are likely to be regarded as dangerous troublemakers who need to be suppressed — as was Socrates.

Hence the chief danger of noncognitivism: it doesn’t just render ethics a matter of opinion, or even shared opinion for those who happen to agree to “stand” in the same place; it sets the stage for a moral totalitarianism — the very thing the progressives see as abominable in Christian ethics and other forms of absolutism. It should be clear, though, that one cannot really debate morality if it reduces to personal opinion, or even consensus. If my opinion is that abortion is immoral because it takes a human life and your opinion is that it is acceptable because it is a woman’s choice, then if noncognitivism is true there is no further ground we can appeal to: it is opinion “all the way down.” All we can do is go to court over it, let the judge decide, the result being a cultural decision where one side wins and the other loses. We are back to moral totalitarianism.

But consider rape. Surely everyone in civilized society believes rape is wrong, and that the wrong is not merely a legal wrong! There is no debate: no one that I know of will argue that rape is acceptable. Is it merely the law that impels us to reject it as abhorrent, though? How could we even conceive of its being acceptable if we were in a society whose law said nothing, or perhaps allowed members of the elite to rape with impunity (as is the case in a few actual cultures)? So is our rejection of rape still merely a cultural decision and opinion? This implies the possibility that in those other cultures with contrary opinions, rape is sometimes acceptable, and again, in the last analysis, that all we have is opinion “all the way down.” If we deny that such a culture could possibly be moral in its practices, we are granting the idea that there is grounding for morality outside judicial opinion, cultural consensus, or other human authority.

Hence the usefulness of revisiting the likelihood that cognitivism is the more defensible of the two, just based on what we use moral language to do and how we argue our cases. If cognitivism is true, then ethics is not mere opinion, decision, or consensus; moral truths are discovered, or revealed and disclosed; they are not merely invented, decided, or propagandized. Nor did they merely evolve, as here, too, there would need to be reasons why certain kinds of moral codes evolved while their contraries did not. For example, in no culture I know of is wanton lying to one’s fellows a moral imperative (it may be a different matter for strangers and possible enemies, of course). Obviously no culture could survive with that sort of imperative; but this implies a connection between morality and physical reality, the world around us, and conditions for our general survival that would require that morality be more than opinion or decision or consensus. Cognitivism makes sense of our actual moral lives, how morality functions, and how it works in the sometimes-nasty issues that emerge when cultures come into contact or when we speak of progress being made over past stages of our own culture (e.g., the U.S.’s repudiation of slavery). It even makes sense of the deep disagreements over, e.g., abortion, by noting the logical presumption of a moral fact of the matter over which the disputants are disagreeing.

Cognitivism, understood metaphysically, equates to moral realism, the idea that there is a world of moral facts, or truths, not themselves physical but perhaps predicated on physical states of affairs or metaphysical truths which ethics seeks to elucidate. Noncognitivism denies all this. To sum up the predicament of the noncognitivist or moral antirealist: even what we would take to be the most heinous offenses against persons (e.g., rape, or enslavement) come down to matters of opinion, whether personal or cultural, or imposed using police power by a moral authority, as there are no facts of the matter, just strong feelings or predispositions. While the collective opinion and sanction of society or even moral totalitarianism may be enough for the legal positivist mind, it should never be enough for the philosopher who quietly points out that the moral seriousness of our condemnation of rape, or slavery, or genocide, is severely undermined if it reduces to ungrounded consensus or authority — if, indeed, we are in earnest with philosophical inquiry. Those who see philosophy as a debaters’ game played by academics will, of course, disagree. But are we morally serious or not?

So to sum up, when we say that certain actions, other things being equal, have varying degrees of moral rightness (like telling the truth, keeping one’s promises and agreements, securing the needs of elderly relatives who have become infirm, etc.), or that others have varying degrees of moral heinousness (like rape, slavery, telling lies, breaking contracts, etc.), we do not intend to be expressing subjective opinions. Nor are we simply appealing to the authority of societal consensus. The debate over abortion is not a trading of opinions but a debate over whose opinion is better, i.e., grounded in moral fact, or moral reality. We mean to say that as a matter of fact, certain actions are right and should be done while others are wrong and should not be done. Cognitivism rejects “Hume’s fork,” in other words, at least in the sense of there being an absolute fact-value dichotomy with cognitively meaningful and true propositions only being possible for the former. It asserts that there are moral facts, about which we can utter true sentences. That is, when one of us asserts, “Rape is a morally heinous act,” she means to say just that: as a matter of fact (not opinion), rape is morally heinous, incompatible with a moral view of the universe in which there are definite rights and wrongs.

In this case, is the libertarian non-aggression principle a moral fact? This will depend, as with all basic moral propositions, on what the NAP asserts, what it implies and does not imply, and how it holds up in our moral lives.

(Part Two will appear next week.)

Posted in Culture, Libertarianism, Philosophy | 1 Comment

Not-Quite-Random Thoughts on Conservatism, Anarchism, and the Breakdown of Modern Civilization

Yesterday I found myself outlining an article entitled “Needed: A New Russell Kirk.” Russell Kirk (1918 – 1994) was a conservative philosopher and author of The Conservative Mind (1953), The Roots of American Order (1991) and other books, including some gothic-style fiction. He laid out ten principles conservatives ought to believe. How many of those who presently call themselves conservatives are even familiar with Kirk’s principles, much less understand them, much less believe them, is open to debate. Conspicuously absent from Kirk’s list was support for the exclusively economic view of the world so prevalent today, which lends its support to what presently passes for capitalism. While endorsing the idea that freedom and property are inextricably linked, and while promoting voluntary interactions within the community, nowhere does Kirk endorse absolutely free markets or free trade, or say that because there is a demand for something, the market ought to supply it. This is because economic freedom presupposes social and ultimately moral responsibility. Without these, it breeds the massive accumulations of wealth we have seen in recent years at the top, with increasing dysfunction and misery at the bottom, and in the middle.

I do not know if that essay will get written or not. I have a lot on my plate these days (beginning with the stack of tests to grade sitting here). What I believe we really need is another Leopold Kohr. Kohr, as I noted in an earlier post, was a philosophical anarchist, not a conservative, although their views overlapped in crucial areas. Whether he believed in the “enduring moral order” Kirk defended I am unsure; it wasn’t his main focus. Kohr assuredly did believe in the necessity of a morality in which we respect life and each other’s space. He did not see the need for a mono-culture, and opposed globalization. He would have had no patience with what “neoconservatism” became in the 1990s, and even less with what it has become since: the biggest war machine on the planet. Kohr saw the fundamental problems of modern civilization as problems of size and scale, not structure. Kohr would look at situations such as the riots in Baltimore and their background in the combination of a failed progressivism and a burgeoning police state which aids and abets sociopaths. He would say that all these are products of a society grown too big, too centralized, and hence dysfunctional. Those at the top have no idea of the lives of those at the bottom, or in the middle.

The systemic logic of bigness and centralization began with a slow accumulation of wealth and power regardless of the specifics of ideology. This was well underway before the twentieth century even got here. Study the rise of the Rockefeller dynasty. Bigness and centralization led to dependency of various sorts. What it bred was not merely the familiar dependency of the welfare state. If an employee in a corporation cannot leave his or her job either because other jobs are not available or because his skills at doing other jobs have atrophied, and if he is paid too poorly to have saved enough to obtain more education, is he or she not dependent? From the 1800s to the 1900s, the U.S. went from approximately 90% of people being self-employed to around 5% being self-employed. Do the systems of industrial civilization itself not encourage this sort of dependence? This logic also coerces a specific form of secularization, as the workaday need for scheduling and a focus on tasks to avoid chaos overwhelms any sense of what endures in time. One’s worship of God, if one worships God, is left for Sunday. The rest of the week is taken up by wheeling and dealing for the bosses and struggling for survival for the masses. Readers might find it interesting to examine this meditation on the true purpose and effects of the 40-hour work week.

As the twentieth century progressed we saw an undermining of crucial family systems (helped along by no-fault divorce), the developing of an urban environment of anonymity in which sociopaths can thrive, increasing societal dysfunction, and policies that only made the dysfunction worse because they often treated symptoms with quick fixes rather than seeking to identify and correct the real problems. Today, of course, we have the full corruption of a bought-and-paid-for political class that could still deliver a “choice” in 2016 between a member of the House of Clinton and a member of the House of Bush (arguably two of the most corrupt dynasties in American politics). There is massive denial that something has gone seriously wrong, and increasingly desperate efforts to avoid the truth. Add to all this the economic dislocations that really began decades ago when globalization was just getting started, but which worsened in 2008, and we have a collapse in moral sensibilities generally, and the increasingly repressive authority structures we have seen since the 9/11 attacks. Fundamentally good and decent people have withdrawn or retreated into the wilderness to “tend their own gardens” as it were. With real leadership gone, collapse becomes inevitable. All we can say is that we do not know when. Our present-day systems are extremely complex, at a level never before seen. What we can say: the U.S.’s massive debts, both the official national debt and other, off-the-books debts, are unpayable. Default will happen. It’s in the mathematics, which, unlike politicians, never lies.

The U.S., and indeed much of Western civilization, is at the point where most moral sensibilities are gone from the public square, displaced by a do-your-own-thing hedonism. Authority structures, unsurprisingly, are increasing without bound as those behind them know how to exploit moral subjectivism and hedonism for their own purposes. Violence increases both with authority structures and with undisciplined reactions to them. The independent-minded among us have already withdrawn our services, whether by becoming expats or simply by choosing to live on the margins of society. We cannot look to academia for solutions. It is mired in worsening problems of its own, from the microspecialization of most academics to the corporatization of university systems themselves, resulting in organizations whose very output, in terms of ideas, has grown less and less reliable over time.

A massive economic downturn is coming. Of this we can be sure. With civil unrest very likely, the federal government is already preparing for what will amount to instituting totalitarianism. No one in power will call it that, of course. They might not even suspend the Constitution! Semantics, too, has become the plaything of propagandists and deniers. Whatever is put in place won’t work in the long run, of course. It will only work for a time, and during that time it will bring about a huge level of misery. But the miserable won’t be able to say they weren’t warned.

What we are talking about is nothing less than the breakdown of modern civilization, and barring the development of a new source of energy able to power a technological society in the future, a lower standard of living. Such thoughts need not engender pessimism. Empires have risen, peaked, then gone into decline and collapsed many times before. We just happen to be in yet another period of decline now, with collapse likely to follow. Although the collapse of authority structures all around a population of hopeless dependents will mean a period of anarchy, those who can prepare properly and weather this period in their private wildernesses wherever they may be, may discover that freedom and the rebuilding of cooperative systems have again become possible. They may also enjoy breathing cleaner air, and drinking cleaner water! And if you are growing your own food (or purchasing it from someone who is), you have far less worry about what is in it!

Will we get it right next time? That depends on whether we humans ever really learn anything. We know the line from Santayana. History is not especially encouraging on this point.

Posted in Political Economy, Where is Civilization Going? | Leave a comment

What Is a Liberal Arts Education For?

Liberal arts education has suffered from increasing neglect for a very long time — for at least 40 years, possibly longer. While it continues to exist in a few private liberal arts colleges, obviously, it long ago ceased to be the prevailing philosophy of education either in higher education generally in the U.S. or in society at large. Its influence, i.e., its capacity to affect the national conversation, has waned. The neglect, whether in the form of defunded programs, the poor opportunities for liberal arts graduates who are routinely warned away from certain majors, or the general disdain in contemporary culture for the “products of the intellect,” has helped hollow out philosophy as well as other disciplines associated with the liberal arts (often conflated with humanities).

Some of the harm just within the humanities has been self-inflicted, as we’ve seen in previous posts. Be this as it may, it is easy to see that in many respects, liberal arts education in the larger sense has lost its way because, having been overwhelmed by the seemingly more “successful” STEM subjects (science, technology, engineering, math) it has lost its identity. What is it that liberal arts actually contribute to civilization? This is a fair question, and I doubt that many faculty members in these disciplines could give it a clear answer. It’s possible that if you asked ten professors in the humanities today what their disciplines actually contribute to society, you would get ten different answers, assuming they didn’t dismiss the question out of hand as the product of philistinism. The question is worth revisiting, because it can be answered. Fortunately we don’t have to begin from scratch. Authors as well-known as Fareed Zakaria have defended liberal arts education (Zakaria’s most recent book is entitled In Defense of a Liberal Education) and decried education reduced to technical training, calling this emphasis “dangerous in today’s world.”

First, what exactly is a liberal arts education? We can start by analyzing the language. What does the liberal in liberal arts mean? It does not mean liberal as in liberal versus conservative. There is no such thing as a “conservative arts education.” Hopefully no one was tempted by anything so simple-minded. The liberal in liberal arts is derived from the Latin verb liberare, meaning to free. The classical sense of liberal honors this derivation and was in vogue in the 1800s but is mostly defunct today. Arts should likewise not be confused with art in the sense of drawing and painting, or the fine arts, or the performing arts, all of which are recent developments. This word, too, has a Latin root: ars, a generic word meaning skill. Words like artisan and artifact have the same root. Valerie Strauss, drawing on Gerald Greenberg’s material, does a good job of sorting all this out here. What, then, do the liberal arts do? With a nod to Morpheus, in The Matrix, they are intended to develop the mental skills to free your mind. They indeed open a door; the student must choose to walk through it. What, then, is behind that door? What frees your mind? And from what?

If one goes back far enough in time, a liberal arts education involved a progression through seven distinct domains. The first three were known as the trivium: grammar, logic, and rhetoric, as Aristotle used those terms. The second four were called the quadrivium: arithmetic, geometry, music, and astronomy. Mastery of these was prerequisite for the serious study of philosophy and theology (fancy that!). The liberal arts were contrasted with the mechanical arts, which included agriculture, masonry, cooking, weaving, trade, weapons training, and so on. Some people were more suited for the former; others, for one or more of the latter. This is still true today, obviously.

A liberal arts education thus involves an integrated curriculum — a structured set of courses in writing, humanities (history, philosophy, literature, foreign languages), social inquiry (political economy — prior to the modern artificial division of the two into “economics” and “political science” — sociology, anthropology, etc.), mathematics from basic arithmetic at least through algebra and geometry, and essential sciences (physics, chemistry, biology). It is important that content mastered in one domain not contradict content presented in other domains. A liberal arts education should have the intent of bringing about a certain turn of mind: able to think critically and judge independently; respectful of learning, truth, and morality; mindful of the differences that exist between people and how societies have changed through time; willing to ask the questions that need asking; capable of making decisions and understanding their consequences both short and long term. When some of us speak of liberal arts as central to what we call real education, this is what we are talking about. It should be clear how a liberal arts education enables students to examine the ideas they encounter, whether from politicians, employers (would-be or actual), journalists, etc., critically. Are their ideas logical? Do they fit one’s own experience? What do others with different experiences have to say? Does history suggest that a given idea has been tried before and found wanting? And so on.

So individual persons can free their minds from untested dogmas and slogans of various sorts through a liberal arts education. This will clearly cause problems for that minority in power which does not want its policies examined in detail and questioned, and we are not far from an explanation of the dwindling support for the liberal arts in our centralized society. Can liberal arts education free not just our minds but our society? This is a much more difficult question. For a society to remain free for longer than a generation, it is necessary that a critical mass of any population have this turn of mind and be able to transfer it to a critical mass of their children.

By a critical mass I mean a group sufficiently large and articulate to move the larger body and gain respectful notice from that minority in power. An educated critical mass will be able to draw on history for examples of what happens when the problems posed by power are neglected, and it will be able to communicate its concerns to a large enough audience to have a larger conversation: those in power may actually fear being deposed from their comfortable offices, possibly by those working within the rules. The members of this critical mass will be independent-minded in more than thought. They will be able to take care of themselves by having achieved as much economic independence as is possible, as opposed to dependence on employment by others. They will have some understanding of what it takes to make a free society work, achieving whatever balances are possible and necessary, at least for a time, between personal aspirations and community needs. Determining the size of this critical mass would be a worthy pursuit, as would determining limits on the size of the community. We are not referring to the majority of the overall population in either case, obviously. The majority never assumes leadership responsibility, for it is typically incapable mentally of doing so. Those in the majority can generally direct their own lives competently if trained in one of the mechanical arts, but the larger systems that circumscribe their lives must be put and kept in place for them. Only a benign critical mass can see to it that abuses of these are kept to a minimum.

We’ve sketched a kind of ideal here, and that can have hazards of its own. What can we do in the real world we are stuck in? Liberal arts education today can teach students writing and critical thinking skills steeped in close reading of important texts, ranging from ancient Greek philosophers like Plato and Aristotle to influential modern thinkers from Bertrand Russell to Thomas S. Kuhn: examining patterns of reasoning, identifying assumptions both stated and unstated, and then writing down one’s own responses and questions clearly, in order to size up these thinkers’ accomplishments and the reasons they are worth knowing about. Liberal arts education would teach how fallacious reasoning works, and how rhetoric can be used to play on people’s emotions and lead them by their noses. In other words, it would do many of the same things as its ancestor, with an eye to creating that critical mass. If successful, it would teach appreciation for the ideas of others on their own terms, and not merely as seen through the lenses of one’s own assumptions. It wouldn’t shy away from the difficult problems of how both physical nature and the human worlds work: what physical and biological laws govern the natural order — both because some of us want to know these things out of sheer intellectual curiosity, and because the knowledge of specifics might bear on situations or prove useful in ways we have not yet envisioned. For this last reason, pure intellectual curiosity should be encouraged, not belittled. We need to know as best we can, finally, what motivates different groups of human beings. What do they want, and what do they need? This includes peoples living on other parts of the planet. What would make their lives better? How can we work together to solve difficult problems?

Ideally, then, an educational system focused on liberal arts education should enable as many people as possible to be productive, contributing members of society who have a sense of community and hence of others, while simultaneously accepting responsibility in an environment where we often will not agree on what to do. Those with a real education in this sense will understand such concepts as moderation in all things, an idea also going back to Aristotle’s concept of the golden mean; they will not confuse liberty with license; they will be very cautious with points of view held as absolute. They will see when a certain policy is not working, or has brought about injustices, and be able to make principled, mid-course corrections that themselves can be checked so as not to lead to abuses and excesses.

Where, in this case, does STEM education fit in? Obviously it is meeting genuine needs, and should not be belittled as philistinism. STEM education is a clear descendant, for an advanced civilization, of the mechanical arts mentioned above, and can be developed once a liberal arts core (grammar, logic, mathematics, etc.) has been satisfied. The problem today is that STEM education has been ripped free of its educational moorings, these (with the exception of mathematics) dismissed as unnecessary. Education then becomes education for jobs. This is clearly not enough, one reason being a free and humane civilization’s need for that critical mass. Fareed Zakaria identified an important and very down-to-earth reason why serious education cannot be mere job training. Technology is changing sufficiently rapidly that by the time a student finishes four years of university education, the job he or she had in mind four years before may be obsolete, and he or she will not have the qualifications for what is available. It would be far better to educate students to think, analyze, plan, strategize, evaluate, etc. — all things a liberal arts education equips a person to do — rather than train them for specific employment. A workplace in turmoil needs such people, as they can think clearly, recognize problems quickly, and work to solve them effectively. Is this not more likely to benefit employers?

Liberal arts educated people will not, of course, be satisfied with many of the employment options available today. They will not want to be temps, for example, as they will know when resources for full-time work with decent pay are available (example: large universities). A solution some will suggest is to work for oneself. That is only a solution for those with something immediate to sell, and will prove to be an exercise in frustration and futility otherwise, for they will know this is not the best use of their skills. They will know, moreover, that many of those traits necessary to address some of our worst long-term problems have been ratcheted down within authority structures. These problems have to do with the consolidation of wealth and power in a few key organizations (mostly leviathan corporations) and extended families, the idea that central banks can control economies by controlling the money supply without causing long-term economic dislocations, the relationship between extractive capitalism and the larger environment, and so on. Educated people will understand that the marketplace has a very important place in a free community but cannot solve every problem. Allow it to solve the problems it is equipped to solve, such as the production and distribution of foodstuffs and other basic items, but recognize its limits, as (for example) with people who lose the ability to participate, or with externalities such as the impact of pollution on surrounding ecosystems.

What, then, is a liberal arts education for? It is for those who wish to have free minds and live free lives, in free communities where freedom includes not just self but incorporates a sense of responsibility for oneself, those around oneself, and the world at large. Today, of course, this is increasingly difficult. A vast array of forces — economic, political, cultural, and so on — is working against this kind of education. Here is where the prevailing philosophy of education emphasizing technical training, or training for business skills, goes off course. Employers want skilled workers, but are baffled when university graduates cannot read and understand simple instructions. People generally want to be able to send representatives to Washington (or their state capitols) who represent their interests, and are frustrated when those considered “electable” continue to represent the interests of the rich. The culture seems deviant and out of control, manifesting a clash of worldviews one cannot understand without knowing the relevant history and philosophy (to oversimplify, as the matter deserves a separate lengthy post: Christianity versus secular materialism). What we need is a national conversation on how liberal arts education can be tailored to address these and other issues tearing at the foundations of our civilization.

Having defended liberal arts education, I definitely believe there are certain courses and majors now associated with it that students ought to avoid like the plague. They should stay away from women’s studies (or gender studies, as it is now sometimes called) and anything focusing on a single minority group. Why? Because nearly all are exercises in collective grievance, a sort of pseudo-liberal educational affirmative action. Sadly, today’s students need to look closely even at traditional subjects like comparative literature and foreign languages. They should find out who the faculty member is, and if that person has a reputation for using his/her podium as a launching pad for political activism. For example, if a professor considers Thomas Jefferson’s having owned slaves more important than his having authored the Declaration of Independence, then to get a real education, steer clear! Sadly, this is probably truer in “private” liberal arts colleges than it is in “public” universities.

A liberal arts education should free one’s mind from untested dogma and unquestioned authority. It should prepare a person to be part of that critical mass that places or maintains checks on power. Pursuing such an education all the way to an advanced degree is not for everyone, surely; but neither is it the mere indulgence of a self-proclaimed and somewhat narcissistic intellectual elite. It is a condition for sustained freedom, in any sense of the term, conjoined with necessary community and worldly responsibility. Such a critical mass exists nowhere today, and has not existed for a long time. There is hope, however, within the unrest currently existing in the world. In the U.S., some of this unrest is found in Tea Party conservatism, which does not, in my opinion, deserve the ridicule it often receives, and in what is left of the Occupy movement. Both are conscious of control by power elites with unearned privileges who control most of the world’s wealth and resources. Opposition is forming, though so far it is not unified. If the rising tide of protest against the adjunctification of American neoliberal universities joins with the growing unrest over the enormous gulf between the power elites and the majority that must answer to their dictates or risk war and starvation, there is hope that in the near future such a critical mass might be built.

Posted in Higher Education Generally

A “Rape on Campus”? Radical Feminism & the Rolling Stone Fiasco

[Note: I’d planned on doing a piece entitled “What Is a Liberal Arts Education For?” But the culmination of the events described here, and their implication for the sorry state of both higher education and popular journalism today, seemed more urgent, so I will post the Liberal Arts Education piece next week.]

“A Rape on Campus: A Brutal Assault and Struggle for Justice at UVA,” the lurid account of an alleged gang rape at a University of Virginia fraternity party unleashed on the world late last year by Rolling Stone (Nov. 19), is now completely discredited. Rolling Stone has taken the article down, replacing it with a scathing review penned at the Columbia University Graduate School of Journalism (CUGSJ, report published April 5) which outlines Rolling Stone‘s multiple blunders.

In retrospect, what went wrong in the largest sense was that “A Rape on Campus” ever made it into print. I mean this in all seriousness, even though any radical feminists who happen to wander in here will probably stop reading at this point. (I doubt there are many academic radicals reading my material, anyway.)

The Rolling Stone article is down, but fortunately I archived it before Rolling Stone had time to make it disappear. Here, in the words of writer Sabrina Rubin Erdely, is what supposedly happened at the Phi Kappa Psi fraternity house at University of Virginia on the night of September 28, 2012:

Sipping from a plastic cup, Jackie grimaced, then discreetly spilled her spiked punch onto the sludgy fraternity-house floor. The University of Virginia freshman wasn’t a drinker, but she didn’t want to seem like a goody-goody at her very first frat party – and she especially wanted to impress her date, the handsome Phi Kappa Psi brother who’d brought her here. Jackie was sober but giddy with discovery as she looked around the room crammed with rowdy strangers guzzling beer and dancing to loud music. She smiled at her date, whom we’ll call Drew, a good-looking junior – or in UVA parlance, a third-year – and he smiled enticingly back.

“Want to go upstairs, where it’s quieter?” Drew shouted into her ear, and Jackie’s heart quickened. She took his hand as he threaded them out of the crowded room and up a staircase….

…. Drew ushered Jackie into a bedroom, shutting the door behind them. The room was pitch-black inside. Jackie blindly turned toward Drew, uttering his name. At that same moment, she says, she detected movement in the room – and felt someone bump into her. Jackie began to scream.

“Shut up,” she heard a man’s voice say as a body barreled into her, tripping her backward and sending them both crashing through a low glass table. There was a heavy person on top of her, spreading open her thighs, and another person kneeling on her hair, hands pinning down her arms, sharp shards digging into her back, and excited male voices rising all around her. When yet another hand clamped over her mouth, Jackie bit it, and the hand became a fist that punched her in the face. The men surrounding her began to laugh. For a hopeful moment Jackie wondered if this wasn’t some collegiate prank. Perhaps at any second someone would flick on the lights and they’d return to the party.

“Grab its [sic.] m*****king leg,” she heard a voice say. And that’s when Jackie knew she was going to be raped.

She remembers every moment of the next three hours of agony, during which, she says, seven men took turns raping her, while two more – her date, Drew, and another man – gave instruction and encouragement. She remembers how the spectators swigged beers, and how they called each other nicknames like Armpit and Blanket. She remembers the men’s heft and their sour reek of alcohol mixed with the pungency of marijuana. Most of all, Jackie remembers the pain and the pounding that went on and on.

As the last man sank onto her, Jackie was startled to recognize him: He attended her tiny anthropology discussion group. He looked like he was going to cry or puke as he told the crowd he couldn’t get it up. “Pussy!” the other men jeered. “What, she’s not hot enough for you?” Then they egged him on: “Don’t you want to be a brother?” “We all had to do it, so you do, too.” Someone handed her classmate a beer bottle. Jackie stared at the young man, silently begging him not to go through with it. And as he shoved the bottle into her, Jackie fell into a stupor, mentally untethering from the brutal tableau, her mind leaving behind the bleeding body under assault on the floor.

When Jackie came to, she was alone. It was after 3 a.m. She painfully rose from the floor and ran shoeless from the room. She emerged to discover the Phi Psi party still surreally under way, but if anyone noticed the barefoot, disheveled girl hurrying down a side staircase, face beaten, dress spattered with blood, they said nothing. Disoriented, Jackie burst out a side door, realized she was lost, and dialed a friend, screaming, “Something bad happened. I need you to come and find me!” Minutes later, her three best friends on campus – two boys and a girl (whose names are changed) – arrived to find Jackie on a nearby street corner, shaking. “What did they do to you? What did they make you do?” Jackie recalls her friend Randall demanding. Jackie shook her head and began to cry. The group looked at one another in a panic. They all knew about Jackie’s date; the Phi Kappa Psi house loomed behind them. “We have to get her to the hospital,” Randall said.

Their other two friends, however, weren’t convinced. “Is that such a good idea?” she recalls Cindy asking. “Her reputation will be shot for the next four years.” Andy seconded the opinion, adding that since he and Randall both planned to rush fraternities, they ought to think this through. The three friends launched into a heated discussion about the social price of reporting Jackie’s rape, while Jackie stood beside them, mute in her bloody dress, wishing only to go back to her dorm room and fall into a deep, forgetful sleep. Detached, Jackie listened as Cindy prevailed over the group: “She’s gonna be the girl who cried ‘rape,’ and we’ll never be allowed into any frat party again.”

Time out. Set your emotions aside. I ask you, in all honesty: does what you just read make an ounce of damn sense?

Let’s look at it. Here’s this somewhat naïve freshman (oops, I mean first-year) girl (oops, I mean woman) at a Phi Kappa Psi fraternity party with her date, Drew, identified as a member — whom she’d met at the campus’s Aquatic and Recreation Center where they both worked as lifeguards. She goes upstairs with him, lets him lead her into a totally dark (“pitch black”) room where she encounters another guy behind her and screams. It isn’t clear why she screams at this point, but then the two of them crash through a glass table and…

Okay, hold the bus. Are we really to believe that “Jackie” and these guys were rolling around in broken glass for three hours?

She and “Drew” had gone upstairs because it was “quieter,” and screamed when she realized they weren’t alone. What was this, a soundproof room?

Did no one enter or leave during all that time?

She is able to count seven guys, and recognize the last one as being from her anthropology discussion group, even though the room is “pitch black.”

“Jackie” says she finally got up hours later, found herself alone, barefoot and bloodied, and made her way out of the fraternity house where she called her friends telling them something terrible had happened. When “Jackie” meets up with her friends, instead of breaking every speed limit getting her to the nearest emergency room, they debate the future of their social lives on campus. Or so Rolling Stone said (read it above; see below).

Had “Jackie’s” story any veracity, she would have been covered with still-bloody cuts from broken glass, with severe trauma to her pelvic region. Emergency room personnel would doubtless have contacted city police immediately. Assuming the police were on the ball (admittedly an assumption), they would have gone to the fraternity house. This was more than a mere university matter; it was a major felony. They would probably have been able to nail “Drew” and his cohorts in a matter of hours.

For there should have been blood easily identifiable as “Jackie’s” on the floor (or carpet) and on the shards of broken glass from the table. “Jackie’s” blood should have been elsewhere in the house: on the doorknob to the room, on the floor, on the stairs she went down, on the handrail. What happened to her shoes? Were they still in the room? What of the red dress she claimed she’d worn? Where was it? Was it still bloodstained?

None of these matters were investigated, of course, because no one went to the authorities immediately. “Jackie” never filed a formal complaint against the fraternity or any of its members.

“A Rape on Campus” appeared on Nov. 19 and hit the Charlottesville campus like an earthquake. There were immediate protests, acts of vandalism including bricks thrown through first-floor windows of the Phi Kappa Psi house, and even death threats. A group of women faculty led a protest in front of the house. Phi Kappa Psi members, fearing for their safety, fled the house and left campus. The university closed down the fraternity system pending further investigation, punishing thousands of students who’d had no involvement in this case. What was there to investigate, though, after over two years? Charlottesville police investigated “Jackie’s” story as best they could and came up empty. “No substantive basis” was a phrase they used.

Over the next couple of weeks, doubts quickly developed about elements of “Jackie’s” story. Imagine that, given the number of things about her story that a rational mind would have questioned right off the bat. Consider the supposed conversation with her soon-to-be-former friends, which reminds me of people who have ruined their brains with reality TV. T. Rees Shapiro had followed up for the Washington Post and tried to get clarification on what happened during that meet-up. Her friends related a quite different account. They stated that “Jackie” had had a date that night and called them at 1 am, crying. She told them she’d been forced to perform oral sex on five guys. They did not recall any visible injuries, or her being barefoot. They told Shapiro — insisting on the same pseudonyms used in the Rolling Stone story — that it was “Jackie” who insisted on being taken back to her dorm room when they wanted to go to the police.

They also said Rolling Stone had not contacted them for their stories — an unbelievable lapse of judgment — much less attempted to identify any of “Jackie’s” attackers. It was unclear whether she’d even specified Phi Kappa Psi as the scene of the crime, or any other fraternity, for that matter. There were discrepancies of place and time. While “Jackie” had told Erdely she’d met her friends outside the fraternity house after 3 am, they recalled meeting her at least a mile from there, and that it was closer to 1 am.

“Jackie” had gone to sexual assault counselors and university officials who could take no action, given the lack of forensic evidence that obviously would have needed to be collected immediately for any criminal prosecution. Among those she spoke with was a rape survivor named Emily Renda, originally contacted by Sabrina Rubin Erdely, who also reported that “Jackie” had told her of “five” men who had forced her to have oral sex with them, only to learn from Rolling Stone that the number of attackers had increased to “seven,” excluding “Drew” and one other guy “coaching.” The Rolling Stone account made no mention of oral sex.

In fairness, Erdely began to have doubts of her own before the story went to press, beginning when “Jackie” steadfastly refused to identify the ringleader of the attack. She was still afraid of him, she insisted. “Jackie” had described quitting her job at the Aquatic and Fitness Center so as not to see “Drew” who she alleged still worked there. Erdely could not find evidence of any such person. Phi Kappa Psi chapter records indicated that none of its members worked there that semester. Finally, after assurance it would not be published, “Jackie” supplied a name. It is unclear from the public accounts whether the name she supplied is the same name that came from her three friends as someone she had met in a chemistry class: Haven Monahan. She’d supposedly had a date with this person at the time of the incident. The problem was, searches of University of Virginia enrollment records failed to turn up any such student.

Matters soon got worse. A photograph “Jackie” had supplied of “Haven Monahan” turned out to be of a former high school classmate now in another state. “Jackie” supplied a second name. This person was indeed a student at the University of Virginia and worked at the recreational facility, but when questioned, claimed he barely knew “Jackie,” had not been on a date with her, was not a Phi Kappa Psi member, and was not named “Haven Monahan.” Finally Rolling Stone mistakenly gave up its pursuit of the truth about “Drew.”

Clearly, Erdely erred in not putting more pressure on “Jackie.” She never spoke to the three now-former friends to get their perspective; she relied only on “Jackie’s” accounts, which included an alleged conversation in which one of them (whose real name is Ryan Duffin) angrily refused to talk to Erdely, citing his loyalty to the fraternity system at UVA. Duffin denied making any such refusal. The conversation had never happened, he said. Exceedingly strange was an email Duffin claims to have received five days after the supposed attack from this “Haven Monahan,” in which Jackie gushes over him … using material that turned out to have been partially plagiarized from a couple of TV series, Dawson’s Creek and Scrubs. The email account turned out to have been deactivated. Other numbers “Haven Monahan” had used to chat with Jackie’s then-friends turned out to be Internet phone numbers enabling a user to send a text message from a computer or iPad that looks as if it came from an actual phone. “Jackie” could have sent the messages herself.

“A Rape on Campus” was an utter disaster, however one looks at it. Almost nothing in “Jackie’s” story, which was riddled with inconsistencies and a few outright absurdities, could be verified. Her date, “Drew,” appears never to have existed. The dress she said she’d worn that night had disappeared; “Jackie” said her mother had thrown it away. One thinks of that line about the dog eating your homework.

As if to put icing on the cake, Phi Kappa Psi records indicated the fraternity had not held an event on the night of Sept. 28, when the gang rape was supposed to have occurred.

Rolling Stone had little choice except to issue its infamous December 5 partial retraction and try to do damage control. The damage control wasn’t sufficient.

There are no provable matters of fact here, just allegations that, taken on their own terms, make little sense. Feminist-friendly mainstream media had already run with them, however.

Among these allegations is that “one in five university women will be raped, or sexually assaulted” when in college. This seems to have become part of radical feminist sacred writ. But do such numbers make any sense? Do the math: on a campus where, let us say, there are 28,000 students, 15,000 of them will be women. The one-in-five number implies that of this 15,000, 3,000 will be raped or sexually assaulted during their years on campus. Although it is true that many students these days fail to finish college in the traditional four years, we’ll take four years as the baseline (the number is easily adjusted). This implies that in a one-year period, there will be 750 rapes or sexual assaults on this campus.

Keep in mind, too, that the majority of students are away from campus during the three summer months. That means nearly all of these 750 rapes would occur during the nine-month academic year. This comes out to an average of between two and three rapes or sexual assaults per day during the academic year!
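The arithmetic in the two paragraphs above is easy to check. Here is a minimal sketch using the hypothetical campus figures from the text (28,000 students, roughly half of them women):

```python
# Worked arithmetic behind the "one in five" claim, using the
# hypothetical campus figures given in the text above.
students = 28_000         # total enrollment (context only)
women = 15_000            # roughly half the student body
victim_rate = 1 / 5       # the "one in five" claim
years_enrolled = 4        # baseline degree length

total_victims = women * victim_rate          # 3,000 over four years
per_year = total_victims / years_enrolled    # 750 per year

# Most students are on campus only during the nine-month academic year.
academic_days = 9 * 30                       # ~270 days
per_day = per_year / academic_days

print(total_victims)       # 3000.0
print(per_year)            # 750.0
print(round(per_day, 1))   # 2.8 -- between two and three per day
```

The numbers track the text exactly: 3,000 victims over four years works out to 750 per year, or roughly 2.8 per day across a 270-day academic year.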

Adjust the numbers for any actual campus. Does anyone in his right mind believe there are this many rapes or sexual assaults occurring on American campuses, especially in this era of radical feminism? (Yes, folks, there are radical feminists on the more traditional Southern campuses, of which the University of Virginia is the most prestigious, because all colleges and universities are required by federal law to have affirmative action programs that get radical feminists hired.)

What might actual numbers say, even given the likelihood that some rapes go unreported? University of Michigan Ann Arbor economist Mark Perry ran some numbers based on the 32 sexual assaults reported on his campus in 2012. His calculation yielded the figure that a woman on his campus had a 1 in 155 chance of being sexually assaulted that year. Perry has called for more accurate use of statistics on campuses. (See Mark Perry, “A renewed call for accurate government reporting of statistics,” American Enterprise Institute, 23 January 2014.)
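For a rough order-of-magnitude comparison of the two figures (a sketch only; the “one in five” claim and Perry’s number rest on different populations and definitions, so this is not a precise equivalence):

```python
# "One in five" over four years, expressed as an annual rate,
# versus Perry's reported-assault rate for 2012.
one_in_five_per_year = (1 / 5) / 4   # = 1 in 20 per year
perry_per_year = 1 / 155             # = 1 in 155 per year

ratio = one_in_five_per_year / perry_per_year
print(round(ratio, 2))   # 7.75 -- the claimed rate is nearly 8x the reported one
```

Even granting substantial underreporting, the claimed rate exceeds the reported one by nearly a factor of eight.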

There is little reason to think those pushing an ideology — now the norm on college and university campuses — are interested in facts or data, however. We see little evidence of such interest in the above case. What we see is a seasoned reporter looking for and seizing upon an opportunity to portray a somewhat traditional university campus in the worst light possible.

Radical feminist ideology may be targeting men as a group, but there is no reason to think it is helping women. With visible cases such as this one being exposed as probable hoaxes — which may seem a rash judgment, but given the absence of factual, forensic evidence what are we supposed to think? — actual victims will indeed be more, not less, reluctant to come forward. Yes, there are actual victims of rape or sexual assault on college and university campuses; nothing I’ve said should be read as denying this. But Can she be believed? is clearly a more valid question now than it was before “A Rape on Campus.”

The truth is, we have no way of knowing what actually happened that night, even if we assume the incident wasn’t fabricated. Ryan Duffin remained convinced that something had happened. Maybe something did. We don’t know, and probably never will. One thing we do know is that psychological certainty is not evidence — evidence of the sort necessary to put rapists in prison! The Rolling Stone article received over 2.7 million hits prior to its removal. A college’s reputation has been sullied, as has that of the Phi Kappa Psi fraternity, which was turned upside down for no good reason. Lives have been affected by what one administrator reasonably called “drive-by journalism,” based on what may turn out to be the work of a pathological liar. Some women, after all, are very good actresses, and lies have a way of unraveling when enough people put them under a microscope. Am I being overly harsh? Remember, “Jackie” dragged innocent others into this thing. Aside from those at the fraternity, she produced a picture of a former classmate, and then ID’d a guy, clearly innocent, who worked at the campus rec center. Those others were confronted and questioned.

Now for all I know, there might be people reading who will wonder, What’s your stake in all this? Do you have a dog in this fight? If not, why are you involving yourself at all?* As an outsider who lost academic positions to less-qualified women courtesy of academia’s favoritism towards women, yes, indirectly, I do have a dog in this fight. As an outsider, I believe I also have insights that are lost on the majority of established academics (and established journalists) who cannot see the forest for the trees. The leftist mindset that dominates academic faculties and, in a slightly more modest form, major media predisposes its audiences to take lurid stories like “Jackie’s” at face value, despite major lapses in judgment and basic common sense, the biggest lapse at Rolling Stone being its reliance on a single source despite multiple red flags about that source’s credibility.

I have met academic radicals. I wouldn’t turn my back on one. I especially wouldn’t turn my back on those characters who label themselves “male feminists”! One such person, with whom I chanced to have a hostile exchange in the Letters to the Editor column of the American Philosophical Association’s flagship publication back in the early 1990s, clearly had a few loose screws.

It is clear to me when a writer approaches a theme with an agenda, not a desire for truth and justice. That was the case with “A Rape on Campus.” One wonders how many alleged rape cases Sabrina Rubin Erdely passed over before she found one she could portray as sufficiently lurid to make her case that universities are “rape cultures” with administrations that do not respond properly to allegations of sexual assault.

Academic leftists just don’t get it. Judgments about truth and falsity, guilt or innocence, need to be based on factual evidence and logic, not ideology and emotion. Academic leftists do not understand the world of those of us who insist on evidence and logic. They see these as “constructions” evidencing “male domination,” which has the silly implication that women can’t really think logically or make deductions based on evidence. To the extent they get special favors, academic leftists create and perpetuate an environment where truth and justice take a back seat to ideological beliefs, and where propositions (e.g., university campuses are part of a “rape culture”) become true, ipso facto, because a politically favored group says so.

Radical feminists’ commitment to their ideology is as fervent as that of any religious fundamentalist. They are intent on “sticking it” to the “patriarchy,” and if they see a chance to damage reputations associated with it (e.g., fraternities), they will take it. If innocents get harmed along the way, well, just as in times of war, that’s just collateral damage. Few religious fundamentalists, however, have tenure teaching undergraduates at major universities; they are not in a position to cover up or sometimes invent facts instead of revealing them, citing trendy rationalizations about factual evidence being a “social construction” and “gendered.” Radical feminists are. Let’s use common sense: if a rape occurs, is it a “social construction”? Again, and finally, lest there be any confusion: I am not saying campus rapes do not occur. There is just no good evidence that this one did.

According to the CUGSJ report, “Jackie” declined to be interviewed by its writers. She has retained an attorney who commented, “It is in her best interest to remain silent at this time.” No surprises there, with the UVA Phi Kappa Psi chapter preparing what will doubtless be a whopping lawsuit against Rolling Stone. Why would they not? They were lynched in a media environment far more concerned with political correctness and sensationalism than with getting to the bottom of what really happened, if anything did.

Some (e.g., at CNN) expressed surprise that no one at Rolling Stone has been disciplined, much less fired. Insiders always land on their feet. Always.

Is it any wonder that the closest thing to a bestseller academia has seen in over 30 years is Harry Frankfurt’s infamous On Bullshit (2005)?

 
*It might be worth noting that I have no association whatsoever with the University of Virginia, and have relied exclusively on publicly available accounts of this story.

 

Posted in Culture, Higher Education Generally

How Higher Education in the U.S. Has Slowly Self-Destructed

There can be little doubt that at one time, the U.S. had the best higher education system in the world — rivaled, perhaps, only by institutions in Great Britain such as Oxford and Cambridge. It still lives on that reputation, as people still come from all over the world to study at major U.S. universities. For a period now lasting around 65 years, however, American colleges and universities have been going downhill. The process has been long and arduous, and was far from obvious for a long time, but over the past couple of decades the problems have become manifest and are accelerating. What has happened is a long story, obviously; many lengthy books have been written about the decay of higher education from numerous perspectives. I can only share a small part of mine.

If one looks at higher education in the 1950s — the era in which my generation was born — one doesn’t get the sense of an enterprise that would one day head for a cliff. While things were not perfect, there is a sense in which colleges and universities seemed to enjoy a golden age that began around 1950. Universities opened their doors to World War II Veterans attending on the GI Bill, for one thing, and the student population skyrocketed. My father, a World War II veteran, went to an Illinois university on the GI Bill and earned two degrees in chemistry (a B.S. and an M.S.). Many Veterans were the first in their families to go to college which, at the time, was very affordable!

Academic disciplines like mine also enjoyed their golden ages during this period — especially if you were an analytic philosopher (those who took their cues from, e.g., Heidegger, might feel differently). There was a general sense of accomplishment and enthusiasm. W.V. Quine, of Harvard, published his “Two Dogmas of Empiricism” paper in 1951, and it was immediately clear that an event of the first importance had just taken place. Two years later, Ludwig Wittgenstein’s posthumous Philosophical Investigations appeared, bringing Wittgenstein’s later philosophy together in one important volume. Dozens of dissertations and hundreds of journal articles discussed the implications of Quine’s criticism of the analytic-synthetic distinction, his shift towards pragmatism, and his naturalized epistemology, which blurred the boundaries between analytic philosophy and natural science. Wittgenstein’s natural language approach, too, rocked the discipline. Wittgensteinian expressions like language game crept into its official lexicon. The influence of the later Wittgenstein was soon felt in philosophy of science, manifesting itself in the major works of Stephen Toulmin, Norwood Russell Hanson, Thomas S. Kuhn, Paul Feyerabend, and others, many of whom had made close enough studies of the later Wittgenstein’s ideas to see how they applied to the special languages of the sciences and how these changed over time and with discovery and scientific revolution.

Its intellectual life aside, there were jobs in academia! Universities were expanding to satisfy student demand, and this meant hiring more faculty in every field, philosophy included. It mattered little that many of the new students were vocationally oriented rather than moved by intellectual curiosity. For there was plenty of state money being lavished on these institutions, and it only increased when the Soviet Union launched Sputnik in 1957 and caused a sense of panic that we Americans were falling behind! Whatever conflicts of educational philosophy arose could be minimized.

American society in the late 1950s had its dark side. On the one hand, the economy was roaring. We were in the middle of the longest and largest genuine expansion in history. New technologies had been appearing for decades, ending with the newest of the new: television. The largest middle class in history was rising in stature. On the other hand, this was the era of the conformist “organization man.” Also that of the “status seekers,” the “people shapers,” the incipient “power elite,” the Beat Generation, and so on — indications that all was not well. The civil rights revolution was on the horizon, as decades of America’s shabby treatment of its minorities caught up with her.

A cultural myth had arisen surrounding higher education: Everyone Should Go To College. It was a foolish myth, of course, and still is. There were then, and still are, many good vocations one can pursue that do not require a four-year university degree. At most, they might require an apprenticeship of perhaps two years or even less. One does not need a university degree to sell real estate or insurance, or repair televisions or other equipment. These were jobs that needed doing.

Sadly, however, employers bought the myth. They held the purse strings. So people went to college who weren’t comfortable there, and would rather have been out earning their livings instead of sitting in classrooms memorizing material to spit back on tests. They continued to attend due to parental and social pressures as well as employer expectations. One of the consequences is that the value of a college degree started to drop over time. Supply and demand is real, after all. The greater the supply of anything, the less the value of any single unit. The same would eventually be true of university faculty.

While I tend towards the view that my field experienced a few golden years during the 1950s and 1960s, it happens to be the case that a lot of mediocre people made their way into tenured positions during this period of relative abundance. Many of these people might have published a chapter from their dissertations, if that much — or a book review or two — and then did nothing productive for the rest of their careers. Some didn’t do that much. Many who were somewhat more productive produced what came to be known as “secondary literature,” which ran the gamut from good to mediocre, although occasionally it fell in quality to downright abysmal. In fairness, many of those hired during this period excelled in the classroom; that was their vocation. Some, however, did not. I encountered my share of the latter in my journey through American academia, first as a student and then as an aspiring academic.

We all know what happened in the 1960s. There is no means of summarizing that complex era — which really began with the Kennedy assassination — in one blog post; again, it’s been done elsewhere. Like most such events, it had its pluses and its negatives. The aftermath, however, was a changed view of the universities by America’s ruling elites. Higher education, having been ground zero of the late 1960s disruptions, was no longer trusted. The ruling class realized that an independent middle class, pampering its children and allowing them to turn into intellectual idealists who would criticize the system (especially its wars), was in fact dangerous to their privileges as well as their goals. A subtle attack on universities began in 1971 with the Powell Memorandum, which was widely circulated through the elite business community via the U.S. Chamber of Commerce and recommended business take a stronger hand in shaping universities from the top. Some of its readers made their way onto university boards of trustees, and sometimes into administration. Such moves dovetailed nicely with the mid-1970s job market collapse that ended the golden years.

The Powell Memorandum named names, among them the neo-Marxist philosopher Herbert Marcuse, then at the University of California at San Diego. Marcuse’s thinking derived from the combination of Marx and Freud developed at the Frankfurt School, transplanted from Frankfurt, Germany, to the New School for Social Research in New York City, a hotbed of the New Left. Marcuse, who had become the New Left’s most respected philosopher, had a substantial following. The Marcusans, one might call them, would play a definite role in a decline of the humanities and social sciences that later became quite useful to the ruling elites, although the Marcusans get madder than wet hornets if you point this out to them.
Marcuse’s most influential tract for these purposes was neither of his books Eros and Civilization (1955) and One-Dimensional Man (1964), but rather the short essay “Repressive Tolerance” (1965). This essay promoted the idea that equal opportunity was not enough, that mere anti-discrimination laws were not enough: absent the actual institution of repressive measures against the white majority, they would only preserve the latter’s advantages throughout society. The gradual institution of this new repression, meant to allow blacks and soon women to thrive, began to influence the humanities, where it took such forms as identity politics, critical race theory, and so on. It would help demonize the white male as modern history’s biggest villain through a process of highly selective pseudo-scholarship. It would push special studies aimed at women and minorities (“women’s studies,” “gender studies”) in increasingly extreme directions when the promised results did not magically materialize across the board, and we began to hear about systemic as opposed to systematic discrimination. The radicals wanted to place white men (eventually straight white men) at a disadvantage, although no one was supposed to say that! That was offensive, the sign of a closet Klansman! That was how the new “repressive tolerance” was to work. It was selective tolerance. It tolerated some at the expense of others, as those with political agendas always do.

In the universities, the extreme scarcity of jobs made conformism an imperative. People visibly not acceding to the new views were simply not interviewed for the few positions that became available. Even for the rest, networking, positioning, “people skills,” and the like became far more important survival skills than the sort of accomplishment that produced a Quine or a Wittgenstein, or their highest-quality commentators and protégés. This was the environment in which lost-generation philosophers such as myself went to school, and the environment in which we hit the job market in the 1980s or later.

What we didn’t see immediately, or at least I didn’t, was the corporatization of the universities occurring in accordance with the Powell Memorandum, mainly because it was happening at the upper administrative echelons, not in the departments where we were. The Powell Memorandum had implied a need to place a more business-oriented mindset in charge. Administrators rose to the occasion (complete with new “subjects” like “educational administration”), and the universities began to adopt the values of the larger culture on display during the it’s-morning-in-America Reagan years: mass consumption, commodification of everything and everyone, and a latent anti-intellectualism perhaps best expressed in Ronald Reagan’s remark, years earlier as governor of California, that (to paraphrase) we “shouldn’t be subsidizing intellectual curiosity.”

This was the rise of neoliberalism, whose godfathers were the economists Friedrich A. Hayek and Milton Friedman. While long in the making, neoliberalism came of age in the 1980s and became the dominant philosophy of higher education administration (along with much else!) in the 1990s. Its supposed philosophy: let the free market decide everything!

Today, of course, the full corporatization of higher education at the hands of the neoliberal mindset is sufficiently well documented that I can probably assume it here. The move towards hiring part-time faculty and phasing out tenure is one aspect of this mindset; the view of students as consumers is another. The new consumers continued to attend college in huge numbers despite steadily skyrocketing tuition. This was made possible by readily available, federally guaranteed student loans. So much for the free market deciding everything. Student loans, which guaranteed that institutions would make money, made it possible to keep raising tuition to levels that would support the lavish salaries paid to top administrators, the new buildings and campus beautification projects meant to make campuses look glitzier and more business-friendly, the instructional technology, and so on. To be sure, less money for administration would have made it possible to pay more full-time faculty, but that is water under the bridge now. One could plausibly argue that the continued production of Ph.D.s, as if the golden age of the pre-1975 era still existed or would come back with the projected “wave of retirements,” was foolish. Supply and demand again: increase the supply of x beyond all reason, and certainly beyond the actual demand for x, and the price x can command drops like a rock.

But we should be wary of mechanical appeals to supply and demand. As this scathing blog post notes, once an entire endeavor (such as higher education) has embraced the business, profit-maximization model, it will automatically move towards replacing the more expensive with the less expensive, whether we are talking technology or human labor. Working conditions will deteriorate across the board. With the rise of MOOCs (massive open online courses) and similar moves towards Internet-based education that can be offered at very low cost, if students begin to choose these over tens of thousands of dollars in student loan debt, the shift could begin wiping out tenured faculties all across the country! Tenured faculty appealing to supply and demand as their best defense against the shabby treatment of adjuncts should beware of where those appeals actually point: they, too, could become expendable faster than they think! Among the things we should be thinking about is whether certain ideas, or practices, should be exempt altogether from the whims of the marketplace, if only because the functionality of the marketplace depends on them. But that is another post.

The real tragedy here is the talent lost to academia via a kind of brain drain, as the best minds refuse to be treated like crap simply because they had the misfortune of finishing their Ph.D.s after 1975. Corporatized higher education is actually a very wasteful system. I recently noted this squandering of talent in the comments section of Brian Leiter’s philosophy blog, on a thread generated by a poll assessing the correctness of Harry Frankfurt’s comment that academic philosophy is in the “doldrums,” the comment that inspired my second post here. Of those who responded to the poll, 48% said yes, philosophy is in the doldrums; 36% said no; and 16% were not sure.

I observed again the near-absence of significant figures not in their 60s and 70s, at least in the U.S. Timothy Williamson, in his late 50s, is British; David John Chalmers, in his late 40s, is Australian. To be sure, assessing the long-term stature of a philosopher is not necessarily easy; but let’s ask again, when Quine published “Two Dogmas” was there really any doubt that something important had happened? When was the last time we saw something like that? Probably when Chalmers first introduced his “hard problem of consciousness.” That was in the 1990s. Nothing of that magnitude has happened since. Possibly I’ve just missed it, from having been an outsider. I don’t think so, however. If someone reads this and thinks I am wrong, feel free to post a comment drawing attention to the landmark event in professional philosophy I have missed.

The hollowing out of the discipline, caused in part by the job market collapse of the 1970s, taken further by the political correctness revolution led by the Marcusans beginning in the 1980s and triumphing in the 1990s, and completed by the corporatization of the universities also beginning in the 1980s and continuing since, led to the “doldrums” Frankfurt spoke of.

All this happened in tandem with the changing technology of the 1990s. Some have retorted, in response to National Adjunct Walkout Day (Wednesday, February 25, 2015), that those who don’t like the labor situation in academia ought to do the obvious thing and find another line of work. I submit that many potentially promising philosophers have done just that! There is no way of knowing how many, but they have probably been doing it quietly for at least 20 years now, mentally gauging the hostility of academia and deciding they’d rather be elsewhere! New technology opened a lot of doors, after all, and as every thinking person knows, many of the analytic skills that make a person good at philosophy are adaptable to computer programming, website development, design, and assorted other information technology fields. As someone who (in a manner of speaking) walked away from an adjunct position, if I’m ever asked, “Where are your generation’s Wittgensteins, Quines, etc.?” I’ll answer, “Probably working for Google, or involving themselves in tech start-ups.”

Do I need to point out that this is talent permanently lost to professional philosophy, whoever we decide deserves the blame?

My comment got an interesting response, from someone wondering where today’s imposing figures are in other academic disciplines as well. This poster suggested plausibly that philosophy is not alone in its “doldrums,” that academia is in dire straits generally.

To begin summing up: disciplines such as philosophy, history, literature, and foreign languages were once not just the core of liberal arts learning; some mastery of them stood at the center of what it meant to be an educated person. Today they have largely been displaced by STEM subjects, these being the subjects employers want, as “education” becomes essentially job training. Students attend unabashedly to get job skills, intelligent enough to know they will graduate as debt slaves. As the Book of Proverbs has it, the borrower is slave to the lender.

Higher education faces that sort of problem at the student end. At the administrative end are the misplaced priorities created by top-heavy administrations that keep getting top-heavier, empowered by a neoliberal mindset that speaks of free markets. Given today’s structural realities, this means freedom for the point-one-percent to do as they please and rationalize it in free-market language, while forcing servitude on the rest of us: for if we haven’t devoted our lives to lining our pockets, then so much the worse for us. With its present priorities, is it any wonder that higher education is slowly self-destructing, and that the overall educational level of the public is plummeting?


Leopold Kohr: Unsung Hero of Twentieth Century Social Philosophy for the Twenty First Century

As an outsider, I’ve tended to gravitate towards other outsiders — not because they are outsiders but because very often they have something to say, something which got past the gatekeepers of their time and survived because it was important. Such a person was the economist and social philosopher Leopold Kohr (1909–1994).

Over the past couple of weeks I’ve penned what I hope will prove to be a definitive essay on Kohr’s work and life. You can read it here: Leopold Kohr: Prophet of a Coming Decentralization. I have a very slightly different version here (I fixed a couple of typos and minor verbal mishaps): 06 LEOPOLD KOHR PROPHET OF DECENTRALIZATION definitive version.

Kohr illustrates a point made in last week’s post: the fundamental unsolved problem of political philosophy, practical as well as theoretical, is the containment of power, i.e., the containment of those who are fascinated with power and measure all value and worth by its exercise. Kohr’s work is somewhat depressing in its contention, for which there is abundant evidence, that in this fallen world it is lack of opportunity, rather than any commitment to ethical principle or a moral view of the universe, that limits power. The only means of limiting power is to keep our organizations small, be they governments or corporations. For us, of course, it is too late for that. But clearly both governments and corporations are unsustainably large and heading for eventual decomposition. So Kohr’s work is encouraging in that it means that even if the “new world order” feared across the political spectrum gets built, it won’t last long. Its continued existence would depend on something it could not maintain: institutional loyalty.

Its decomposition would be incredibly messy, however. As will likely be the decomposition of the governmental and economic arrangements we have now.

Kohr was a prophet (sort of) of the small political unit, but he had trouble telling us how best to get there. One of his side projects was advising both secessionist groups and urban planners. Today he would probably be advising secessionist groups and preppers, hoping for the right level of organization and noting that we have something of a choice about where we want to end up. We could end up in a neo-medieval world run by networks of landowning warlords who would control everything by controlling access to whatever technology survived the collapse. Or we could end up with what one might call a network of small units of governance. The former could well be the result of an unplanned decomposition of our present order. The latter would require extensive planning based on conversations we need to be having now: conversations about what works (autonomous, local free markets) versus what history is proving does not work (a financialized money system based on debt and, ultimately, therefore on control). The question before us: are we ready to have that conversation?
