Consciousness Denialism: Galen Strawson vs. Daniel Dennett

Denialism? The term suggests something irrational at best, maybe even malicious. After all, that’s the word used by climate scientists for those who don’t believe climate change is happening. Is it a good idea to invoke such a concept when we talk about philosophers who have counter-intuitive views on consciousness?

Well, yes, as it turns out. Because these philosophers have convinced themselves that, in the last analysis, there is no such thing. At least not in any recognizable sense.

Galen Strawson, in a recent piece worth reading and commenting on(1)*, calls this idea the Great Silliness. Surely that’s harsher than calling such philosophers denialists. I would have stuck with denialist, but that’s just me.

What do we mean, consciousness? That is, after all, the $50,000 question. In the case of human beings, surely we mean, if anything, this first-person point of view that, in my case, tells me (a) immediately that I am alone in my home office (except for two cats) writing this blog entry, that there are pictures and post-it reminder notes on the wall above my laptop, that daylight is coming in my side window, that I can hear random sounds from out there, and so on. Philosophers use the term qualia for such experiences. (b) While being aware of all these things I can become aware of my awareness of them, should I choose to be. All of these things I am consciously aware of. Doubtless, since you are receiving a different array of sensory experiences reading this, your consciousness has different content than mine, but we all have this first-person point of view (as some philosophers have begun calling it), qualia plus awareness of our awareness of qualia, which seems at once elusive, because it is so all-pervasive, yet also immediate and — this is crucial — irreducible, not to be explained in terms of something radically different from it.

Is this what is being denied? Why? Does consciousness denialism even make sense, if it means we are, in some sense, not really having qualia, but only seem to be having them?

Strawson identifies the major denialists: folks like Daniel Dennett, author of numerous books on what is called the philosophy of mind, a strange term to use just in case there is no room for anything in our ontology to be called mind. Strawson also mentions Brian Farrell, Richard Rorty, and Paul Feyerabend (early in his career, at least); he might have mentioned the Churchlands (Paul and Patricia), two of the leading voices for eliminative materialism, the idea that all of our propositional attitudes — our talk about beliefs, thoughts, experiences, etc. — can be replaced, in Feyerabendian wholesale fashion, by the language of a completed cognitive neuroscience. As Strawson describes it: “One of the strangest things the Deniers say is that although it seems that there is conscious experience, there isn’t really any conscious experience: the seeming is, in fact, an illusion.” There we are. Or are we there? If we’re not really conscious, how do we know where we are?

While those final rhetorical questions might seem at first glance like gibberish, they underscore an important point about denialism regarding consciousness: ultimately it doesn’t make any sense. This senselessness, moreover, reflects negatively on any larger philosophy or worldview that finds itself having to embrace denialism about consciousness, and perhaps on the institutions (and the profession as a whole) that nurture this sort of thing.

Wittgenstein remarked numerous times on the strangeness of saying things like, “I think I am in pain,” or, “I thought I had a pain, but I was wrong.” What was motivating him? The fact that having a pain is immediate! It makes no sense to “think” it, or to “think” I can be wrong about it. So I should say (unless I am delivering a philosophy lecture!) “I have a pain” (or “I am in pain”) while indicating where the pain is, say, in my left foot.

Having a pain in my left foot may surely be coordinated with certain neuro-chemical events going on there, events which may have a specific cause I could seek out. But these events are not the pain, any more than what causes them is the pain. Pain is information. It is what we might call a vertical systemic communication that something is wrong. It is, that is, a communication from the complex systems in my foot to that great complex system in my head which manifests as my conscious experience — the experience of a person, that is, not a mere organ or body system — that “I have a pain in my left foot.” To find out what’s wrong, I might slip off my left shoe and discover to my dismay that I stepped on a nail. I immediately recall an earlier conscious stream of experiences, that of walking through a construction area when I was out earlier. I am conscious, moreover, of my consciousness of these memories now.

So what ever motivated anyone to believe there is some kind of philosophical problem here, that something as basic as consciousness needs explaining? Is there something truly mysterious about the fact (for fact it is) that I am conscious — and conscious of being conscious?

While we don’t have time here to relive the whole historical trajectory that eventually led to so strange a conundrum, we can single out the metaphysics (and worldview) that has had the most problems with consciousness: materialism, which has come in many forms, with mechanistic-reductive, Marxian, identity, and eliminative materialism being the Big Four of the past century, and with much of the debate in philosophy of mind being between advocates of one over the others. Modern philosophy of mind, in fact, developed essentially around a single question: how can consciousness exist and have the properties it seems to manifest (e.g., intentionality, the capacity to formulate and grasp concepts, the capacity to receive visceral information in the form of qualia, i.e., of pain, the capacity to serve as the agency for correct inference of conclusions from premises) in a material reality?

Much of the activity of the philosophy of mind, in other words, has consisted of trying to jam square consciousness into one or another of the round holes carved for it by materialists. As a branch of philosophy the field is still very active because of how consciousness has resisted every effort to fit its square pegs into one of these round holes: from the “paradox of the thinking behaviorist” formulated almost a hundred years ago by Arthur O. Lovejoy to the “hard problem of consciousness” observed by David John Chalmers in the 1990s.

Maybe the inability to fit something so basic and ineliminable as consciousness into one of the boxes supplied by materialism is a sign of a fundamental problem with materialism, with the materialist theory of reality. Maybe it is a sign of the inadequacy of that theory.

But so strong is the grip of this theory of reality on professional philosophy that to challenge it is to risk being deemed “unscientific” or “irrational” or labeled with some worse epithet. Even Strawson hedges his bets on this point. He says, at a crucial juncture in his account of consciousness denialism, “What people often mean when they say that consciousness is a mystery is that it’s mysterious how consciousness can be simply a matter of physical goings-on in the brain. But here, they make a Very Large Mistake, in Winnie-the-Pooh’s terminology — the mistake of thinking that we know enough about the physical components of the brain to have good reason to think that these components can’t, on their own, account for the existence of consciousness. We don’t.”

Later, Strawson invokes naturalism, from which, as he puts it, it does not follow that consciousness does not really exist, only that if it exists it is a natural process … not something supernatural or non-natural (whatever this means). What do we mean, naturalism? Is naturalism something meaningfully distinct from materialism? Are there nonmaterialist forms of naturalism? Strawson tells us, “Naturalism states that everything that concretely exists is entirely natural…,” which could serve as a textbook illustration of the circular definition. But “given that we’re specifically materialist or physicalist naturalists (as almost all naturalists are), we must take it that conscious experience is wholly material or physical. And so we should, because it’s beyond reasonable doubt that experience … is wholly a matter of neural goings-on: wholly natural and wholly physical” (emphases Strawson’s).

All of which means that by invoking a separate concept, naturalism, we went nowhere except in a large circle. Strawson would disagree, of course. He contends that the problem is our vast ignorance of the physical, especially how the brain and central nervous system “generate” consciousness. The problem: he is still trying to explain consciousness in terms of something consciousness is not, and thus participating in a research program that has given us a hundred years of false leads and dead ends. His position is closer to Dennett’s than he thinks.

Suppose one could have a full and complete account of every event in the brain, accepting for now that the word physical applies to events in the brain. Would we have an account of, say, the intentionality of consciousness, of the fact that when I am conscious I am invariably conscious of something in my proximate environment (or it could be awareness of an idea of which I am thinking at a given moment)? I submit that in asking the question we show that we would have pushed the problem back a step rather than resolved it, creating another explanatory conundrum (or epicycle): how intentionality as a property emerges from a concatenation of physical events, or how we can justify assuming that it does in advance of empirical results that point specifically to this. Such results might emerge … or, if history is any guide, they might not.

I believe the “problem” of consciousness has resisted and continues to resist solution because consciousness is not the sort of thing that can be explained within the constraints supplied by materialism — in any form. We cannot claim to explain it by having “reduced” it to something else; we cannot honestly claim to have “eliminated” it!

So what should we do? Go back to being Cartesians? No. Lapsing back into dualism is probably the worst thing we could do! But we could begin by suggesting that the most radical-seeming of the post-Cartesian British empiricists, Bishop George Berkeley, was onto something in denying not mental substance, as early post-Cartesian materialists like La Mettrie did, but material substance. “To be is to be perceived,” said Berkeley; what exists, that is, is mind-dependent. But with Berkeley, we are still stuck inside substance metaphysics. What happens if we jettison substance metaphysics (no matter, never mind — at least not as Western philosophers have usually talked about these things!)? Do we need to talk about substances to provide descriptions of our experiences, or, for that matter, of whatever it is that is going on in my brain and senses when I have an experience?

Going back to my consciousness of being here, in my home office, typing this on my laptop, two cats in here (both sound asleep), light coming in from the window, random sounds also coming in from outside: have I left anything crucial out by omitting all references to substance? Or is such a concept anything more than an abstraction, the philosophical equivalent of an item of debris just shaved off by Occam’s Razor? I have referred to what is within my proximate environment, that of which I am conscious, with no intended implication that there aren’t many, many things that are quite real but of which I am not immediately conscious: the fact that my laptop is made of molecules, for example, its molecules made of atoms, etc.; or that the light and sound have specific properties for which physicists have reserved the term physical (e.g., wavelengths of light); or the fact that processes are occurring in my brain, triggered by my having experiences of events or processes in my proximate environment.

Where is the material substance?

I submit that it isn’t anywhere, because there simply is no such thing! Philosophers should get rid of it!

Does this mean not being materialists? (And, for now, leaving the naturalism question aside as an evasion.)

In favor of what? My suggestion (although I am aware, this is only a start): take consciousness seriously. Take it for what it is, observe what it actually does, and do not try to eliminate it, reduce it to something it is not, or otherwise explain it away. Do, at the very least, what phenomenology started to do, which is provide a description of its operations and contents (regrettably, like so much academic philosophy, phenomenology as a research program soon got lost in a forest of neologisms). Since consciousness seems to be ineliminable without the results being either self-contradictory or unintelligible, I suggest taking the existence of consciousness as a basic datum in an account of reality that includes intelligent beings, and perhaps all living things (as capacity for self-awareness may be a mark of intelligence).

For there isn’t just human consciousness; there are many kinds and levels of consciousness. My cats are conscious when they are awake. They don’t have the same first-person point of view that I have, but they are capable of experiencing pain, aware of their surroundings or proximate environment, and able to navigate around in the latter.

Go back to the pain in my left foot, which arose because the tissues in my foot received a sudden injury. Nerve-endings were able to communicate this vertically: I received the information as pain. Clearly, materialists are not wrong in saying there are chemical and neurological events that correlate with my experience of pain (although there are “phantom pains” which amputees claim to “feel” in the amputated limb even though the limb is no longer there; thus we have qualia which correspond directly with nothing at all). Where are they wrong, in this case?

Their orientation is wrong, as it forces them to deny the basic nature of consciousness by trying to explain it in terms of that which is not (on their terms) conscious. The orientation I will suggest is supplied by replacing material substance (to the extent we should replace it) with systems: structured wholes consisting of cooperating elements or components that are themselves systems rather than basic units (subsystems, that is). To ask, Are systems material? is then no better a question than, Are systems mental? All we can say, and I will try to bring this to a close with this thought, is that systems are by their very nature conscious, if by this we mean the capacity to apprehend, sometimes anticipate, interact with, and respond to events, including other systems, in a proximate environment. By this last term (as I have used it several times now) all I mean is the aggregation of surroundings able to affect a system or be affected by it, whether through interaction or mere passive recognition.

Looking at consciousness and its objects this way will require a mental-cognitive leap, if for no other reason than that philosophers are so accustomed to talking about such things as the interactions of systems and the transmission of information (in, e.g., the human body) in materialist terms. But I will submit that if we can cease the attempts at reduction, the attempts at explanation that merely explain away, and above all the attempts at elimination — denialism — then our capacity to appreciate the richness of our experience and of our interactions with other systems, especially that category of system we call persons, will be that much enhanced.

We may even find new ways of thinking of systems, taken as they are, in ethical terms. We do this now, of course; it’s another thing the materialist view of the universe has difficulty making sense of. Eliminate materialism instead of consciousness, though, and we may see new integrations of thought, experience, and morality that would never have occurred to us before.

Dennett tried to rebut Strawson, and Strawson replied in the same venue.(2) The thrust of Dennett’s attempted rebuttal is that he doesn’t really deny that consciousness exists: “I don’t deny the existence of consciousness; of course, consciousness exists; it just isn’t what most people think it is,… I do grant that Strawson expresses quite vividly a widespread conviction about what consciousness is. Might people — and Strawson, in particular — be wrong about this? That is the issue.” (Italics his.) What, then, does Dennett think consciousness is?

Invoking the “pragmatic policy of naturalism” (there’s that word again) in attempting to segue explanations of consciousness into the world as the materialistic naturalist understands it, Dennett argues against the first-person point of view, calling it “privileged insight” and adding that “we have no immunity to error on this score.” While it is true enough that I can be mistaken in some of my direct perceptions (seeing the pencil in the water as bent, then immediately correcting the error by reasoning from my knowledge of refracted light), I do not see how I can be mistaken about having that pain in my left foot. That is what we non-denialists mean by the “privileged insight” of the first-person point of view.

Strawson stands his ground, quoting numerous past statements by Dennett in which he certainly does seem to be denying that any such thing as conscious experience of qualia exists. Consider his zombie analogy: a “zombie” in this context (these are Dennett’s words from his Consciousness Explained, 1991) “is behaviorally indistinguishable from a normal human being, but is not conscious.” Dennett proceeds to say, “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies.”

Later, Strawson quotes him as saying, “When I squint just right, it does sort of seem that consciousness must be something in addition to all the things it does for us and to us, some kind of special private glow or here-I-am-ness that would be absent in any robot. But I’ve learned not to credit the hunch. I think it is a flat-out mistake” (Intuition Pumps, 2013).

So is Dennett denying that consciousness really exists in any form other than a purely behavioral sense, in which we are to assume that if two entities are behaviorally alike they are psychologically alike? It seems to this reader that he is. What separates first-person consciousness from other kinds of phenomena able to be studied, whether by scientists or by philosophers desperate to seem scientific, is that in the final analysis, we are inside of it. This is what makes it ineliminable.

When Dennett has a splitting headache, as all of us do from time to time, does he really and truly refuse to “credit [that] hunch” that he is experiencing qualia, and that his being mistaken about his having a splitting headache ultimately does not make any sense?

It does seem likely, of course, that materialism combined with the sort of scientific empiricism that refuses to credit the validity of any argument or the truth of any conclusion reached through private introspection leads logically to the idea that first-person consciousness can be eliminated from our philosophical bestiary. But to some of us, this does not count against first-person consciousness, it counts against materialism. I would conclude with the note that unless we have a materialistic explanation of this sense of “being inside” streams of experience about which skepticism or denial simply fail to make sense, we should bite the bullet on this one and try to consider alternatives to materialism as our metaphysical worldview — for this is, after all, what it is: not an a priori truth nor the conclusion of specific empirical studies but a philosophical and methodological presumption, and, to my mind (a word I just had to get in here somewhere), an unnecessary one.

(1) http://www.nybooks.com/daily/2018/03/13/the-consciousness-deniers/

(2) http://www.nybooks.com/daily/2018/04/03/magic-illusions-and-zombies-an-exchange/

*For some reason WordPress is not allowing me to add links or otherwise format this text, and its support has proven impossible to contact about this. Please accept my apologies for the missing italics, missing in-text links, etc.


A Look Behind the Cultural Marxism Controversy: A Sort of Introduction

[Author’s note: this developed in response to an exchange of opinions on a Facebook forum, during which it became clear to me that any fruitful advance of the discussion called for something more detailed than could be attempted in a setting like that. Hence the material that follows, which hopefully will prove to be just the first installment of more material to follow. Incidentally, if you believe this to be worth your time, please consider supporting my writing on Patreon.com.]

What, precisely, is cultural Marxism? To some, the term epitomizes all that has gone wrong with Western civilization, especially higher education. It represents an insidious attack on Western civilization’s premises, especially its Christian ones. To others, it is but a conspiracy theory and a code word for resentment against the advances made in recent decades by ethnic minorities, for hatred of immigrants (legal or otherwise), of homosexuals, and of other now-protected groups — perhaps also expressing the fear of the dominant group as its demographics start to shrink and it is slowly stripped of its power.

Both of these represent extremes. Is there a truth we can get at?

At first glance, the term seems to be a misnomer. Classical Marxism embodied an economic philosophy of history and a prediction of capitalism’s end. Since, according to the dialectical materialism at classical Marxism’s core, “it is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness” (Preface to A Contribution to the Critique of Political Economy, 1859), Marx had little to say about culture, regarding it as part of the “superstructure” generated by the relations of production, except to note that it would express these relations in various ways.

Capitalism, meanwhile, proved to be more resilient than Marx had imagined. It was unclear that the proletariat wanted to overthrow the bourgeoisie. If anything, the proletariat wanted to join the bourgeoisie. Many of their children did, as capitalism raised the standard of living everywhere it was practiced. It did not raise it evenly, however, and during the first decades of the twentieth century the left turned toward reforming capitalism instead of abolishing it. What philosopher Richard Rorty called the reformist left thus emerged to “save capitalism from itself,” as the saying goes, saving it from its own worst tendencies (see Rorty’s Achieving Our Country: Leftist Thought in Twentieth-Century America, 1998). The result was the social security system, Medicare, the minimum wage, stronger unions, and so on. Prophecies of inevitable doom for capitalism continued, but in the hands of writers such as Joseph Schumpeter (see his classic Capitalism, Socialism and Democracy, orig. 1942) they were far less revolutionary. If capitalism ended, it would be because the masses voted for more and more socialism at the ballot box. And it would happen quietly, as socialism had become a dirty word in America (as opposed to, say, entitlements or safety nets).

A cultural left arose during the 1960s. It shifted the emphasis from the class categories of economic Marxism to race/ethnicity, gender, and eventually sexual orientation. As racial/ethnic minorities, women, sometimes religious minorities, and eventually sexual minorities came to play a role analogous to that of the proletariat in economic Marxism, with straight white Christian males cast in the role of the bourgeoisie, the term cultural Marxism became as inevitable as identity politics. Both terms came to be used by critics to describe the philosophy and impetus behind the political correctness which appeared in American colleges and universities in the 1980s (that term coming into use in 1990-91, although Lenin had used it back in the 1920s for those who adhered to the party line too closely).

Many of us looked initially just at political correctness, saw it beginning to disrupt both teaching and scholarship, urged that it be opposed, and were stymied when those who claimed the mantle of conservatism proved either unable or unwilling to act. We sought reasons for its entrenchment, as well as for the fact that we critics of political correctness found ourselves deconstructed and demonized (when we weren’t simply ignored) instead of being answered with the same kind of reasoning we tried to employ. It was assumed, that is, that if you questioned the logic and justice of (e.g.) affirmative action programs, your motives had to be racist, misogynist, or “homophobic,” and that motivation mattered more than any reasoning we brought to bear: almost as if our reasoning were part of that Marxian “superstructure,” determined by our social being and presumed status rather than the product of autonomous consciousness. (Some of us, incidentally, had no special status. We were struggling, absent Ivy League connections, to get academic careers started. This was in the 1990s. I should note that the situation is vastly worse today.)

Hence the idea of cultural Marxism, and the rest, as the saying goes, is history; yesterday’s preferential favors have evolved into today’s “safe spaces,” open attacks on freedom of speech, etc. But does the phrase refer to an actual movement, or is it just a so-called conspiracy theory put forth by a number of renegade conservatives (e.g., Paul Gottfried) starting in the late 1990s? Is it responsible for what has happened to higher education, especially the humanities and liberal arts, or was their vulnerability to it less an actual cause and more a symptom of deeper problems? What was the actual status of these disciplines under even the reformed capitalism of the twentieth century? Did this status substantially weaken them? Finally, was economic Marxism truly dead outside the minds of a few stodgy academics — thought to be buried forever when the Soviet Union went out of business and history ended (Fukuyama) — or has it been given a new lease on life by the obvious massive consolidation of wealth and power, and the dislocations spawned by the global neoliberalism that has since risen to global dominance?

These are complicated questions, and sorting them out in a single essay is an impossible task. I will try to do three things in the material to follow in future installments under this general title. First, I wish to outline the evidence of a genuine movement with a certain level of impact. Second, though, I will argue that its capacity to have this impact has a separate explanation, because indeed, much of higher education outside vocational disciplines has had an uneasy status under even reformed capitalism. In many respects, humanities and liberal arts subjects had been effectively marginalized within academia, as they were not as profitable to the economic system’s most powerful actors, corporations, as were subjects like business, engineering, to some extent economics (once separated from the earlier political economy), and certain of the sciences (e.g., chemistry). Finally, I will float the issue of whether what is now being sought is anything truly revolutionary, in the sense of challenging the legitimacy of the global hegemony of neoliberal elites, or just a diversity of faces within their corporations, controlled universities, and other institutions left structurally unchanged.

Moreover, it was not as if even reformed capitalism had delivered Utopia. It was true enough that rampant racial discrimination had existed not just in the workplace and professions but throughout American culture. This was well documented. Academics knew it; they had provided much of the documentation. Many practitioners of fields like history, literature, philosophy, and so on, therefore leaned left, and had few problems when a cultural left relevant to their disciplines arose.

The economic, educational, and sociological problems of marginalized groups did not lend themselves to immediate solutions: one can argue over causes, but not over this specific fact. Advocates grew impatient and continued to speak of discrimination and oppression; the very meaning of discrimination shifted from specific acts to an absence of politically acceptable outcomes, courtesy of the Supreme Court’s decision in Griggs v. Duke Power (1971). Impatient demands for proportional representation had begun to give rise to their own literature by the 1980s; a great deal of this literature emphasized oppressed perspectives and rejected traditional norms of the neutrality of perception, objectivity, and rationality (documented, with hostility, in works such as Roger Kimball’s Tenured Radicals: How Politics Has Corrupted Our Higher Education, 1990). This multiple-perspectives approach is very much in alignment with the Marxian idea quoted at the outset: that our consciousness is determined by our social being and relations, not the reverse.

Traditional notions of perception and objectivity had come under independent criticism from the “historicist” strain in recent history and philosophy of science, at the hands of writers such as Thomas S. Kuhn (The Structure of Scientific Revolutions, orig. 1962) and Paul Feyerabend (Against Method: Outline of an Anarchistic Theory of Knowledge, orig. 1975). Arguably, these works destroyed the naïve, positivistic model of science as the theorizing of autonomous individual scientists working from neutral observations and hypothesis-testing; they emphasized the social embeddedness of even hard-scientific research in physics. What was true in the physical sciences was surely far truer in “softer” disciplines, and in our lives as participants in institutions and as members of communities generally.

Thus the Anglo-American world, especially its leading higher educational institutions, was vulnerable — on multiple fronts — to something that became known as cultural Marxism. But what is it? How did it develop? Are we able to evaluate it critically, or is any critical evaluation of it almost automatically tainted, e.g., by the fact that the present writer is a white male — a Christian, to boot! — and does my status as an ex-academic affect my evaluation (assuming I have the resources to complete it)? These last questions I must leave to others, having made my status and methods as transparent as possible. But if all of us are perceived as biased by whatever social embeddedness and unavoidable group characteristics we have, the paradox should be evident: the knowledge-seeking endeavor itself is invariably tainted and destroyed; and this, of course, reflects back on all that is or can be said by social-embeddedness and group-bias theorists themselves. If all there is, is the collective subjectivism of identity, then all claims about this collective subjectivism are just as tainted, and their conclusions can be discounted, unless we are to give them a kind of reverse-privilege status born of (I suppose) past suffering. But any considerations in favor of this gambit will be equally tainted, unless we are simply to support them with a Kierkegaardian “leap.”

Dealing with all these quandaries won’t be possible here. Perhaps in essays to come down the pike. What follows is about cultural Marxism. I will limit myself to its development, its results, and then attempt some commentary on whether it is an effective problem-solving approach or more of a distraction from the kinds of issues an economic Marxist would raise in the era of neoliberal globalist capitalism.


Why Donald Trump Won in 2016, Chapter Umpteen Thousand and Counting….

Matt Bai’s Yahoo! columns are usually worth one’s time, however annoying they often are with their unexamined assumptions.

Bill Kristol, son of neoconservative godfather Irving Kristol, interviewed here, doesn't seem to distinguish between conservatism and neoconservatism, and the problems only go downhill from there. There haven't been any visible conservatives in charge in the Republican Party since Reagan left office in 1989. In the meantime, the neocons (1) lost the universities and the intellectual world more broadly to political correctness, largely because of their sheer terror of being called racists. Now, engineers are being fired from their jobs in corporations like Google for questioning radical feminist dogma about why they can't seem to recruit more women engineers; (2) neocons have contributed (they aren't the only villains here) to the decline of the U.S. middle class with the "internationalism" Kristol praises (i.e., open borders, pseudo free trade deals such as NAFTA, etc.), as corporations were always more than willing to ship good-paying jobs to foreign countries for cheap labor, a process that has continued no matter which party controlled the White House; (3) greatly worsened the situation in the Middle East with the Iraq War, surely the worst U.S. foreign policy blunder of our lifetimes, which guys like Kristol supported (interestingly, this disaster was also backed by mainstream center-left liberals like Thomas L. Friedman); and (4) by not paying attention to the reckless and sometimes criminal policies dominating Wall Street, contributed massively to the financial meltdown of 2008.

In short, on every front we examine, the Establishment elites blew it!

That is why Donald Trump rose to the top in the 2016 primaries, beating corporate favorites like Jeb Bush, and since Hillary Clinton was widely regarded as part of that same Establishment, with the arrogance of someone who wouldn't even campaign in states filled with "deplorables," that is why Trump defeated her and became president. In addition, whatever one thinks of the Electoral College, it spells out the rules by which elections are conducted in the United States, which were originally written to prevent elite-dominated enclaves like Washington, D.C. and New York City from dominating the entire political system. So in that sense, the system worked.

The Establishment elites made this bed (Trumpism), however little they like lying in it. Establishment Republicans sure aren’t going to undo their blunders by trying to nominate someone like Nikki Haley who wouldn’t have a prayer in the event that the Democratic Party comes up with a real candidate in 2020, however unlikely that is.

One thing I left out. Neocons from the start have been on the bandwagon trying to inflame tensions with Russia, whether through the allegation, so far unproven, that the Trump campaign colluded with Russians to win the election, or through equally unproven allegations of Russian aggression (e.g., against Ukraine) in their part of the world. What do they want? War with Russia? Or just maintaining and managing a chronic level of distracting distrust, while their corporate masters in the global banking networks continue consolidating their wealth and power?


Leopold Kohr: the Political Philosopher / Economist Who Predicted the Rise of the U.S. Empire & Police State

Who was Leopold Kohr, and does his work matter today? 

Kohr (1909 – 1994), about whom I've written at greater length here, was both a trained economist and political philosopher. His background included obtaining doctorates at the University of Vienna and at the London School of Economics, after which he observed independence movements in places like Catalonia, a region of Spain, and began to assemble both the conceptual arguments and empirical data in support of his skepticism about the growing "cult of bigness," as he called it.

Inadvertently, he predicted the rise of the post-9/11 U.S. police state. This prediction is to be found in his major work The Breakdown of Nations (1957), possibly the most interesting work of twentieth century political philosophy almost no one has ever heard of. Kohr didn't predict any specifics, of course. What he predicted was the replacement of an ideal of governance, one of the country's founding ideals, which refrained from interfering in the internal affairs of other nations, with a model based on increasing economic dominance and, when efforts at economic domination failed, aggression.

Such efforts had already begun, of course; Kohr, an Austrian who had only recently gained U.S. citizenship, may not have been sufficiently versed in U.S. history to know of Woodrow Wilson's desire to make the world "safe for democracy," or of such events, current in his time, as the CIA-backed overthrow of the democratically elected Mossadegh government in Iran, or other "revolutions" (e.g., in Guatemala) undertaken on behalf of leviathan corporations. Kohr's thesis, again: what matters is not the economic structure or ideology of a governing entity but its size, which affords a capacity to exercise power that cannot effectively be countered. This state of affairs, in Kohr's view, led inevitably to an explosion of aggression against others, or against its own people, or both at once. He called this the Size Theory of Social Misery.

Kohr’s words (pp. 70 – 72; but pay especial attention to the final paragraph):

” … [T]he United States … so far has seemed to provide a spectacular exception to the size theory. Here we have one of the largest and, perhaps, the most powerful nation on earth, and yet she does not seem to be the world’s principal aggressor as in theory she should be….

“This is quite true but … to become effective, power must be accompanied by the awareness of its magnitude. Within the limits of the marginal area, it is not only the physical mass that matters, but the state of mind that grows out of it. This state of mind, the *soul* of power, grows sometimes faster than the body in which it is contained and sometimes slower. The latter has been the case in the United States….

“After World War II … there [was] no longer a possibility of the United States *not* being a great power. As a result, the corresponding state of mind, developing as a perhaps unwanted but unavoidable consequence, has begun to manifest itself already at numerous occasions as, for example, when President Truman’s Secretary of Defense, Louis Johnson, indicated in 1950 the possibility of a preventive war, or when General Eisenhower, in an address before Congress in the same year, declared that united we can *lick the world*. The latter sounded more like a statement by the exuberant Kaiser of Germany than by the then President of Columbia University. Why should a defender of peace and democracy want to lick the world? Non-aggressively expressed, the statement would have been that, if we are united, the entire world cannot lick us. However, this shows how power breeds this peculiar state of mind, particularly in a man who, as a general must, knows the full extent of America’s potential. It also shows that no ideology of peace, however strongly entrenched it may be in a country’s traditions, can prevent war if a certain power condition has arisen….

“… [G]enerally speaking, the mind of the United States, being so reluctantly carried into the inevitable, is still not completely that of the power she really is…. But some time she will be. When that time comes, we should not naively fool ourselves with pretensions of innocence. Power and aggressiveness are inseparable twin phenomena in a state of near critical size, and innocence is a virtue only up to a certain point and age…. So, unless we insist once more that Cicero’s definition of man does not apply to us, the critical mass of power will go off in our hands, too.”

Here we are, of course, 70 years after Kohr introduced these ideas to an utterly indifferent academic audience in a world getting increasingly addicted to the “cult of bigness”: of government, of corporations, of international organizations (e.g., the UN and its satellites; the World Bank; the International Monetary Fund; etc.). For 40 of those years we continued to wear the white hats, as it were, at least in the court of public opinion, for we could point to the Soviet Union as surely the more totalitarian of the two superpowers. But from 1989-91, Soviet Communism collapsed, and we saw the prospects of an entirely new geopolitical world order, one based on peace and prosperity instead of violent conflict. Works like Francis Fukuyama’s The End of History and the Last Man (1991) tried to define this hope as well as warn of a few of its dangers. 

It was not to be, of course. Global capitalism thrived, but so did many of the problems for which global capitalism is now blamed: stagnating wages (which had for years failed to keep up with the dollar's decline in purchasing power), inequality (specifically: the slow but growing concentration of wealth and power in the hands of a shrinking elite), and environmental issues, of which supposedly man-made climate change became the most prominent. One of the more interesting consequences of Kohr's ideas is that ideologies do not "succeed" or "fail" in an intellectual vacuum, as it were, for conceptual reasons only. This is contrary to the dictates of Austrian school economics. Either capitalism or socialism can be made to work under the right circumstances; either, given different sets of circumstances, will go down in flames miserably. The imposition of capitalism in the form of Western modernity on the rest of the world failed to bring peace or prosperity outside elite circles; on the other hand, it often brought discord and resentment among common people who saw once-stable local economies, religious traditions, and their lives overturned. One reads a book such as John Perkins's Confessions of an Economic Hit Man (orig. 2004) and one realizes that via predatory corporations based primarily in the U.S., and a government whose official policies protected them from accountability, the U.S. had indeed become the very kind of aggressor it once rightly opposed. How much of militant, fundamentalist Islam, moreover, motivated as it is by hatred of the "Great Satan," is traceable to constant, chronic interference in the internal affairs of the nations in that part of the world, typically on behalf of American-based corporations seeking dominance over their natural resources (chiefly: oil)? Will this turn out to be one of recent history's great tragedies?

The Iraq War, for example, has turned out to be one of our biggest blunders ever, not to mention one of the most expensive … fought against someone who had not threatened us in any way militarily (remember: no weapons of mass destruction were ever found in Iraq), and which (alongside the war in Afghanistan, also begun by the U.S.) led the way to the destabilizing of the entire region, which continues to this day, with the Russians now involved. The fact that a power of equivalent strength to the U.S. is also involved in the Middle East may well have prevented the forcible overthrow of the Assad government in Syria (hated by the West but supported by the majority of Syria's own people): a good illustration of one of the corollaries of Kohr's main thesis: aggressive power, which will be exercised whenever it cannot be effectively opposed, is restrained only by an equal or equivalent power. It may be fortunate for the world that powers equivalent to the U.S. still exist.

So in answer to my original question above: yes, Leopold Kohr’s work does matter today. Unlike the majority of academic work, it explores timeless theses ultimately traceable to what Christians would describe as the human condition in a fallen world.

 

(If you read this article and believe it was worth your time, please consider becoming a Patron by going to my Patreon.com page and signing up. Thank you in advance.)


Dichotomous Thinking in Western Philosophy and Political Economy (An Occasional Philosophical Note #2)

If there is any trait more characteristic of the mainstream of Western philosophical thought than the prevalence of dichotomies — either-ors, one might say — it would be difficult to identify what it might be. Another useful term for the phenomenon is bifurcations.

Dichotomies or bifurcations or either-ors, whatever we call them, introduce abstractions on opposite sides of divides and suggest vast separations between their referents. The problem is, the absolute separations of traits usually do not exist in the world — even the world of experience — and the abstractions therefore typically do not apply. They accomplish little except to mislead and confuse our thinking: despite Wittgenstein’s warning, they allow the bewitchment of our intelligence by means of language.

Dichotomous thinking goes all the way back to Plato and Aristotle, whose thought provided the foundation for the main tendencies (the mainstream, I call it) in all that followed. Both distinguished between the essential and the accidental, and between form and substance. Epistemologically, these distinctions created the longstanding problem of knowing when a trait or property is truly essential to a thing and when it is merely accidental: that is, whether it must have the trait or property as a condition of being what it is, and not something else. But as our knowledge has grown and changed over time, with new discoveries abounding, what had seemed to be essential often turned out not to be, and we've had to continually revise our claims about what was essential. There seems no reason why this process could not continue indefinitely; viewing science historically, we see not a fixed body of knowledge but something that is constantly changing. From the standpoint of specific "natural kinds," we can never, it seems, truly know that a given property is essential to them, because future discoveries might void any specific claims (think of Kripkean essentialism, which wavers on this point).

However one looks at it, the essential-accidental dichotomy allows epistemological skepticism to enter through the back door — or, to avoid such, suggests a pragmatism in which “essentialist” claims are retained to the extent they solve the outstanding problems (they “work”). This dodges the, er, essential question.

What might occur, were we to eliminate not one of these or the other but the dichotomy itself?

Wittgenstein showed the way. We observe objects and classify them on the basis of family resemblances that may change depending on our purposes, which will determine our interactions with them. Our purposes will depend on the problems we are trying to solve. Inherent in this view is that we are problem-solvers, and how we construe problems will vary from person to person and from case to case for a single person. We do not need to speak the language of essence versus accident at all! (The problem in producing "clear" genus and difference definitions is not necessarily the definer's "lack of clarity" but the fact that the precise nature of what is being defined can change somewhat from case to case.)

Western philosophy has given us myriad other dichotomies, some of which cloud our thinking just as much, even to this day — even for those who believe they’ve escaped from or transcended them or set them aside.

With Descartes, the chief dichotomy is between the corporeal and the incorporeal, between mental substance and physical substance, or in today’s parlance, between mind and matter (body): the “mental” and the “material.” This has arguably been the most influential of the modern dichotomies.

"What are the marks of the mental?" (as Rorty poses the query) has been the hallmark question of that entire branch of philosophy known as the philosophy of mind, an area in which some of the best and most interesting work in contemporary philosophy has occurred, it should be noted.

Or, a more down-to-Earth variant on the same theme: How is first-person consciousness possible in the “material universe” exhibited by modern science, especially including modern neuroscience?

The question behind the question: did Descartes erect an uncrossable barrier between his two substances? Subsequent philosophy latched on to either “material substance,” in those philosophers who became materialists of one sort or another, or to “mental substance,” in those philosophers who became idealists of one sort or another. Arguably, the growing cultural power of science, which purported to theorize effectively about a material universe that existed and had the properties it had independently of us as observers and experimenters, ensured that materialism would win the day.

Materialism really is just Cartesian dualism with one of its “substances” eliminated, with whatever rationalizations or tortured linguistic maneuvers we need to make it seem credible that the “mental” really is gone: dissolved, explained, or rendered kaput.

True, the lengthy debates over such matters as “marks of the mental” and more recently, the “hard problem of consciousness” (Chalmers) have been grist for the dissertation-writer’s mill for quite a few years now … but what might occur if we eliminated the matter-mind dichotomy itself, rather than one “side” or the other?

How might we go about doing that? My suggestion is just to treat first-person consciousness as a starting datum instead of something to be “explained” in terms of that which is obviously not conscious. Phenomenology does this, with often very interesting results.

Another confused and unnecessary dichotomy: think of that mare's nest, the "problem" of "free will versus determinism." Here is a case of abstractions that sound meaningful but which invariably disintegrate whenever proper conceptual analysis is brought to bear on them. Without going into all the technicalities, "free will" has always sounded at first glance like a sensible notion, perhaps even a good starting datum or given — I direct many of my own actions in some meaningful sense, do I not? — but does the concept not imply the existence of sets of events, self-caused actions, that are outside the "causal structure of the universe"? But on the other hand, what is this "causal structure of the universe," and how do I know about it or rationally justify my claims about it? And what can it mean to say of an action that it is externally "uncaused," i.e., "caused" internally by my having willed it?

So what “caused” me to will action A instead of action B? Ultimately, when pursuing such lines of thought, eventually we should experience the sort of Wittgensteinian “mental cramp” that tells us we are not making sense.

The result, however, is that “free will” as some kind of metaphysical given makes no sense. Does that mean that some form of determinism is necessarily true? Or is it just as likely that determinism makes as little sense?

Determinism is the view that, in one sense or another (it takes more than one form, obviously), everything that occurs has an external cause, which sounds okay but invites an infinite regress unless we posit an uncaused, ontologically basic First Cause (as Aquinas understood, and Aristotle before him). Without getting into theological or other metaphysical implications, if we begin with the universal efficient causation of determinism we reach the result that there must be at least one entity that is uncaused. Thus universal determinism self-destructs.

There is, however, a problem closer to home. Still leaving aside the technicalities, to many philosophers rationally justifying anything using evidential rules supplied by logic has seemed to require ontological independence of the reasoning process from causal determinacy as a brain process, as the latter only yields "justification" as a subsequent brain event, and not necessarily a reason, as the brain event could well result in an utterance that is false. Some of our brain processes, after all, misapply evidential rules, or rely on errors of fact, and what justifies normatively correct rules over the incorrect ones cannot itself be a set of brain events, as it would not have the supervening status it needs to do its work. (Alvin Plantinga is the most important philosopher to have realized this, in his 1990s writings on the "proper function" lost under metaphysical naturalism in whatever form.)

The upshot is that not just moral agency, as Kant held, but the normative rational justification of anything whatsoever, has seemed to presuppose “free will” in reaching these justifications. Determinism seems to render the concept of rational justification either meaningless or impossible, meaning that if stipulated as true, its truth is a transcendental noumenon, incapable of rational proof or knowledge. Of what use is it to discuss it further, in this case?

So what might happen were the dichotomy of “free will” and “determinism” jettisoned altogether?

Instead of saying, “I exercised my free will,” why not just say, “I took an action” or some down-to-Earth equivalent of the sort uttered thousands of times per day? Why not just note that as our self-knowledge accumulates, we are able to identify more and more of the “causes” of our actions, whether we call them reasons, rationalizations, motivations, or whatnot? Why not note that our choice of wording here depends on factors other than purely “rational” ones? Why not note, furthermore, that some of my actions seem to be relatively free, as when I make a list of possible courses of action given a particular situation, think through the consequences of each, and choose the course that seems to have the best outcomes given my limited knowledge? Other actions appear to be determined, at least in their general outlines. I have choices of what to eat, and when (within limits involving a spouse); I cannot choose indefinitely not to eat. There are determining factors traceable to biology. To cite an obvious example, I cannot choose to fly by flapping my arms, and no motivation or course of study in the mechanics of flight is going to change that. Somewhere in here is the gender politics mare’s nest, much of which is nothing more than a denial of human biology in favor of a different set of abstract categories called “social constructs” to delegitimize biology.

Whether it is helpful or useful to dichotomize between “free will” and “determinism” has bearing on the world of political economy, in which defenders / advocates of certain ideological views of society maintain an equivalent absolute dichotomy between the “voluntary” (or “chosen”) and the “coerced” (forced or not “chosen”). Again, at first glance, what we begin with seems to make perfect sense. Trade is “voluntary” if we engage in it of our own “free will,” satisfying needs or desires that are ours.

As just noted, though, I am not free not to eat, and so I must trade for food — implying that the general idea of trading for food is not ultimately “voluntary” despite my wide range of “choices” over the specifics of how I satisfy my hunger cravings. Biology alone places limits on the “voluntary.” Are there other, more interesting limits on the “voluntary”?

Under capitalism I am not free not to work; I must trade my labor or skills for money — implying that the general idea of trading for money is not “voluntary,” although again I may have (or seem to have) a wide range of “choices” over the specifics of how I satisfy my need for money to purchase food, the roof over my head, utilities, etc. Somehow, that is, economic necessities created within civilization place limits on the “voluntary.” Further limits are created not by the mere fact of having some money but by how much money I have. I always have a specific amount, that is.

In the not-too-distant past, Stefan Molyneux (author of this rather slipshod discussion of logic) and Peter Joseph (founder of the Zeitgeist movement, which like all such movements has its strengths and weaknesses, and author of this book) debated whether civilization embodies what we might call structural coercion: economic necessity requires that one be either an entrepreneur of some sort, an employee of some sort, or independently wealthy. (View their debate here.)

My view, for whatever it is worth, is that Joseph won that debate hands down. While I might not have agreed with him on every point, Joseph sounded like he was living in the real world. Molyneux sounded like he was living in a world of political-economic abstractions which apply only under ideal conditions. Becoming an entrepreneur means bending one’s will to the wills of one’s clients or customers, because by “free will” they can go elsewhere. Entrepreneurs are not “free” to do as they please unless they begin extremely rich. Becoming an employee means bending one’s will to the will of one’s boss, because the boss can engage in an act of “free will” and fire you. Employees — ordinary laborers — are considerably less free than entrepreneurs. (No, you can’t simply quit your job if you don’t like it, if you have bills to pay and kids to feed, and if there is no readily available alternative job you can step into or entrepreneurship opportunity without a learning curve likely to occupy several months if not years, including getting the word out.)

The liberal idea that rational individualism, if applied universally and exemplified in markets, will reconcile everyone’s legitimate interests, seems highly improbable in light of modern history alone. (Unless the free-marketer cheats with language and declares that an interest not so reconciled is thereby deemed illegitimate.)

As for a core component of that debate, to libertarians like Molyneux, “society” by definition cannot coerce, only individuals can coerce since only they can act (i.e., exercise “free will”). They will say, “the state can coerce,” although “the state” as an institution is just individuals legally vested with the authority to coerce.

But what, precisely, is "coercion"? Most would agree that if a gun is pointed at my head and my money demanded, I am being coerced: a fairly standard example.

The example is surprisingly flawed, for those who always engage hypotheticals and tally up the points they’ve supposedly scored. For I am still making a “choice” to reach into my back pocket for my wallet and remove my money and hand it to the thief, am I not? The contrary choice would be to call the thief’s bluff under the belief (which may be irrational and misguided under the circumstances but is a logically possible course of action) that he will not shoot me — or maybe that his gun isn’t loaded and he just has it to scare me into complying with him. So even here, a purist could argue that I’ve made a “choice,” i.e., engaged in an act of “free will,” i.e., was not truly “coerced.”

In other words, the question of when I exercise "free will" and when I am "coerced" inevitably dissolves into a morass of confusion if we truly explore it. Is it not clear what a convoluted tangle of concepts and rationalizations such dichotomies as "free will" versus "determinism," and their political-economic equivalent, that between the "voluntary" and the "coerced," have led to?

What happens if we eliminate the dichotomies and suggest that we are looking at continuities or continuums of various sorts — matters of degree and gradation that appear better able to describe the situations that truly arise in the world we inhabit?

My being able to act “freely” or “voluntarily” is then a matter of degree, conditional on various circumstances I can enumerate, and not an abstract entity which is either present or absent, absolutely. I am considerably less free when facing the thief than I am in the outdoor market and knowing that I need to buy food and trying to decide what to buy. There are entire ranges of other circumstances in which I am somewhat “free” and somewhat “coerced,” in any reasonable senses of these terms.

It is one thing to promote "free trade" in the abstract, declaring it to have raised the "general" level of prosperity by having given "consumers" what they want. It is quite another to note the organic reality: that those who are truly freest under existing "free trade" systems are the billionaires, whose connections include other billionaires elsewhere as well as those in governments they've bought and paid for. Those who have lost jobs as a direct result of the "free" decisions of the billionaires have been made less free. The "consumers" who can afford cheap Chinese imports are often those rendered less free, pursuing the cheap imports not from "choice" but "necessity"; cheap Chinese imports are all they can afford. (I leave aside credit, originally introduced for large, once-or-twice-in-a-lifetime purchases but now used for everything, as credit spending reconciles two things contemporary capitalism needs to reconcile: mass consumption and low wages.)

My suggestion, throughout many of these notes past, and in those to come, is that we need a different sort of worldview than modernity, with materialism at its core. Modern materialism, in whatever form, is parasitic on a Cartesian philosophy, postulating a rational “ghost in the machine” within confronting the “machine” without. Philosophically, of course, that “ghost in the machine” has become less and less substantial, reduced to rational economic calculation.

Yet we approach the world as conscious agents with a variety of motivations in addition to the obvious economic ones. Thus the reduction of consciousness to rational calculation by economists seems as problematic as those efforts by philosophers to eliminate it altogether, to treat it (as Daniel Dennett does) as in some sense an illusion. I confess to a longstanding fascination with philosophies, from Berkeleyan subjective idealism down through phenomenology, that take consciousness as their primary datum instead of something to be "explained" in terms of something manifestly not conscious ("matter"), which typically means explaining it away. Refusing dualism beginning from this standpoint, which takes first-person consciousness as given, may mean it is "matter" that needs to be eliminated! This, of course, will astound or merely bewilder the materialists accustomed to dominating the philosophical conversation.

But in physical investigations of "matter," what do we discover? As contemporary subatomic physics has penetrated deeper and deeper into the "nature" of the atom's components, their components, etc., it has moved ever closer to states of affairs required exclusively by initial conditions and mathematics. Mathematics consists of relations of ideas, however complex and intricate. Relations of ideas are not autonomous entities, unless we try to postulate a kind of Platonist realm for them to exist in, and obviously we don't get anywhere by doing that. The experimenter in contemporary physics, moreover, affects the outcome of the experiment, as has been known since the days of the "Schrödinger's cat paradox," and so cannot be dichotomized as apart from it. In this way, again, a meaningful dichotomy between "consciousness" and "the material world" breaks down. Perhaps our effort should not be to eliminate "consciousness" from the equation but to eliminate "matter" as having meaningful ontological independence.

And this may be what some “lovers of wisdom” of an immaterialist bent have been trying to say all along.

Bringing back our discussion to its point of departure: if there is a single criticism to be made of the central strains of Western philosophy, it is that it long ago became lost in intellectual tangles caused by its insistence on dichotomous thinking as a general methodological rule. The world, however, is organic and networked, which is why our abstractions usually fail to grasp its richness and complexity. We are, moreover, part of the world’s organic and networked nature, not entities standing above it, or outside of it, and reasoning about it abstractly. The materialists have that much right; they just draw the wrong conclusion from it.

I’ve only enumerated a small, select handful of dichotomies here, the ones that seem to me to have been the history of ideas’ worst villains. There are plenty of others, some of them a tad dusty and academic (e.g., between the “analytic” and the “synthetic,” a linguistic descendant of Plato’s “essential” and “accidental”). I’ll leave readers to think about what some of those other dichotomies might be, whether some are worth keeping around in some form while others can be safely gotten rid of, and in each case, why.

Posted in analytic philosophy, Language, Libertarianism, Philosophy, Political Economy | 1 Comment

Globalism: Optimism, Pessimism, and Dystopian Visions (Part Three, Plus References)

In replying to Dean Allen’s remarks (Part Two), I had intended to restrict myself to a few points, but fear I have instead written another book (lol) and can only hope readers will bear with me. While part of me wants to apologize for the length, I do believe there is only one way to approach these issues, and that is substantially, in detail, and with supporting examples. Let me first explain what doubtless looks like “pessimism.” First, pessimism is often where you find it. It is a matter of perspective. Globalists convinced that their worldview is right in all its essentials (they may be struggling with some of the details), and that There Is No Alternative (TINA), are the supreme optimists. They believe that globalization, guided by them, will one day deliver Utopia. Both of us disagree. Neither of us, therefore, wants to continue down that road. To globalists, though, disbelief in their coming Utopia, involving as it does the de-industrialization of the First World, outsourcing jobs to cheap-labor countries, insourcing immigrants for cheap labor, open borders (the primary cause of the present mess in Europe, mixing mutually unassimilable European and Muslim cultures), etc., is pessimism about the “liberal world order” they have been building for the past 70 years, and which seemed to be doing fine at one time!

I am pessimistic about secular political-economic solutions, some of which I have also participated in, seeing first-hand the infighting, the jostling for position, the impatient demand for instant results, and sometimes the just plain pig-headedness of those who can’t admit that what they are doing doesn’t work in the real world. This brand of pessimism stems from my sense of the basic sinfulness of the human condition (Rom. 3:23). While disagreement over theological specifics and their consequences is possible, I no longer understand how one can reject the basic idea that we live in a fallen world in the Christian sense. With this you have a common denominator: an explanation of why every attempt we have made to organize ourselves politically / economically has ultimately failed, even if some failures are more spectacular than others. We indeed formed a political system in the original Constitutional Republic that became the envy of the world, but compromised it very quickly in ways I documented in my book Four Cardinal Errors (Yates 2011).

The truth is, we seem unable to build political-economic systems that don’t shaft somebody. Go back 175 years and you have slavery, which shafted the black man (even if it is wrong to blame that unhappy state of affairs solely on whites, as both Muslims and blacks themselves were involved in the international slave trade). Let us realize, moreover, that we only got rid of chattel slavery, which is not the only form slavery can take. Today it’s straight white Christian males who are being shafted, systematically demonized in media and academia. Tomorrow it might be somebody else — any group is vulnerable so long as some are able to soar ahead of others and collectivism remains the prevailing ethos.

My “pessimism” in this sense makes me doubtful of the wisdom of trusting wholeheartedly in any one person, such as Trump or anyone else; or one party, such as the GOP; or even one political system (so-called democracy, which is really oligarchy — see Part One); or the prevailing economic system (call it capitalism or call it something else). Principles are too easily compromised by money and power. Abstractions fail because they are usually too simple and streamlined when applied against the organic complexities of the real world and real human beings and their inscrutabilities. All of which leads to why I concluded some time ago (and maybe I should do more posts to emphasize this and work out the implications) that our only ultimate hope is in Jesus Christ as our personal savior and in winning souls for Him.

To the extent one believes in the basic tenets of Christianity, one is actually an optimist and not a pessimist at all. This brand of optimism places its hopes and faith not in this world but in the world to come. It does not suggest sitting on the sidelines, fatalistically, and allowing false premises and bad tendencies to go unchallenged. But it does urge us to temper our expectations with modesty and realism. The majority of our secular efforts are bound to fall short because human nature is fundamentally at odds with that which is principled. We should instead remember that “these all died in faith, not having received the promises, but having seen them afar off, were assured of them, embraced them, and confessed that they were strangers and pilgrims on the earth. For those who say such things declare plainly that they seek a homeland. And truly if they had called to mind that country from which they had come out, they would have had opportunity to return. But now they desire a better, that is, a heavenly country. Therefore God is not ashamed to be called their God, for He has prepared a city for them” (Heb. 11:13-16).

Let us turn to corporations. Keeping in mind that I am talking about globe-spanning entities, not, e.g., supermarket chains with limited reach, Mr. Allen believes we should have more trust in them than my statements evidence. Following up what was just concluded, though, corporations are run by human beings and subject to whatever sinfulness human beings are prone to, which may be aggravated by both a materialist philosophy of life and large amounts of money to support it. When deciding when or whether to trust corporations, this general point must be considered before we get to any specifics. The so-called private sector doesn’t have a special, sin-free status just because it’s the private sector and supposedly subject to the “discipline of the marketplace,” whatever that is. The globalist power elite originated within and grew up around private banking leviathans, as Carroll Quigley observes (Quigley 1966). I believe not trusting corporations with global reach is justified. To this extent I disagree with Republicans and Libertarians who trust corporations indirectly, out of trust that their favorite abstraction, the “free market,” will control them with Adam Smith’s “invisible hand.”

We need specifics, of course, and they are readily available, in some cases based on personal experience and direct observation. I will cite my own, and I doubt my situation was unique. When managing the affairs of my aging parents, both of whom spent their final years in a private nursing home because they needed round-the-clock care I could not provide, it became clear to me how much of what goes on in such facilities occurs so that corporations can make money. Well, that’s capitalism, some will respond. If so, then “actually existing capitalism” (is there another?) really does have some of the structural faults the “economic left” (not to be confused with the “cultural left”) attributes to it. For starters, the emphasis of so-called “scientific medicine” is not healing but managing chronic conditions using legal drugs. This enriches the multibillion-dollar pharmaceutical industry (Big Pharma). Many of the drugs were “approved” under dubious conditions, with Big Pharma corporations bankrolling “studies” that “proved” the drugs to be safe. When we look at how this happens, a critical thinker is bound to see supposed scientific objectivity compromised by vested interests over and over again (Fitzgerald 2006).

There is truth to the adage that we do not have a health care system but a sick care system — because managing chronic conditions is extremely profitable! Which goes to the heart of why the most advanced civilization in the world is also now the sickest, most drug-addicted, most lethargic, most obese, etc. Unhealthy food is also extremely profitable because nutrition is not taught in schools (private or public); hence the uninformed masses have no idea they cannot live indefinitely on fast food and diet drinks without this eventually damaging their health. They spend their money accordingly. “Health care reform” has never been about health, or healing, but how the accelerating costs of managing chronic conditions are handled. This mess is systemic. I do not want to say that “socialized medicine” is the answer, because (in accordance with earlier paragraphs) that will surely come with its own train of abuses. But what we can note is that a genuinely healthy population does not need to spend money on doctors, clinics, hospitals, pharmaceuticals, etc. Most doctors do what they are trained to do, which is identify chronic conditions and prescribe drugs to manage them. Most are clueless about nutrition-based health which they consider pseudoscientific “quackery.” The truth: good health doesn’t fatten corporations’ bottom lines.

Again, I saw first-hand evidence for this. At one point, the nursing home doctor (who made two visits to the facility per week!) had my father on a narcotic, presumably to control his reactive confusion caused by vascular dementia. I told him in no uncertain terms I wanted my father off of it, and threatened to put both my parents in a different facility. Interestingly, my demand was honored and not countered by an argument that the drug was necessary for such-and-such. Both of my parents were overmedicated, and it is just possible that too much medication shortened their life spans. In the end, during the last six weeks of my father’s life (spent “in hospice”), the facility took him off all his medications. Result: for those six weeks we basically had him back! He was his old self again, asking how the stock market was doing, how my classes were going, those sorts of familiar things. My mother had parallel issues, including a blood problem possibly related to her having been on blood-thinners since a 1999 stroke. The only “cures” appeared to be transfusions and, of course, more drugs!

I have no reason to believe my parents’ cases are somehow unique. It is likely that there are millions of overmedicated people in nursing homes across the U.S., and that tens of thousands of prescriptions are written unnecessarily every year, forcing patients, many fearful and therefore not thinking critically, to spend money. One of the biggest causes of preventable deaths in the advanced world is the medical / hospital system itself. There is overmedication; there is physician error, including false diagnoses sometimes the product of ridiculously long shifts which can run to 12 hours; and there are hospital-acquired sicknesses, e.g., from bacteria such as MRSA, which kills more people every year than AIDS. My sister died in early 2016 from complications related to an MRSA infection she acquired during an earlier hospital stay. She was only 54.

For some history of how we’ve allowed the “artificial” to triumph over the “natural” in food, public health, and medicine over the past 100 years, with the primary benefits going to private corporations, often to the detriment of our long-term health; and how we have been taken to the cleaners by decades of pseudoscientific “studies” supposedly showing nonexistent benefits of chemical additives to foods, as well as insidious practices such as factory farming, all in the name of the Almighty Dollar, I recommend Randall Fitzgerald’s The Hundred-Year Lie (Fitzgerald 2006).

That’s just one example of why people who want to be free shouldn’t trust corporations, or politicians backed by corporate donors (the mainstreams of both the Republican and Democratic Parties), any more than they should trust expansionist governments. There are plenty of others. John Perkins (2004) described how his employer, a private “consulting firm,” sent him into so-called Third World nations to cajole their leaders into accepting massive IMF loans to build up infrastructure and move in the direction of Western mass consumption. They did so, and the money went not to the country but to huge construction firms. The countries, as the nominal recipients, found themselves in massive and often unrepayable debt that could be used to control them (e.g., forcing them to allow U.S. military bases on their soil), while also providing another means of making money, via interest payments, for the global banks that ultimately back such loans. Local political elites could do very well if they cooperated and were willing to sell out their own people. If the people elected a “populist” who saw what was happening (that a corporate noose had been drawn around his country’s neck) and who began to drag his feet, the country experienced a “coup,” or the man himself had an “accident” such as a fatal plane crash. Someone “more reasonable” would then be placed in power. Has this happened? Ask informed Panamanians. Or Ecuadorians. Or, for that matter, many Chileans, who will tell you this kind of thing was going on here in the 1960s and explains why they tilted left and elected Salvador Allende president in 1970. They saw just two options and chose the one that seemed most likely, by kicking out foreign interests, to return the country to autonomy. (It didn’t work, but that is a long story.)

This is because common people of whatever stripe resent reaping only the most limited benefits of adopting Western mass consumption while corporate elites on another continent get rich. Note too that indigenous economies and cultures are frequently ruined by this process (cf. again Norberg-Hodge 2009). And let us remember, finally, that most of these cultures never had the (somewhat) financially independent middle class that developed in America beginning in the 1950s and whose fortunes arguably were sabotaged in the 1970s. It had become harder and harder for the power elites — the ownership class, if you will — to keep themselves walled off from the expanding middle class and its rising influence, still more from its children, and to prevent them from challenging and conceivably hijacking the elite agenda for the world, as the young did with the Vietnam War (for example). The ongoing destruction of the U.S. middle class has not come exclusively from the political class, although it carries its share of the blame. Where is the locus of power? Nearly every Libertarian and Republican gets this wrong. It is with corporations, not governments, because corporations have the money; which is why we can speak of corporatism as the prevailing economic system of our time (the term neoliberalism is often used, but I believe it is less informative). With enormous corporate donations to establishment politicians absurdly brought under the umbrella of “free speech” by the Supreme Court’s disastrous Citizens United decision (2010), corporations had gained more influence over the U.S. political system than ever before, until 2015-16 saw the fruits of the grassroots revolt that sent Donald Trump to the White House, whether for better or for worse (the jury, to my mind, is still out).

A final example, much closer to home for all of us who think of ourselves as truth-tellers, possibly through personal experience yet again: the war against so-called “fake news,” which is really a war against those of us not beholden to the corporate media leviathans. Who are truth-tellers? Those who see through, and hence operate outside, the premises of the secularist, materialist, corporatist, globalist 20th-21st century mainstream, which (naturally) controls the bulk of media and academic resources, and so can portray truth-tellers as purveyors of “fake news,” as “conspiracy theorists,” or as “neo-Nazis,” or by swinging any damn linguistic club they please! My point is that the war against “fake news” is again primarily corporation-driven, only this time we’re talking about media and tech leviathans. The Washington Post, which started this “fake news” meme last November with an unsourced article referencing a group whose members didn’t even identify themselves, is privately owned (by tech globalist Jeff Bezos of Amazon.com fame). Google, Facebook, etc., have redesigned their “algorithms” to reduce access to certain content; in Google’s case, tweaking its search engine so that alternative news sites simply do not come up in searches, which, realistically, are usually limited to a couple of pages of links at most. Result: by the end of the first quarter of this year we saw a dramatic drop in traffic to alternative websites — including the one I write for semi-regularly, NewsWithViews.com. The idea that something called “the free marketplace of ideas” has been at work here appears to be another myth, because although you and I are smart enough to go to these sites if we want truthful information, Joe Six-Pack typically just follows what his newsfeeds give him.
His Internet usage is passive, not active, and so he is easily controlled using incentive systems psychologists (also bankrolled by global corporations and foundations like Ford and Rockefeller) worked out long ago. This has cleared the way, at least somewhat, for media corporations to try to regain some of the credibility they lost last year by (e.g.) predicting that Hillary Clinton would defeat Donald Trump in a landslide.

Is my vision as expressed in Part One dystopian? I plead guilty. But from some perspectives, a long-term “decline of the West” wouldn’t be all bad, and so shouldn’t be mechanically labeled “pessimism.” As with any major development, it would have pluses and minuses, and even those depend on your point of view. The pluses might be healthier food produced locally and not filled with preservatives, cleaner air and water, and the possibility of less stressful lifestyles (stress itself being unhealthy). The biggest minus might be the fact that millions of people will have to rediscover what work is, understood as physical labor as opposed to sitting behind desks tallying up numbers (or whatever) on computer screens all day. A post-collapse economy, after all, will most likely be agrarian, and this will mean: farming. If this be dystopia, make the best of it.

Truthfully, however, we are probably at least two, and possibly three, decades away from any such state of affairs on any large scale. Mass consumptivism definitely has the upper hand, and has widespread support because of the ease it brings, at least until it makes you sick with a chronic condition, or until the avalanche of unsustainable debt destabilizes the economy.

Where do we go from here? My first suggestion is to embrace a Christian worldview. I am at work on a book about worldviews entitled What Should Philosophy Do? which argues that what it should do is recognize, formulate more precisely, and where necessary, challenge dominant worldviews. I want to kick open the door to a viable metaphysical pluralism, or pluralism about worldviews, and challenge the supremacy of a single worldview, that of materialism, over Western political economy and Western culture. One may, of course, bypass pluralism about worldviews and simply form Christian parallel institutions, if one can fund them.

Second, work piecemeal at minimizing one’s contact with, much less dependence on, both governments and global corporations. I am not a tech person and do not want to be (I may use Facebook but rarely use Twitter, and do not have WhatsApp … owned by Facebook, which I definitely do not trust). There is abundant evidence of how social media are distorting our social fabric, and possibly worse. Instead, work at forming and nurturing communities of the like-minded who are working toward greater local autonomy and self-sufficiency, keeping in mind that this is a process and not a single decision or even strategy. Keep in mind that some of these efforts are bound to fail, as did Galt’s Gulch Chile. From failure we learn. It has taken decades of delusions about the possibilities of secular civilization and mass consumption culture, of educational dumbing-down (which, arguably, systems based on mass consumption and obedience to authority require), and just plain complacency and negligence to get us into this mess. Assuming the present globalist system holds together, it might take decades to get out of it by creating viable, stable independent communities.

There are topics I haven’t taken up, such as Mr. Allen’s comments on race. It does seem to me that those of us designated “white” built something important: Western civilization, with its sciences, its advanced technological and other systems, and its many creature comforts. No other race can claim such achievements. Whether this is something specifically “white” or whether it is attributable to other factors is the question, and as we’re pushing at the limits of people’s attention spans here, I am sure, I would prefer to save this discussion for another occasion. So I will just submit the point made by historian Rodney Stark (2005): that without Christianity in place first we would not have had modern science, which presumes as its starting point the intelligibility of the universe to the human mind, the universe having had a rational Creator; or the idea (integral to a capitalistic economic system) that its raw materials can be brought under our control and transformed into useful goods; or, finally, the Enlightenment conception of human rights, which takes as its point of origin the idea that we are created in God’s image. These being ideas, nothing makes them race-specific. Western civilization, as it has developed and secularized itself, has moreover moved away from them, proving to be a mixed bag, with its obsession with growth, which meant imposing mass-consumptivism on the rest of a sometimes-reluctant world. Is there, finally, nothing to be said about the state of affairs in which a group of people small enough to fit into a high school auditorium owns or controls more wealth than the entire bottom half of the world’s population? I don’t think one need be a collectivist, or an egalitarian, or some kind of socialist, to believe capitalism has gone off the rails when it yields such results, which are bound to provoke resentment around the world, including at home (for one example see Mishra 2017).

Summing up in this case: I am not a globalist, because I do not see the likelihood of a world as diverse as ours being pulled under a single political-economic order without increasingly authoritarian controls — already in place if one knows what to look for. I am only a pessimist when viewed through the secularist lens, because I do not believe in the long-term viability of secular solutions to human problems. I am ultimately an optimist, because I believe in a Kingdom to come. But I have no timetable for its coming (I am not the sort of Christian who expects to be “raptured” off the world any day now!). I am open to the possibility that we are in for a rough ride in the short haul, especially if the Deep State reasserts itself, or if such measures as the tax bill just passed by Congress and on its way to President Trump’s desk indeed turn out to be a Christmas gift for the point-zero-zero-one percent, as its critics insist. Therefore, to the charge of having dystopian visions, I again must plead guilty.

 

REFERENCES FOR PARTS ONE, TWO, AND THREE

Dean Allen. 2012. Rattlesnake Revolution: The Tea Party Strikes! Hill-Pehle Publishing.

Randall Fitzgerald. 2006. The Hundred-Year Lie: How Food and Medicine Are Destroying Your Health. Dutton.

Francis Fukuyama. 1992. The End of History and the Last Man. The Free Press.

Martin Gilens & Benjamin I. Page. 2014. “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens.” Perspectives on Politics Vol. 12, No. 3, pp. 564-581.

Helena Norberg-Hodge. 2009. Ancient Futures: Lessons from Ladakh for a Globalizing World. Sierra Club Books.

Pankaj Mishra. 2017. Age of Anger: A History of the Present. Farrar, Straus and Giroux.

John Perkins. 2004. Confessions of an Economic Hit Man. Berrett-Koehler Publishers.

Carroll Quigley. Orig. 1966. Tragedy and Hope: A History of the World in Our Time. Macmillan; GSG & Associates.

Chuck Schumer & Paul Craig Roberts. 2004. “Second Thoughts on Free Trade.” New York Times, January 6.

Rodney Stark. 2005. The Victory of Reason: How Christianity Led to Freedom, Capitalism, and Western Success. Random House.

Steven Yates. 2011. Four Cardinal Errors: Reasons for the Decline of the American Republic. Brush Fire Press International.

Posted in Christian Worldview, Election 2016 and Aftermath, Political Economy, Where is Civilization Going? | 7 Comments

Globalism: Optimism, Pessimism, and Dystopian Visions. (Part Two.)

It gives me a certain gratification to be able to introduce the first post on Lost Generation Philosopher the bulk of which was written by someone other than myself. Dean Allen — author (see Allen 2012*), longstanding Republican Party activist and proud Tea Party member from South Carolina, my former home state, someone whose intellect I respect and who respects mine — penned the following in response to a slightly earlier version of what appeared two days ago as Part One. Why is it here? Because I thought it was a well-intentioned and fair-minded critique, and because many of its points are worthy of further discussion. (In other words, one may note this as exemplifying respect for Free Speech!) I worked out a response over a period of two days, which was rejected by whatever algorithms are in play on Facebook; whether because the post was too controversial or simply too long and contained too many big words, I cannot say. I decided to bypass Facebook, and my response will appear in two more days as Part Three.

In case this discussion seems unusual for a philosophy blog: we would agree, I think (Dean and I) that philosophy is nothing if it does not or cannot address real world problems at some point, using the analytic tools at its disposal which I’ve discussed in previous posts. As a philosopher who is not a professor and will never see tenure, I am not bound by the limitations professors face these days, which is why I feel free to discuss globalism, “conspiracies,” etc., as I find them, rather than kowtow to institutions and a profession that was neutered years ago. Professional philosophy may be recoverable, but only if its worth can be rediscovered outside the controlled environment of academia.

In any event, that being a longer story than I can get into here, we continue with Mr. Allen’s post. The only changes I made were to put in paragraphs (which he requested, as it is inconvenient to do on Facebook) and smooth out some wording here and there in ways that hopefully left the meaning unaltered.

 

DEAN ALLEN to STEVEN YATES:

 

Steven, you fail to address another possible dichotomy. While you and I share a common background and many philosophical values, there is still a wide gulf between us. You are correct that it is not racial (we are both white), and not entirely partisan (I concede a leftist element inside the GOP we have not yet expelled).

However, there is a huge gulf between us that results in your frequently dystopian diatribes. Yes, when it comes to being a pessimist, you are as strident, orthodox, and hidebound a pessimist as ever put pen to paper. You have made an honest effort to avoid racial and partisan language where such language stifles honest communication and intellectual discussion. You have tried, with less success, to avoid any left-right dichotomy. I say with less success because success in that endeavor is quite impossible if we are going to understand and respond to the differences between individualism and collectivism. We are on the same side there, both being individualists; therefore that is not our fundamental difference.

The difference between us is that you are a pessimist, a defeatist, who has already surrendered to a very dystopian world view. That could serve you well if you made your living writing fiction of the Mad Max variety, rather than teaching philosophy. I, on the other hand, have a much more optimistic view of the future of both the United States and the world. You have heard the old adage that a pessimist laments an adverse wind, an optimist looks forward to the adverse wind changing, while a realist merely adjusts his sails to the prevailing wind. The latter course is, in reality, merely a more pragmatic optimism. You must first be an optimist to believe it is possible to adjust your sails to the wind, and you must believe it is possible before you are able to do it.

We who are the optimistic individualists made a fantastic leap forward last year when we nominated, and then elected, Donald J. Trump to the presidency! Since our president is, by intelligent design, not a monarch with absolute powers, President Trump cannot simply order things put right and have it happen. However, he has made great strides in the past eleven months and I look for a lot more in the next seven years. Nor does my optimism rest upon one man or one political party. Brexit has shown nationalism and individual rights to be ascendant in our mother country as well. Marine Le Pen and others have demonstrated the spirit of freedom is also alive and well on the continent of Europe. While our European cousins have not enjoyed as dramatic a change as the Trump presidency, they are making great progress in the right direction.

Your morbid fear of corporations, like your fear of organized political parties, is misplaced. The greatest existential threats to western civilization do not come from Democrats (in reality communists now), Republicans, or corporations. The greatest threats come from the twin terrors of Islam and the benign acceptance of racism against white men. The enemies of freedom have long understood they must destroy white men and the civilization we built if they are to erect something else in its place. For 60 years the defense of the white man was relegated to the sorriest examples of white men and to organizations devoid of both intellectuals and political strategists. These would-be defenders of white civilization were crippled by the poison of anti-Semitism, a fatalistic view of religion, and hatred of other races based on the same flawed Zero Sum Game philosophy as the far left’s. The reality is, defense of the white man does not require attacking or tearing down other racial groups. It does require acknowledging and accepting very real differences. Whether we white men are “superior” is a subjective determination. That we are different, and have produced unique results in the world, are factual matters we have been too timid to defend. The late Ayn Rand once lamented our lack of attention to epistemology, pointing out that when we allow our adversaries to define our language and the terms we use, we have already lost.

Steven, the time has come for another revolution. Not merely a revolution designed to throw off the yoke of a tyrant by force, but also an intellectual revolution: one where we retake and assert the fundamental truth that the civilization built mostly by English and Irish white men is indeed superior to every other system that has been devised by any group of men in history, anywhere on earth. Ancestors of our British cousins gave the world the English language, common law, and the free enterprise system over a period of a millennium and a half. We here in America, including my own Irish ancestors, refined and perfected those foundations of civilization. We can defend, and are defending, the virtue of the American system. You may question a claim of racial supremacy. What cannot be questioned is the fact that one race, white men, almost exclusively British and Irish, produced on the North American continent a system of government that definitely has been superior to everything else ever produced anywhere else. People of all races within our geographical borders have benefited from our superior system, but only to the degree they assimilated into our unique American culture. As we reassert the supremacy of our culture, we are driving away the dystopian nightmare you fear, making it less probable with each passing day.

*I reply on Thursday, December 21, 2017. All bibliographic references will be listed in the usual place at the end.

Posted in Election 2016 and Aftermath, Political Philosophy, Where is Civilization Going? | 1 Comment