Truth-Teller’s Dilemma, Part 3

For Part 1, click here.

For Part 2, click here.

Most so-called journalists today, moneyed “professionals” whom Paul Craig Roberts bitterly calls presstitutes, would not know the truth if it walked up and hit them. Roberts, who served as Assistant Treasury Secretary under Ronald Reagan and afterwards as an associate editor of the Wall Street Journal, was excommunicated from the mainstream in 2004 for questioning free trade orthodoxy, on the grounds that changed technology has forced corresponding changes in how corporations operate since the days of David Ricardo. Comparative advantage, he argued, has been replaced by absolute advantage due to the present-day mobility of capital and its capacity to shift operations to third-world nations where labor is cheap and environmental regulations are virtually nonexistent.

Roberts was attacked furiously by evangelists of free trade. It was his last appearance in a mainstream publication. He was, indeed, making assumptions: that laws regarding minimum wages and the environment serve a purpose, because, contrary to the evangelists, capital is not self-regulating. Roberts has since questioned several other mainstream official narratives, from 9/11 to the supposed killing of Osama bin Laden in Pakistan to the recent supposed chemical weapons attack in Syria. He also relies on boots-on-the-ground sources which almost inevitably conflict with official narratives.

His website was listed by PropOrNot as a source of “pro-Russia propaganda.” It, too, is struggling.

One reason Donald Trump won in 2016 was his command of media — all kinds. He bypassed the narrative-manufacturing “experts” and talked directly to his base — whose unhappiness with the cultural left and with the globalism dominating the GOP mainstream was manifest, made concrete with disdain for job outsourcing, illegal immigration, open borders, etc.

Hence a determined effort to take media back.

All of it.

A very lopsided battle of wits and wills is going on before our eyes: lopsided because we have excommunicated and crowdfunded or self-funded truth-tellers out here in the boonies going up against billionaire-owned, well-connected enterprises, from Google to The Washington Post and CNN, in the centers of power and opulence.

A fuzzy center-left mindset controls the “brick & mortar” media leviathans, as it has for decades. This mindset gets warm and cozy with “diversity” and its social engineers, then turns scathing about what bad men Vladimir Putin and Bashar al-Assad are. This serves the interests of power: the central banks and other financial centers, organizations such as the Council on Foreign Relations, the CIA and other Deep State entities, Israel, and the global corporations that profit from war and free trade evangelism. Meanwhile, well-paid “economists” supply the public with scare tactics about “protectionism” even as the Communist Chinese government protects the industries it owns!

An equally safe cultural left mindset controls the social media and technology behemoths. They would claim that the bulk of their young users accept identity-political mores, and with technology being global and therefore diverse, adopting them is simply good business. Their “good business” practices (as with those of many others) also play into the hands of those whose long-term goal, we should never forget, is establishing a corporation-controlled world state.

Some people believe such an entity is inevitable, and would be good. I think not! Under its watch, financially independent middle classes would be impossible. Very likely there would be stringent licensing of would-be entrepreneurs involving up-front fees most could not afford, leaving working people able only to trade time and labor for money, working ever longer hours for money with diminishing purchasing power: the actual new serfdom. The dollar would probably be replaced by IMF special drawing rights as the world’s reserve currency, which would rapidly drive down the standard of living inside the U.S.

Large corporations have never wanted competition, which John D. Rockefeller Sr. is alleged to have called “a sin.” Whether he really said that or not, it is known that he maneuvered to establish maximum control over the oil industry, while other corporate titans of his time established controls in other industries (Carnegie in steel, Vanderbilt in railroads, etc.; the latter, incidentally, would only carry Rockefeller oil).

For a long time, of course, upward mobility in many areas of the economy was possible through determination and work, alongside safety nets and a few sensible regulations on big business, but then we got the 1960s: a mixed bag of tricks by any description. Identity politics may have got its start in that decade with the writings of cultural Marxist philosophers such as Herbert Marcuse, but so did a great deal of political-economic enlightenment. The Deep State of the time was forced to scrap an unpopular war (Vietnam) its denizens had wanted very badly.

Since then, as part and parcel with the rise of the neoconservative-neoliberal axis, we’ve seen the rise of free trade ideology and open borders policies which are wrecking European nations even as I write: manifestations of kleptocrat globalism that have been fundamentally destructive of middle class existence. The latter depends on stable jobs with upward mobility within a relatively stable cultural environment, as opposed to outsourcing labor to third world countries or importing unassimilable populations from such places.

Precarious employment thus rules the day. Look at academia, which is self-destructing. Fully 70% of new faculty jobs are part-time, “adjunct” positions — while some university presidents are paid seven figures and the head football coach is usually a millionaire. The irony of academia is that while humanities / liberal arts fields have sunk ever more deeply into identity politics, the institutions themselves have become more and more corporatized. The fact that many students have gone into debt “voluntarily” and are graduating with five and sometimes six figures of student loan debt is proving to be an immense boon to their being controlled systemically. You cannot exactly protest the corporate state’s wars of choice when you cannot afford to move from under your parents’ roof due to massive debt and a lack of decent employment.

For “temp jobs” have become the norm in many arenas, not just academia. It’s called the gig economy. This represents control over labor itself, as precarity means uncertainty about one’s future and an inability to plan rationally. Forced into daily worry over their own situations, precarious laborers (Uber, anyone?) are less likely to protest the status quo and will probably not have time or motivation to watch what the kleptocrats are doing.

We are in an era not of mere “inequality,” but one of an unprecedented consolidation of wealth and power at the top, alongside the systemic destruction of conditions for upward mobility and financial independence on any large scale.

The cultural left misses most of it by gender-bending and swinging at windmills of generalized “white privilege.”

The right, including most Trump-supporters, is also missing something important: the impetus to replace human workers with robotics — the ultimate attack on employment from the top, done in the name of “market forces.” For far more jobs are threatened by changing technology than by immigration, legal or otherwise.

These sorts of things are what we should be focusing on, and the Trump camp should be focusing on them far more than it is.

I fear, though, that the cultural left dividers and other sources of theater are winning the day.

The truth again: the kleptocrats want us divided: blacks against whites, women against men, secularists and Muslims against Christians, anti-gunners against gun owners, etc. Divide and conquer is the oldest rule in the book; actual and would-be totalitarians have used it for centuries. Racism was created and fomented by corporate-state oligarchs of post-War Between the States America who feared that newly freed blacks and underprivileged whites would find common cause and form a populist alliance against them. The KKK came out of the conditions this mindset created.

Cultural leftists will never grasp this. The Real Matrix has them.

Divider ploys are in evidence everywhere today: me-too feminism, the high school kids who suddenly became experts on gun policy, (probably true) allegations against Trump from a porn star and other women of the same ilk, mainstream media providing hours of hysterical coverage of it all and enjoying the ratings as the dollars come rolling in.

All theater.

The Trump administration faces real, existential threats. The most obvious is Robert Mueller’s hunt for collusion between Trump campaigners and Russia, which is morphing into a far more plausible consolidation of evidence that Trump obstructed justice when, e.g., he fired James Comey early last year. Other existential threats are subtler until they strike, like rattlesnakes. There have been snakes under Trump’s nose all along. One leaked an internal memo to The Washington Post following Trump’s call to Russian President Vladimir Putin congratulating him on his reelection.

The memo said: Do Not Congratulate!

Were I Trump, I would have been livid!

Were I Trump, I would not have signed that $1.3 trillion omnibus bill he signed on March 23, which brought upon him justified criticism from real conservatives as it gives plenty to Democrats and other countries in the form of foreign aid but apparently does not specifically appropriate funds for a border wall. We do not know all that it contains, because as Trump pointed out, again and obviously, no one can claim to have actually read its 2,000-plus pages.

It’s easy to believe we’re right back where we started, with out-of-control federal spending.

Trump’s promise never to sign such a bill again rings hollow. Maybe he won’t — until the next time.

More recently, Trump appears to have fallen hook, line and sinker for allegations of a chemical weapons attack at Douma, Syria, which in all probability did not happen as alleged.

Is the Trump administration itself part of the theater?!?!

There are long-term trends Trump should be paying attention to but apparently isn’t: for example, the national debt that just crossed the $21 trillion threshold, with no end in sight (total indebtedness is much, much higher, of course). Trump’s pick for Federal Reserve chair, Jerome Powell, is another mainstreamer whose political-economic philosophy does not differ significantly from that of Yellen or Bernanke or even Alan Greenspan who gave us the original “bubble-nomics” of the 1990s. This was a disappointment, as Powell’s and other central bankers’ machinations could precipitate economic ruin on a level that would make 2008 look like a cakewalk by comparison.

Trump’s enemies would see to it that he was blamed. I am very much unsure that he, a businessman and straight talker trying to operate in that rattlesnakes’ den known as Washington, D.C., understands even now what he’s up against.

Globalists do not believe in de jure national borders, and they are not above setting up someone who does for personal, political, and financial ruin. They do not care if the U.S. bankrupts itself, so long as (1) whatever disruptions ensue are manageable (using corporate media to whet fear and encourage compliance with the authorities in the name of security and safety has always worked in the past); and (2) the U.S. war machine emerges unscathed!

It needs to be made crystal clear: this administration, as understood by its base, is all that stands in the way of a full-throttle return to the steady march toward a corporation-controlled world state.

The Bush-Clinton-Obama axis was/is on board with that program, which is how they got away with murder — sometimes literally!

Some believe Trump is now on board with it, as he wants his presidency to survive. Perhaps he has been on board all along. What he signed right before Christmas last year contained an uncomfortable quantity of Christmas presents for corporate elites!

[Author’s note, April 17-18: in light of the carefully orchestrated attack on Syria which took place on April 13-14, the likelihood that Trump has been fully compromised now seems many times higher than it did when I wrote the above in late March, as his own remarks delivered Friday night, April 13, were the remarks of one who has gone full neocon. Second, even this assumes Trump was sincere from the beginning, which this throws into doubt. Third, I have not changed my position on the italicized paragraph above, which means that apparent superficial improvements in the U.S. economy will not last; we can safely predict another downturn for which Trump will be blamed as 2020 approaches whether or not he is able to remain in office. Then, during the 2020s, it will become evident to everyone with eyes to see that the U.S. empire has sunk into a terminal, irreversible decline — again except for its military machine.]

It is time to wrap up this discussion. Where do we go from here? What can we do? is a question I am sometimes asked, as if I could press a magic button and this thing I am typing on would spit out a response that applied to everyone.

I speak only for myself. What you do is entirely your choice. I do not know your circumstances and cannot control your choices, including your choice of worldview. There is abundant information on what will offer you wise guidance. I’ve made my case for one worldview (cf. also here) and against another. Mine tells me this is a fallen world, and that there are therefore no perfect solutions. There are a few imperfect ones, such as actually supporting the persons and causes you believe are worth supporting, or developing a Plan B and freeing your daily activities from domination by totalizing employment (trading time for money) and by technology. The latter might involve getting off Facebook and Twitter, or putting down your gadget of “choice” long enough to perceive the real human beings around you.

If you believe something is wrong when the world’s supposed bastion of freedom also has the world’s highest incarceration rate — indeed, a higher incarceration rate than Communist China! — then investigate it and take a stand. A point made by Sheldon Wolin is that inverted totalitarianism involves an extremely harsh and punitive “justice” system calculated to inspire fear, especially in those without the money to defend themselves. In the present system, whether anyone likes it or not, money is what counts, not abstractions like justice. What happens when prisons are run by corporations for private profit should not be lost on us. Nor should the continuing epidemic of brutality and what amounts to cold-blooded murder by police be lost on us. To the best of my knowledge, Trump’s appeal to a law-and-order America does not involve a stand against murder-by-cop (2017 and 2018)!

In the past I thought of myself as a libertarian, but now realize how naïve I was about how a money-centered economy really works in this fallen world. One might put it this way: media and technology empires are not in the truth business, they are in the money business. The kleptocrats above them are in the power business, as they have more money than they could spend in a dozen lifetimes and then some. They have no ideology as such. Understanding them and determining what to do requires freeing one’s thoughts from the mental self-policing of “isms” — capitalism, socialism, Marxism, liberalism, libertarianism, progressivism, neoliberalism, neoconservatism, anarcho-capitalism, etc. — and from dichotomous thinking and the need to find someone across the aisle to demonize: a conservitard, libtard, commie, fascist, Nazi, sexist, misogynist, racist, white supremacist….

The great twentieth-century philosopher Ludwig Wittgenstein (1889 – 1951) described his discipline as “a battle against the bewitchment of our intelligence by means of language.”

Who is doing your thinking, you or your favorite “ism”?

Free your mind! Morpheus from The Matrix again: “I can only show you the door. You’re the one who has to walk through it.”

The truth-teller’s dilemma is that truth is never palatable, much less marketable.

The film The Matrix made money, because it was exciting and entertaining. Truth-tellers generally do not, because their results are neither. In media-saturated and market-driven cultures where truth and evidence aren’t valued, those who try to sell them may survive on the margins but won’t prosper.

Two additional items of linguistic evidence illustrate the disdain for truth-tellers in Establishment media: the words truthiness (coined by Stephen Colbert) and truther, once attached to those skeptical of the official 9/11 narrative but more recently applied more broadly (e.g., “Sandy Hook truthers”). The purpose here is to present truth-tellers as deluded or malicious or hostile to “fact-based” or “evidence-based” reporting, this being part of the mainstream’s attempt to halt and reverse its vanishing credibility. (See also this and this, in which defenses of expertise rest ultimately on defenses of presently dominant paradigms of knowledge and scientific method, as if positive science was an enterprise frozen in time.)

I have begun two simultaneous projects. Together, they will leave me unable to research and write the kind of philosophically and culturally informed commentary I prefer to write, which is time-consuming and not easy to produce; but they will provide alternatives to the unsuccessful Patreon.com effort.

I conclude this swansong piece (at least for now) with the admonition that Huxley’s description of a “kinder, gentler” totalitarianism — Wolin’s inverted totalitarianism which is neither kind nor gentle if you inspect it closely — is your future if you remain where you are, complacent, standing by and doing nothing while everyone and everything that challenges real power or which tries to provoke you to think is driven into a digital ghetto.

If you care about truth-telling, support it! If not me, then choose some cause or site that stands for human life and personal or community autonomy against encroaching globalism and donate to it!

I’ve tried to tell the truth. But for now, I must fall silent.

[The intent of this final short paragraph: silent as regards this sort of commentary, as this piece was not originally destined for Lost Generation Philosopher. I will continue posting about philosophical matters and their application to the problems of society and of life in general that seem to me important and useful.]


Truth-Teller’s Dilemma, Part 2

[Author’s note: this is continued from “Truth-Teller’s Dilemma, Part 1” and has no connection to previous content here. Readers should go to that article before trying the material below. The projected but not time-sensitive “Cultural Marxism” series has been suspended for now due to the pressure of other projects.]

Facebook, as I noted in passing in Part 1, has gained a reputation for censoring unwanted speech. The site puts users in “Facebook jail”: a ban which can last from three to 30 days. Even before the Cambridge Analytica data-mining fiasco blew up in their faces, Zuckerberg & Co. were smarting from revelations that Russian operatives had set up fake accounts on the site to promote Donald Trump and bash Hillary Clinton. Even granting the tall assumption that such efforts changed anyone’s vote, their traction is blamed on the masses’ growing proclivity for believing “fake news,” i.e., on corporate media’s loss of credibility over the years, a loss that accelerated during 2016 as their pro-Clinton bias came through loud and clear.

Facebook has been trying to atone — by censoring conservative and pro-Trump content. In 2017 they, too, changed their algorithms. Conservative comments simply disappeared from newsfeeds.

I should note in fairness that a number of left-leaning sites have also seen huge drops in traffic (e.g., this site; or this one), because they were also blacklisted by PropOrNot.

What was the common denominator? They had been operating outside the official, academically and journalistically sanctioned cultural left, the preoccupations of which their authors sometimes criticized as distractions. These sites had posted articles focusing on: (1) the wealth consolidation of the kleptocrats; (2) the latter’s dominance over the U.S. political process, which exceeds anything Russia could conceivably accomplish; (3) the misleading nature of official unemployment numbers; (4) the U.S. as a global economic (not political and territorial, like Rome) empire in which corporations have become the dominant actors; and (5) unnecessary but oh-so-profitable foreign wars, rationalized by “war on terror” hysteria (but don’t secure the borders!). A few sites across the political spectrum have been courageous enough to note the almost-untouchable power of the Israel lobby in Washington, or how many neocons have dual U.S.-Israeli citizenship (so Americans can fight Israel’s wars?).

All in place of the safe straight white Christian male bashing of identity politics.

All are fair game for the new, Google-led online censorship … or worse.

Yes, Virginia, the Internet has changed. We no longer have the Internet of the original Drudge Report that broke the Clinton-Lewinsky affair, or which posted the items I linked to. True, the Internet is vastly larger, and far more cluttered. But the technology has improved, and should be much easier to use. What we have is an Internet increasingly dominated by corporations all with direct ties to the very state-driven power systems Internet truth-tellers initially sought to expose — different from the owners of the bulk of mainstream TV and print media but serving the same purpose. And it is not necessarily easier to use.

That purpose is to control information — and when the corporations can’t exercise control, they try to confuse and misdirect, e.g., by allowing flat-earth idiocy or videos on how we never went to the moon.*

There are, that is, conspiracy claims floating around now that anyone with a brain ought to be able to see are hoaxes, most likely purposeful misdirections. Are scientists hiding evidence that the Earth is flat? While there’s always been a Flat Earth Society, it’s only recently that anyone made videos promoting the idea. How interesting that this rubbish seems to have surfaced on YouTube. I don’t know if it surfaced before or after Google bought the site. A few years ago, some may have encountered the similarly ridiculous notion that the Clintons are disguised reptilian space aliens.

My working hypothesis: elite-owned corporations allow such stuff visibility. Common horse sense says that the people making flat-earth videos are nuts. This, indirectly, if only by suggestion, sabotages substantive appeals to high-level conspiracy in the real world of political economy as explanation through guilt by association: logically fallacious but psychologically very effective. Result: “conspiracy theorists” all look like idiots.

The overall situation is likely to worsen! Google, Facebook, Twitter, etc., are still refining their algorithms (up-to-date account with more examples here)!

Always keep in mind: mainstream players claim they care about truth, but they do not. They care about official narratives. They are uninterested in questions of evidence when someone with a worldview other than the unofficial secular materialist one threatens their dominance. Alabama Judge Roy Moore, both a Christian and a federalist in the original sense of that term (dual sovereignty, Ninth and Tenth Amendments — a “twofer” threat!) was savaged in the media over 11th-hour sexual misconduct allegations unsupported by even the flimsiest evidence.

Three indicators will tell you that you are dealing with an official narrative. (1) A dramatic event occurs, or is claimed to have occurred. Explanations of it, including identification of perps, fall into place almost at once, before evidence that supports those identifications and explanations could possibly be assembled. (2) Specific images intended to reinforce the narrative are shown over and over again in media, or descriptions and phrases repeated over and over. (3) Anyone who questions the narrative is soon ridiculed, demonized, or dismissed out of hand.

Exhibit A: the March 4 poisoning of former spy Sergei Skripal and his daughter Yulia in Great Britain by a nerve agent, which the entire mainstream blamed on Russia. Tensions have been ratcheted up over this alone, leading to the expulsion of Russian diplomats from Great Britain, several European nations, and the U.S.; and to a retaliatory Russian decision to close the American Consulate in St. Petersburg and expel 60 U.S. diplomats, with more expulsions promised.

A couple of weeks ago, UK Labour leader Jeremy Corbyn had the temerity to ask for evidence that the Kremlin was behind the poisonings. Prime Minister Theresa May and others, including members of Corbyn’s own party, denounced him without asking themselves what the Kremlin would gain by doing something so stupid on the eve of a major election, or whether the Russians even possess the agent supposedly used (see this, from Craig Murray).

Russophobes do not get it any more than cultural leftists.

Allegations are not evidence!

Exhibit B: truth and evidence were clearly not priorities regarding Charlottesville. Recall how CNN and other well-heeled outlets regaled viewers with hours on neo-Nazis, with nary a peep about Antifa or Black Lives Matter? To this day we hear about how Heather Heyer, 32, was killed “during a neo-Nazi rally”: exact phrase from the article I linked to in the ever-predictable Washington Post.

But no rally took place that day! The authorities canceled it! Police ordered attendees to leave, forcing them directly into the path of Antifa and BLM, causing the violence and counter-violence that led to the woman’s tragic death.

How do I know all this? I had contact with a boots-on-the-ground source. Not someone who got paid for her investigative skills, either.

Exhibit C: on Feb. 14 a mass shooting occurred in a Florida school. Leaving aside claims (there were some) of a second shooter, we now have the official narrative of an epidemic of gun violence in schools, with teenagers leading the revolt against private gun ownership because adults have dropped the ball. But as author and dissident economist Paul Craig Roberts shows here, there is no epidemic of gun violence in the U.S. Of the causes of preventable death, gun violence is not even in the top ten! Statistically you are in much greater danger of dying in an automobile accident, or for that matter, from a fall in your bathroom.

Naïve high schoolers such as David Hogg and Emma González, and many others, are being led by their noses even if they are not (as some have accused them of being) crisis actors. Hopefully no one who has read this far will believe these inexperienced teenagers organized their March For Our Lives rally all by themselves — getting attention from Hollywood celebrities, etc. — without someone with vast resources working behind the scenes to make it happen. And to what purpose, since without extensive negotiations in Congress and court battles over test cases, there will be no “gun grabs” in America? The Supreme Court cannot simply “repeal the Second Amendment.” It doesn’t have that kind of power.

Everyone with a working brain knows that a really determined attempt by the federal government to disarm the U.S. citizenry would be fought, possibly in the streets were the battle lost in the courts (it wouldn’t be). Such an effort could conceivably provoke a citizen uprising — which is the last thing the kleptocrats want!

David Hogg and Emma González are part of the theater, folks! It’s all diversionary!

For while an armed citizenry may be the best possible defense against the rising of a classic totalitarianism out of the past, it is no defense at all against the inverted totalitarianism of the present.


(To be continued in Part 3)


*A fellow in my high school graduating class earned an engineering degree and took a job with NASA, right around 1980. At our 2015 reunion — our 40th — we had a conversation about this very thing. He told me (rolling his eyes): NASA employs thousands of engineers, who tend to be very smart people. The NASA he worked for was a shadow of its 1960s ancestor. In the 1980s, NASA cut corners, a product of Reagan-era budget cuts, or so he alleged. Arguing that astronaut safety was being compromised, he left in mid-1985, predicting the era would end badly.
On January 28, 1986, the Challenger disaster occurred, killing seven astronauts.
The NASA of the 1960s, he assured me, had some of the best minds in the country working on the technological challenges posed by the moonshots, e.g., the lunar module, designed to operate in a low-gravity environment (using technology now lost). Had the moonshots been hoaxes — all seven, with six moon landings (Apollo 13 being aborted after an oxygen tank exploded) — these folks would have exposed it, and in large numbers.
He added a caveat: with public education having gone downhill, and with technology having turned inward so that narcissistic teenagers take selfies instead of fantasizing about going into space as we did, he understands — somewhat — the bewilderment of those born in the 1980s and later at the very idea of going to the moon, questioning whether it’s even possible.
What does this fellow do now? Fed up with dysfunctional organizations filled with well-connected incompetents (morons good at networking who can’t do anything else was his exact phrase), he builds and repairs computers from his own shop in an Atlanta suburb. He also told me that only 2015 America could produce a parade of clowns like the line-up of GOP hopefuls, as coincidentally the first “debate” had been televised the night before our reunion.



Consciousness Denialism: Galen Strawson vs. Daniel Dennett

Denialism? The term suggests something irrational at best, maybe even malicious. After all, that’s the word used by climate scientists for those who don’t believe climate change is happening. Is it a good idea to invoke such a concept when we talk about philosophers who have counter-intuitive views on consciousness?

Well, yes, as it turns out. Because these philosophers have convinced themselves that, in the last analysis, there is no such thing. At least not in any recognizable sense.

Galen Strawson, in this recent piece worth reading and commenting on, calls this idea the Great Silliness. Surely that’s harsher than calling such philosophers denialists. I would have stuck with denialist, but that’s just me.

What do we mean by consciousness? That is, after all, the $50,000 question. In the case of human beings, surely we mean this first-person point of view that, in my case, (a) tells me immediately that I am alone in my home office (except for two cats) writing this blog entry, that there are pictures and post-it reminder notes on the wall above my laptop, that daylight is coming in my side window, that I can hear random sounds from outside, and so on. Philosophers use the term qualia for such experiences. (b) While being aware of all these things, I can become aware of my awareness of them, should I choose to be. All of these things I am consciously aware of. Doubtless, since you are receiving a different array of sensory experiences reading this, your consciousness has different content than mine; but we all have this first-person point of view (as some philosophers have begun calling it), qualia plus awareness of our awareness of qualia, which seems at once elusive because it is so all-pervasive, yet also immediate and — this is crucial — irreducible, not to be explained in terms of something radically different from it.

Is this what is being denied? Why? Does consciousness denialism even make sense, if it means we are, in some sense, not really having qualia, but only seem to be having them?

Strawson identifies the major denialists: folks like Daniel Dennett, author of numerous books on what is called the philosophy of mind, a strange term to use just in case there is no room for anything in our ontology to be called mind. Strawson also mentions Brian Farrell, Richard Rorty, Paul Feyerabend (early in his career, at least); he might have mentioned the Churchlands (Paul and Patricia), two of the leading voices for eliminative materialism, the idea that all of our propositional attitudes — our talk about beliefs, thoughts, experiences, etc. — are capable of being replaced in a Feyerabendian wholesale fashion by the language of a completed cognitive neuroscience. The way Strawson describes this: “One of the strangest things the Deniers say is that although it seems that there is conscious experience, there isn’t really any conscious experience: the seeming is, in fact, an illusion.” There we are. Or are we there? If we’re not really conscious, how do we know where we are?

While those final rhetorical questions might seem at first glance like gibberish, they help underscore an important point about denialism regarding consciousness: ultimately it doesn’t make any sense. This senselessness, moreover, reflects negatively on any larger philosophy or worldview that finds itself having to embrace denialism about consciousness, and perhaps on the institutions (and the profession as a whole) that nurture this sort of thing.

Wittgenstein remarked numerous times on the strangeness of saying things like, “I think I am in pain,” or, “I thought I had a pain, but I was wrong.” What was motivating him? The fact that having a pain is immediate! It makes no sense to “think” it, or “think” I can be wrong about it. So I should say (unless I am delivering a philosophy lecture!) “I have a pain” (or “I am in pain”) while indicating where the pain is, say, in my left foot.

Having a pain in my left foot may surely be coordinated with certain neuro-chemical events going on there, events which may have a specific cause I could seek out. But these events are not the pain, any more than what causes them is the pain. Pain is information. It is what we might call a vertical systemic communication that something is wrong. It is, that is, a communication from the complex systems in my foot to that great complex system in my head which manifests as my conscious experience — the experience of a person, that is, not a mere organ or body system — that “I have a pain in my left foot.” To find out what’s wrong, I might slip off my left shoe and discover to my dismay that I stepped on a nail. I immediately recall an earlier conscious stream of experiences, that of walking through a construction area when I was out earlier. I am conscious, moreover, of my consciousness of these memories now.

So what ever motivated anyone to believe there is some kind of philosophical problem here, that something as basic as consciousness needs explaining? Is there something truly mysterious about the fact (for fact it is) that I am conscious — and conscious of being conscious?

While we don’t have time here to relive the whole historical trajectory that eventually led to so strange a conundrum, we can single out the metaphysics (and worldview) that has had the most problems with consciousness: materialism, which has come in many forms: mechanistic and reductive, Marxian, identity, and eliminative being the Big Four of the past century, with much of the debate in philosophy of mind being between advocates of one over the others. Modern philosophy of mind, in fact, developed essentially around a single question: how can consciousness exist and have the properties it seems to manifest (e.g., intentionality, capacity to formulate and grasp concepts, capacity to receive visceral information in the form of qualia, i.e., of pain, capacity to serve as the agency for correct inference of conclusions from premises) in a material reality?

Much of the activity of the philosophy of mind, in other words, has consisted of trying to jam square consciousness into one or another of the round holes carved for it by materialists. As a branch of philosophy the field is still very active because of how consciousness has resisted every effort to fit its square pegs into one of these round holes: from the “paradox of the thinking behaviorist” formulated almost a hundred years ago by Arthur O. Lovejoy to the “hard problem of consciousness” observed by David John Chalmers in the 1990s.

Maybe the inability to fit something so basic and ineliminable as consciousness into one of the boxes supplied by materialism is a sign of a fundamental problem with materialism, with the materialist theory of reality. Maybe it is a sign of the inadequacy of that theory.

But so strong is the grip of this theory of reality on professional philosophy that to challenge it is to risk being deemed “unscientific” or “irrational” or labeled with some worse epithet. Even Strawson hedges his bets on this point. He says, at a crucial juncture in his account of consciousness denialism, “What people often mean when they say that consciousness is a mystery is that it’s mysterious how consciousness can be simply a matter of physical goings-on in the brain. But here, they make a Very Large Mistake, in Winnie-the-Pooh’s terminology — the mistake of thinking that we know enough about the physical components of the brain to have good reason to think that these components can’t, on their own, account for the existence of consciousness. We don’t.”

Later, Strawson invokes naturalism, from which, as he puts it, it does not follow that consciousness does not really exist, only that if it exists it is a natural process … not something supernatural or non-natural (whatever this means). What do we mean, naturalism? Is naturalism something meaningfully distinct from materialism? Are there nonmaterialist forms of naturalism? Strawson tells us, “Naturalism states that everything that concretely exists is entirely natural…,” which could serve as a textbook illustration of the circular definition. But “given that we’re specifically materialist or physicalist naturalists (as almost all naturalists are), we must take it that conscious experience is wholly material or physical. And so we should, because it’s beyond reasonable doubt that experience … is wholly a matter of neural goings-on: wholly natural and wholly physical” (emphases Strawson’s).

All of which means that by invoking a separate concept, naturalism, we went nowhere except in a large circle. Strawson would disagree, of course. He contends that the problem is our vast ignorance of the physical, especially how the brain and central nervous system “generate” consciousness. The problem: he is still trying to explain consciousness in terms of something consciousness is not, and thus participating in a research program that has given us a hundred years of false leads and dead ends. His position is closer to Dennett’s than he thinks.

Suppose one could have a full and complete account of every event in the brain, accepting for now that the word physical applies to events in the brain. Would we have an account of, say, the intentionality of consciousness, of the fact that when I am conscious I am invariably conscious of something in my proximate environment (or it could be awareness of an idea of which I am thinking at a given moment)? I submit that in asking the question we show that we would have pushed the problem back a step rather than resolved it, creating another explanatory conundrum (or epicycle): how intentionality as a property emerges from a concatenation of physical events, or how we can justify assuming that it does in advance of empirical results that point specifically to this. Such results might emerge … or, if history is any guide, they might not.

I believe the “problem” of consciousness has resisted, and continues to resist, solution because consciousness is not the sort of thing that can be explained within the constraints supplied by materialism — in any form. We cannot claim to explain it by having “reduced” it to something else; we cannot honestly claim to have “eliminated” it!

So what should we do? Go back to being Cartesians? No. Lapsing back into dualism is probably the worst thing we could do! But we could begin by suggesting that the most radical-seeming of the post-Cartesian British empiricists, Bishop George Berkeley, was onto something by denying, not mental substance as early post-Cartesian materialists like de la Mettrie did, but material substance. “To be is to be perceived,” said Berkeley; what exists, that is, is mind-dependent. But with Berkeley, we are still stuck inside substance metaphysics. What happens if we jettison substance metaphysics (no matter, never mind — at least not as Western philosophers have usually talked about these things!). Do we need to talk about substances to provide descriptions of our experiences, or, for that matter, whatever it is that is going on in my brain and senses when I have an experience?

Going back to my consciousness of being here, in my home office, typing this on my laptop, two cats in here (both sound asleep), light coming in from the window, random sounds also coming in from outside. Have I left anything crucial out by omitting all references to substance? Or is such a concept anything more than an abstraction, the philosophical equivalent of an item of debris just shaved off by Occam’s Razor? I have referred to what is within my proximate environment, that of which I am conscious, with no intended implication that there aren’t many, many things that are quite real but of which I am not immediately conscious: the fact that my laptop is made of molecules, for example, its molecules made of atoms, etc.; or that the light and sound have specific properties for which physicists have reserved the term physical, e.g., wavelengths of light; or the fact that processes are occurring in my brain, triggered by my having experiences of events or processes in my proximate environment.

Where is the material substance?

I submit that it isn’t anywhere, because there simply is no such thing! Philosophers should get rid of it!

This means not being materialists? (And, for now, leaving the naturalism question aside as an evasion.)

In favor of what? My suggestion (although I am aware, this is only a start): take consciousness seriously. Take it for what it is, observe what it actually does, and do not try to eliminate it, reduce it to something it is not, or otherwise explain it away. Do, at the very least, what phenomenology started to do, which is provide a description of its operations and contents (regrettably, like so much academic philosophy, phenomenology as a research program soon got lost in a forest of neologisms). Since consciousness seems to be ineliminable without the results being either self-contradictory or unintelligible, I suggest taking the existence of consciousness as a basic datum in an account of reality that includes intelligent beings, and perhaps all living things (as capacity for self-awareness may be a mark of intelligence).

For there isn’t just human consciousness, there are many kinds and levels of consciousness. My cats are conscious when they are awake. They don’t have the same consciousness of a first-person point of view that I have, but they are capable of experiencing pain, aware of their surroundings or proximate environment, and able to navigate around in the latter.

Go back to the pain in my left foot, which arose because the tissues in my foot received a sudden injury. Nerve-endings were able to communicate this vertically: I received the information as pain. Clearly, materialists are not wrong in saying there are chemical and neurological events that correlate with my experience of pain (although there are “phantom pains” which amputees claim to “feel” in the amputated limb even though the limb is no longer there; thus we have qualia which correspond directly with nothing at all). Where are they wrong, in this case?

Their orientation is wrong, as it forces them to deny the basic nature of consciousness by trying to explain it in terms of that which is not (on their terms) conscious. The orientation I will suggest is supplied by replacing material substance … to the extent we should replace it … with systems: structured wholes consisting of cooperating elements or components that are themselves systems and not basic units (subsystems, that is). To ask, Are systems material? is then no better a question than, Are systems mental? All we can say … and I will try to bring this to a close with this thought: systems are by their very nature conscious, if by this we mean the capacity to apprehend, sometimes anticipate, interact with, and respond to, events including other systems in a proximate environment. By this last term (as I have used it several times now) all I mean is the aggregation of surroundings able to affect a system or be affected by it, whether through interaction or mere passive recognition.

Looking at consciousness and its objects this way will require a mental-cognitive leap, if for no other reason than because philosophers are so accustomed to talking about such things as the interactions of systems and transmissions of information (in, e.g., the human body) in materialist terms. But I will submit that if we can cease the attempts at reduction, or the attempts to explain which just explain away … much less the attempts at elimination — denialism — our capacity to appreciate the richness of our experience and our interactions with other systems, especially that category of system we call persons, will be that much more enhanced.

We may even find new ways of thinking of systems, taken as they are, in ethical terms. We do this now, of course; it’s another thing the materialist view of the universe has difficulty making sense of. Eliminate materialism instead of consciousness, though, and we may see new integrations of thought, experience, and morality that would never have occurred to us before.

Dennett tried to rebut Strawson, and Strawson replied at the same place.(2) The thrust of Dennett’s attempted rebuttal is that he doesn’t really deny that consciousness exists: “I don’t deny the existence of consciousness; of course, consciousness exists; it just isn’t what most people think it is,… I do grant that Strawson expresses quite vividly a widespread conviction about what consciousness is. Might people — and Strawson, in particular — be wrong about this? That is the issue.” (Italics his.) What, then, does Dennett think consciousness is?

Invoking the “pragmatic policy of naturalism” (there’s that word again) in attempting to segue explanations of consciousness into the world as the materialistic naturalist understands it, Dennett argues against the first-person point of view, calling it “privileged insight” and adding that “we have no immunity to error on this score.” While it is true enough that I can be mistaken in some of my direct perceptions (seeing the pencil in the water appearing to be bent, then immediately correcting the error by reasoning from my knowledge of refracted light), I do not see how I can be mistaken about having that pain in my left foot. That’s what we non-denialists mean by the “privileged insight” of the first-person point of view.

Strawson stands his ground, quoting numerous past statements by Dennett where he certainly does seem to be denying that any such thing as conscious experience of qualia exists. Consider his zombie analogy: a “zombie” in this context (these are Dennett’s words from his Consciousness Explained, 1991) “is behaviorally indistinguishable from a normal human being, but is not conscious.” Dennett proceeds to say, “Are zombies possible? They’re not just possible, they’re actual. We’re all zombies.”

Later, Strawson quotes him as saying, “When I squint just right, it does sort of seem that consciousness must be something in addition to all the things it does for us and to us, some kind of special private glow or here-I-am-ness that would be absent in any robot. But I’ve learned not to credit the hunch. I think it is a flat-out mistake” (Intuition Pumps, 2013).

So is Dennett denying that consciousness really exists in any form other than a purely behavioral sense, in which we are to assume that if two entities are behaviorally alike they are psychologically alike? It seems to this reader that he is. What separates first-person consciousness from other kinds of phenomena able to be studied, whether by scientists or by philosophers desperate to seem scientific, is that in the final analysis, we are inside of it. This is what makes it ineliminable.

When Dennett has a splitting headache, as all of us do from time to time, does he really and truly refuse to “credit [that] hunch” that he is experiencing qualia, and that his being mistaken about his having a splitting headache ultimately does not make any sense?

It does seem likely, of course, that materialism combined with the sort of scientific empiricism that refuses to credit the validity of any argument or the truth of any conclusion reached through private introspection leads logically to the idea that first-person consciousness can be eliminated from our philosophical bestiary. But to some of us, this does not count against first-person consciousness, it counts against materialism. I would conclude with the note that unless we have a materialistic explanation of this sense of “being inside” streams of experience about which skepticism or denial simply fail to make sense, we should bite the bullet on this one and try to consider alternatives to materialism as our metaphysical worldview — for this is, after all, what it is: not an a priori truth nor the conclusion of specific empirical studies but a philosophical and methodological presumption, and, to my mind (a word I just had to get in here somewhere), an unnecessary one.

(1) http://www.nybooks.com/daily/2018/03/13/the-consciousness-deniers/

(2) http://www.nybooks.com/daily/2018/04/03/magic-illusions-and-zombies-an-exchange/

*For some reason WordPress is not allowing me to add links or otherwise format this text, and it has proven impossible to contact about this. Please accept my apologies for the lack of italics where there should be, in-text links that are missing, etc.

Posted in analytic philosophy, Philosophy, philosophy of mind, Philosophy of Science

A Look Behind the Cultural Marxism Controversy: A Sort of Introduction

[Author’s note: this developed in response to an exchange of opinions on a Facebook forum, during which it became clear to me that any fruitful advance of the discussion called for something more detailed than could be attempted in a setting like that. Hence the material that follows, which hopefully will prove to be just the first installment of more material to follow. Incidentally, if you believe this to be worth your time, please consider supporting my writing on Patreon.com.]

What, precisely, is cultural Marxism? To some, the term epitomizes all that has gone wrong with Western civilization, especially higher education. It represents an insidious attack on Western civilization’s premises, especially its Christian ones. To others, it is but a conspiracy theory and a code word for resentment against the advances made in recent decades by ethnic minorities, hatred of immigrants (legal or otherwise), of homosexuals, and other now-protected groups — perhaps also expressing the fear of the dominant group as its demographics start to shrink and it is slowly stripped of its power.

Both of these represent extremes. Is there a truth we can get at?

At first glance, the term seems to be a misnomer. Classical Marxism embodied an economic philosophy of history and a prediction of capitalism’s end. Since according to the dialectical materialism at classical Marxism’s core, “it is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness” (Preface to A Contribution to the Critique of Political Economy, 1859), Marx had little to say about culture, regarding it as part of the “superstructure” generated by the relations of production except to note that it would express these relations in various ways.

Capitalism, meanwhile, proved to be more resilient than Marx had imagined. It was unclear that the proletariat wanted to overthrow the bourgeoisie. If anything, the proletariat wanted to join the bourgeoisie. Many of their children did, as capitalism raised the standard of living everywhere it was practiced. It did not raise it evenly, however, and during the first decades of the twentieth century the left turned towards reforming capitalism instead of abolishing it. What philosopher Richard Rorty called the reformist left thus emerged to “save capitalism from itself,” as the saying goes, saving it from its own worst tendencies (see Rorty’s Achieving Our Country: Leftist Thought in Twentieth Century America, 1998). The result was the social security system, Medicare, the minimum wage, stronger unions, and so on. Prophecies of inevitable doom for capitalism continued, but in the hands of writers such as Joseph Schumpeter (see his 1942 classic Capitalism, Socialism and Democracy) they were far less revolutionary. If capitalism ended, it would be because the masses voted for more and more socialism at the ballot box. And it would happen quietly, as socialism had become a dirty word in America (as opposed to, say, entitlements or safety nets).

A cultural left arose during the 1960s. It shifted its emphasis from the class of economic Marxism to race/ethnicity, gender, and eventually sexual orientation. As racial/ethnic minorities, women, sometimes religious and eventually sexual minorities came to play an analogous role to that of the proletariat in economic Marxism, with straight white Christian males coming to play the role of the bourgeoisie, the term cultural Marxism was as inevitable as that of identity politics. Both terms came to be used by critics to describe the philosophy and impetus behind the political correctness which appeared in American colleges and universities in the 1980s (that term coming into use in 1990-91 although Lenin had used it back in the 1920s for those who adhered to the party line too closely).

Many of us looked initially just at political correctness, saw it beginning to disrupt both teaching and scholarship, urged that it be opposed, and were stymied when those who claimed the mantle of conservatism were either unable or unwilling to act. We sought reasons for its entrenchment, as well as for the fact that we critics of political correctness found ourselves deconstructed and demonized (when we weren’t simply ignored) instead of responded to with the same kind of reasoning we tried to employ. It was assumed, that is, that if you questioned the logic and justice of (e.g.) affirmative action programs, your motives had to be racist and misogynist, or “homophobic,” and that motivation was more important than any reasoning we brought to bear: almost as if our reasoning were part of that Marxian “superstructure,” determined by our social being and presumed status instead of being the product of autonomous consciousness. (Some of us, incidentally, had no special status. We were struggling, absent Ivy League connections, to get academic careers started. This was in the 1990s. I should note that the situation is vastly worse today.)

Hence the idea of cultural Marxism, and the rest, as the saying goes, is history; yesterday’s preferential favors have evolved into today’s “safe spaces,” open attacks on freedom of speech, etc. But does the phrase refer to an actual movement, or is it just a so-called conspiracy theory put forth by a number of renegade conservatives (e.g., Paul Gottfried) starting in the late 1990s? Is it responsible for what has happened to higher education, especially the humanities and liberal arts, or was their vulnerability to it less an actual cause and more a symptom of deeper problems? What was the actual status of these disciplines under even the reformed capitalism of the twentieth century? Did this status substantially weaken them? Finally, was economic Marxism truly dead outside the minds of a few stodgy academics — thought to be buried forever when the Soviet Union went out of business and history ended (Fukuyama) — or has it been given a new lease on life by the obvious massive consolidation of wealth and power, and the dislocations spawned by the neoliberalism that has since risen to global dominance?

These are complicated questions, and sorting them out in a single essay is an impossible task. I will try to do two things in the material to follow in future installments under this general title. First, I wish to outline the evidence of a genuine movement with a certain level of impact. Second, though, I will argue that its capacity to have this impact has a separate explanation, because indeed, much of higher education outside vocational disciplines has had an uneasy status under even reformed capitalism. In many respects, humanities and liberal arts subjects had been effectively marginalized within academia, as they were not as profitable to the economic system’s most powerful actors, corporations, as were subjects like business, engineering, to some extent economics (once separated from the earlier political economy), and certain of the sciences (e.g., chemistry). Finally I will want to float the issue of whether what is now being sought is anything truly revolutionary, in the sense of offering challenges to the legitimacy of the global hegemony of neoliberal elites, or just a diversity of faces within their corporations, controlled universities, and other institutions left structurally unchanged.

Moreover, it was not as if even reformed capitalism had delivered Utopia. It was true enough that rampant racial discrimination had existed not just in the workplace and professions but throughout American culture. This was well-documented. Academics knew this. They had provided much of the documentation. Many of the practitioners of fields like history, literature, philosophy, and so on, leaned left, therefore, and had few problems when a cultural left relevant to their disciplines arose. The economic, educational, and sociological problems of marginalized groups did not lend themselves to immediate solutions: one can argue causes, but not this specific fact. They grew impatient and continued to speak of discrimination and oppression; the very meaning of discrimination was shifted from specific acts to an absence of politically acceptable outcomes, courtesy of the Supreme Court’s decision in Griggs v. Duke Power (1971). Impatient demands for proportional representation had begun to give rise to their own literature by the 1980s; a great deal of this literature emphasized oppressed perspectives and rejected traditional norms of the neutrality of perception, objectivity, and rationality (documented, with hostility, in works such as Roger Kimball’s Tenured Radicals: How Politics Has Corrupted Our Higher Education, 1990). This multiple-perspectives approach is very much in alignment with the Marxian idea quoted at the outset that our consciousness is determined by our social being / relations and not the reverse. Traditional notions of perception and objectivity had come under independent criticism by the “historicist” strain in recent history and philosophy of science at the hands of writers such as Thomas S. Kuhn (The Structure of Scientific Revolutions, orig. 1962) and Paul Feyerabend (Against Method: Outline of an Anarchistic Theory of Knowledge, orig. 1975). Arguably, these works destroyed the naïve, positivistic model of science as the theorizing of autonomous individual scientists working from neutral observations and hypothesis-testing. They emphasized the social embeddedness of even hard-scientific research in physics. What was true in the physical sciences was surely far truer in “softer” disciplines, and in our lives as participants in institutions and as members of communities generally.

Thus the Anglo-American world, especially its leading higher educational institutions, was vulnerable — on multiple fronts — to something that became known as cultural Marxism. But what is it? How did it develop? Are we able to critically evaluate it, or is any critical evaluation of it almost automatically tainted, e.g., by the fact that the present writer is a white male — a Christian, to boot! — and does my status as an ex-academic bear on my evaluation (assuming I have the resources to complete it)? These last questions I must leave to others, having made my status and methods as transparent as possible. But if all of us are perceived as biased by whatever social embeddedness and unavoidable group characteristics we have, the paradox should be evident: the knowledge-seeking endeavor itself is invariably tainted and destroyed; and this, of course, reflects back on all that is or can be said about social-embeddedness and group-bias theorists themselves. If all there is, is the collective subjectivism of identity, then all claims about this collective subjectivism are just as tainted, and their conclusions can be discounted unless we are to give them a kind of reverse-privilege status born of (I suppose) past suffering. But any considerations in favor of this gambit will be equally tainted unless we are just to support them with a Kierkegaardian “leap.”

Dealing with all these quandaries won’t be possible here. Perhaps in essays to come down the pike. What follows is about cultural Marxism. I will limit myself to its development, its results, and then attempt some commentary on whether it is an effective problem-solving approach or more of a distraction from the kinds of issues an economic Marxist would raise in the era of neoliberal globalist capitalism.

Posted in Culture, Higher Education Generally, Political Economy, Where is Civilization Going?

Why Donald Trump Won in 2016, Chapter Umpteen Thousand and Counting….

Matt Bai’s Yahoo! columns are usually worth one’s time, however annoying they often are with their unexamined assumptions.

Bill Kristol, son of neoconservative godfather Irving Kristol, interviewed here, doesn’t seem to distinguish between conservatism and neoconservatism, and the problems only go downhill from there. There haven’t been any visible conservatives in charge in the Republican Party since Reagan left office in 1989. In the meantime, the neocons (1) lost the universities and the intellectual world more broadly to political correctness, largely because of their sheer terror of being called racists. Now, engineers are being fired from their jobs in corporations like Google for questioning radical feminist dogma about why they can’t seem to recruit more women engineers; (2) they have contributed (they aren’t the only villains here) to the decline of the U.S. middle class with the “internationalism” Kristol praises (i.e., open borders, pseudo free trade deals such as NAFTA, etc.), as corporations were always more than willing to ship good-paying jobs to foreign countries for cheap labor, a process that has continued no matter which party controlled the White House; (3) they greatly worsened the situation in the Middle East with the Iraq War, surely the worst U.S. foreign policy blunder of our lifetimes, which guys like Kristol supported (interestingly, this disaster was also backed by mainstream center-left liberals like Thomas L. Friedman); and (4) by not paying attention to the reckless and sometimes criminal policies dominating Wall Street, they contributed massively to the financial meltdown of 2008.

In short, on every front we examine, the Establishment elites blew it!

That is why Donald Trump rose to the top in the 2016 primaries, including over corporate favorites like Jeb Bush. And since Hillary Clinton was widely regarded as part of that same Establishment, with the arrogance of someone who wouldn't even campaign in states filled with "deplorables," that is why Trump defeated her and became president. In addition, whatever one thinks of the Electoral College, it spells out the rules by which elections are conducted in the United States, rules originally written to prevent elite-dominated enclaves like Washington, D.C. and New York City from dominating the entire political system. So in that sense, the system worked.

The Establishment elites made this bed (Trumpism), however little they like lying in it. Establishment Republicans sure aren't going to undo their blunders by nominating someone like Nikki Haley, who wouldn't have a prayer in the event that the Democratic Party comes up with a real candidate in 2020, however unlikely that is.

One thing I left out. Neocons from the start have been on the bandwagon trying to inflame tensions with Russia, whether through the allegation, so far unproven, that the Trump campaign colluded with Russians to win the election, or through equally unproven allegations of Russian aggression (e.g., against Ukraine) in their part of the world. What do they want? War with Russia? Or just maintaining and managing a chronic level of distracting distrust, while their corporate masters in the global banking networks continue consolidating their wealth and power?


Leopold Kohr: the Political Philosopher / Economist Who Predicted the Rise of the U.S. Empire & Police State

Who was Leopold Kohr, and does his work matter today? 

Kohr (1909 – 1994), about whom I've written at greater length here, was both a trained economist and a political philosopher. He earned doctorates at the University of Vienna and at the London School of Economics, after which he observed independence movements in places like Catalonia, the region of Spain, and began to assemble both the conceptual arguments and the empirical data supporting his skepticism about the growing "cult of bigness," as he called it.

Inadvertently, he predicted the rise of the post-9/11 U.S. police state. This prediction is to be found in his major work The Breakdown of Nations (1957), possibly the most interesting work of twentieth-century political philosophy almost no one has ever heard of. Kohr didn't predict any specifics, of course. What he predicted was the replacement of one of the country's founding ideals of governance, refraining from interference in the internal affairs of other nations, with a model based on increasing economic dominance and, when efforts at economic domination failed, aggression.

Such efforts had already begun, of course; Kohr, an Austrian who had only recently gained U.S. citizenship, may not have been sufficiently versed in U.S. history to know of Woodrow Wilson's desire to make the world "safe for democracy," or of such events, current in his time, as the CIA-backed overthrow of the democratically elected Mossadegh government in Iran, or other "revolutions" (e.g., in Guatemala) undertaken on behalf of leviathan corporations. Kohr's thesis, again: it is not the economic structure or the ideology of a governing entity that matters but its size, which affords a capacity to exercise power that cannot effectively be countered. This state of affairs, in Kohr's view, leads inevitably to an explosion of aggression against others, or against the entity's own people, or both at once. He called this the Size Theory of Social Misery.

Kohr’s words (pp. 70 – 72; but pay especial attention to the final paragraph):

” … [T]he United States … so far has seemed to provide a spectacular exception to the size theory. Here we have one of the largest and, perhaps, the most powerful nation on earth, and yet she does not seem to be the world’s principal aggressor as in theory she should be….

“This is quite true but … to become effective, power must be accompanied by the awareness of its magnitude. Within the limits of the marginal area, it is not only the physical mass that matters, but the state of mind that grows out of it. This state of mind, the *soul* of power, grows sometimes faster than the body in which it is contained and sometimes slower. The latter has been the case in the United States….

“After World War II … there [was] no longer a possibility of the United States *not* being a great power. As a result, the corresponding state of mind, developing as a perhaps unwanted but unavoidable consequence, has begun to manifest itself already at numerous occasions as, for example, when President Truman’s Secretary of Defense, Louis Johnson, indicated in 1950 the possibility of a preventive war, or when General Eisenhower, in an address before Congress in the same year, declared that united we can *lick the world*. The latter sounded more like a statement by the exuberant Kaiser of Germany than by the then President of Columbia University. Why should a defender of peace and democracy want to lick the world? Non-aggressively expressed, the statement would have been that, if we are united, the entire world cannot lick us. However, this shows how power breeds this peculiar state of mind, particularly in a man who, as a general must, knows the full extent of America’s potential. It also shows that no ideology of peace, however strongly entrenched it may be in a country’s traditions, can prevent war if a certain power condition has arisen….

“… [G]enerally speaking, the mind of the United States, being so reluctantly carried into the inevitable, is still not completely that of the power she really is…. But some time she will be. When that time comes, we should not naively fool ourselves with pretensions of innocence. Power and aggressiveness are inseparable twin phenomena in a state of near critical size, and innocence is a virtue only up to a certain point and age…. So, unless we insist once more that Cicero’s definition of man does not apply to us, the critical mass of power will go off in our hands, too.”

Here we are, of course, some 70 years after Kohr introduced these ideas to an utterly indifferent academic audience, in a world getting increasingly addicted to the "cult of bigness": of government, of corporations, of international organizations (e.g., the UN and its satellites, the World Bank, the International Monetary Fund, etc.). For 40 of those years we continued to wear the white hats, as it were, at least in the court of public opinion, for we could point to the Soviet Union as surely the more totalitarian of the two superpowers. But between 1989 and 1991, Soviet Communism collapsed, and we saw the prospects of an entirely new geopolitical world order, one based on peace and prosperity instead of violent conflict. Works like Francis Fukuyama's The End of History and the Last Man (1992) tried to define this hope as well as warn of a few of its dangers.

It was not to be, of course. Global capitalism thrived, but so did many of the problems for which global capitalism is now blamed: stagnating wages (which had been stagnating for years as they failed to keep up with the dollar's decline in purchasing power), inequality (specifically, the slow but growing concentration of wealth and power in the hands of a shrinking elite), and environmental issues, of which supposedly man-made climate change became the most prominent. One of the more interesting consequences of Kohr's ideas is that ideologies do not "succeed" or "fail" in an intellectual vacuum, as it were, for conceptual reasons only. This is contrary to the dictates of Austrian school economics. Either capitalism or socialism can be made to work under the right circumstances; either, given different sets of circumstances, will go down in flames miserably. The imposition of capitalism in the form of Western modernity on the rest of the world failed to bring peace or prosperity outside elite circles; instead, it often brought discord and resentment among common people who saw once-stable local economies, religious traditions, and their lives overturned. One reads a book such as John Perkins's Confessions of an Economic Hit Man (orig. 2004) and realizes that, via predatory corporations based primarily in the U.S. and a government whose official policies protected them from accountability, the U.S. had indeed become the very kind of aggressor it once rightly opposed. How much of militant, fundamentalist Islam, moreover, motivated as it is by hatred of the "Great Satan," is traceable to constant, chronic interference in the internal affairs of the nations in that part of the world, typically on behalf of American-based corporations seeking dominance over their natural resources (chiefly oil)? Will this turn out to be one of recent history's great tragedies?

The Iraq War, for example, has turned out to be one of our biggest blunders ever, not to mention one of the most expensive ... fought against someone who had not threatened us in any way militarily (remember: no weapons of mass destruction were ever found in Iraq), and which (alongside the war in Afghanistan, also begun by the U.S.) led the way to the destabilizing of the entire region, which continues to this day, with the Russians now involved. The fact that a power of equivalent strength to the U.S. is also involved in the Middle East may well have prevented the forcible overthrow of the Assad government in Syria (hated by the West, but with the support of the majority of Syrians): a good illustration of one of the corollaries of Kohr's main thesis: aggressive power will be exercised wherever it cannot effectively be opposed, and it is restrained only by an equal or equivalent power. It may be fortunate for the world that powers equivalent to the U.S. still exist.

So in answer to my original question above: yes, Leopold Kohr’s work does matter today. Unlike the majority of academic work, it explores timeless theses ultimately traceable to what Christians would describe as the human condition in a fallen world.

 

(If you read this article and believe it was worth your time, please consider becoming a Patron by going to my Patreon.com page and signing up. Thank you in advance.)


Dichotomous Thinking in Western Philosophy and Political Economy (An Occasional Philosophical Note #2)

If there is any trait more characteristic of the mainstream of Western philosophical thought than the prevalence of dichotomies (either-ors, one might say), it would be difficult to identify what it might be. Another useful term for the phenomenon is bifurcations.

Dichotomies or bifurcations or either-ors, whatever we call them, introduce abstractions on opposite sides of divides and suggest vast separations between their referents. The problem is, the absolute separations of traits usually do not exist in the world — even the world of experience — and the abstractions therefore typically do not apply. They accomplish little except to mislead and confuse our thinking: despite Wittgenstein’s warning, they allow the bewitchment of our intelligence by means of language.

Dichotomous thinking goes all the way back to Plato and Aristotle, whose thought provided the foundation for the main tendencies (the mainstream, I call it) in all that followed. Both distinguished between the essential and the accidental, and between form and substance. Epistemologically, these distinctions created the longstanding problem of knowing when a trait or property is truly essential to a thing and when it is merely accidental: that is, whether the thing must have the trait or property as a condition of being what it is, and not something else. But as our knowledge has grown and changed over time, with new discoveries abounding, what had seemed essential often turned out not to be, and we've had to continually revise our claims about what is essential. There seems no reason why this process could not continue indefinitely; viewing science historically, we see not a fixed body of knowledge but something constantly changing. From the standpoint of specific "natural kinds," we can never, it seems, truly know that a given property is essential to them, because future discoveries might void any specific claims (think of Kripkean essentialism, which wavers on this point).

However one looks at it, the essential-accidental dichotomy allows epistemological skepticism to enter through the back door — or, to avoid such, suggests a pragmatism in which “essentialist” claims are retained to the extent they solve the outstanding problems (they “work”). This dodges the, er, essential question.

What might occur were we to eliminate not one of these or the other, but the dichotomy itself?

Wittgenstein showed the way. We observe objects and classify them on the basis of family resemblances that may change depending on our purposes, which will determine our interactions with them. Our purposes will depend on the problems we are trying to solve. Inherent in this view is that we are problem-solvers, and how we construe problems will vary from person to person and from case to case for a single person. We do not need to speak the language of essence versus accident at all! (The problem in producing "clear" genus-and-difference definitions is not necessarily the definer's "lack of clarity" but the fact that the precise nature of what is being defined can change somewhat from case to case.)
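Wittgenstein's family-resemblance point can even be made concrete with a toy sketch. In the short Python fragment below, the three "games" and their feature sets are invented purely for illustration (they come from neither Wittgenstein nor this essay): no single feature runs through all three, yet each overlaps with a neighbor, so we classify by a chain of resemblances rather than by an essence.

```python
# Toy illustration of classification by family resemblance:
# no single feature is shared by all three "games," yet adjacent
# pairs overlap -- a chain of resemblances, not an essence.

def resemblance(a, b):
    """Degree of feature overlap between two sets (Jaccard index)."""
    return len(a & b) / len(a | b)

games = {
    "chess":     {"board", "competition", "skill"},
    "poker":     {"competition", "skill", "cards", "chance"},
    "solitaire": {"cards", "chance", "solo"},
}

# Empty set: nothing "essential" runs through every game.
print(set.intersection(*games.values()))

# Yet neighboring games resemble one another.
print(resemblance(games["chess"], games["poker"]))      # overlap > 0
print(resemblance(games["poker"], games["solitaire"]))  # overlap > 0
print(resemblance(games["chess"], games["solitaire"]))  # no overlap at all
```

Note that the classification still works for our purposes (we call all three "games") even though chess and solitaire share nothing directly: the resemblance runs through poker.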

Western philosophy has given us myriad other dichotomies, some of which cloud our thinking just as much, even to this day — even for those who believe they’ve escaped from or transcended them or set them aside.

With Descartes, the chief dichotomy is between the corporeal and the incorporeal, between mental substance and physical substance, or in today’s parlance, between mind and matter (body): the “mental” and the “material.” This has arguably been the most influential of the modern dichotomies.

"What are the marks of the mental?" (as Rorty poses the query) has been the hallmark question of that entire branch of philosophy known as the philosophy of mind, an area, it should be noted, in which some of the best and most interesting work in contemporary philosophy has occurred.

Or, a more down-to-Earth variant on the same theme: How is first-person consciousness possible in the “material universe” exhibited by modern science, especially including modern neuroscience?

The question behind the question: did Descartes erect an uncrossable barrier between his two substances? Subsequent philosophy latched on to either “material substance,” in those philosophers who became materialists of one sort or another, or to “mental substance,” in those philosophers who became idealists of one sort or another. Arguably, the growing cultural power of science, which purported to theorize effectively about a material universe that existed and had the properties it had independently of us as observers and experimenters, ensured that materialism would win the day.

Materialism really is just Cartesian dualism with one of its “substances” eliminated, with whatever rationalizations or tortured linguistic maneuvers we need to make it seem credible that the “mental” really is gone: dissolved, explained, or rendered kaput.

True, the lengthy debates over such matters as “marks of the mental” and more recently, the “hard problem of consciousness” (Chalmers) have been grist for the dissertation-writer’s mill for quite a few years now … but what might occur if we eliminated the matter-mind dichotomy itself, rather than one “side” or the other?

How might we go about doing that? My suggestion is just to treat first-person consciousness as a starting datum instead of something to be “explained” in terms of that which is obviously not conscious. Phenomenology does this, with often very interesting results.

Another confused and unnecessary dichotomy: think of that mare's nest, the "problem" of "free will versus determinism." Here is a case of abstractions that sound meaningful but invariably disintegrate whenever proper conceptual analysis is brought to bear on them. Without going into all the technicalities, "free will" has always sounded at first glance like a sensible notion, perhaps even a good starting datum or given (I direct many of my own actions in some meaningful sense, do I not?), but does the concept not imply the existence of sets of events, self-caused actions, that are outside the "causal structure of the universe"? But then, what is this "causal structure of the universe," and how do I know about it or rationally justify my claims about it? And what can it mean to say of an action that it is externally "uncaused," i.e., "caused" internally by my having willed it?

So what "caused" me to will action A instead of action B? When pursuing such lines of thought, eventually we should experience the sort of Wittgensteinian "mental cramp" that tells us we are not making sense.

The result, however, is that “free will” as some kind of metaphysical given makes no sense. Does that mean that some form of determinism is necessarily true? Or is it just as likely that determinism makes as little sense?

Determinism is the view that, in one sense or another (it takes more than one form, obviously), everything that occurs has an external cause, which sounds okay but invites an infinite regress unless we posit an uncaused, ontologically basic First Cause (as Aquinas understood, and Aristotle before him). Without getting into theological or other metaphysical implications, if we begin with the universal efficient causation of determinism we reach the result that there must be at least one entity that is uncaused. Thus universal determinism self-destructs.

There is, however, a problem closer to home. Still leaving aside the technicalities: to many philosophers, rationally justifying anything using evidential rules supplied by logic has seemed to require that the reasoning process be ontologically independent of causal determinacy as a brain process, since the latter yields "justification" only as a subsequent brain event, and not necessarily as a reason, for the brain event could well result in an utterance that is false. Some of our brain processes, after all, misapply evidential rules, or rely on errors of fact, and what justifies the normatively correct rules over the incorrect ones cannot itself be a set of brain events, as it would not have the supervening status it needs to do its work. (Alvin Plantinga is the most important philosopher to have realized this, in his 1990s writings on the "proper function" lost by metaphysical naturalism in whatever form.)

The upshot is that not just moral agency, as Kant held, but the normative rational justification of anything whatsoever, has seemed to presuppose “free will” in reaching these justifications. Determinism seems to render the concept of rational justification either meaningless or impossible, meaning that if stipulated as true, its truth is a transcendental noumenon, incapable of rational proof or knowledge. Of what use is it to discuss it further, in this case?

So what might happen were the dichotomy of “free will” and “determinism” jettisoned altogether?

Instead of saying, “I exercised my free will,” why not just say, “I took an action” or some down-to-Earth equivalent of the sort uttered thousands of times per day? Why not just note that as our self-knowledge accumulates, we are able to identify more and more of the “causes” of our actions, whether we call them reasons, rationalizations, motivations, or whatnot? Why not note that our choice of wording here depends on factors other than purely “rational” ones? Why not note, furthermore, that some of my actions seem to be relatively free, as when I make a list of possible courses of action given a particular situation, think through the consequences of each, and choose the course that seems to have the best outcomes given my limited knowledge? Other actions appear to be determined, at least in their general outlines. I have choices of what to eat, and when (within limits involving a spouse); I cannot choose indefinitely not to eat. There are determining factors traceable to biology. To cite an obvious example, I cannot choose to fly by flapping my arms, and no motivation or course of study in the mechanics of flight is going to change that. Somewhere in here is the gender politics mare’s nest, much of which is nothing more than a denial of human biology in favor of a different set of abstract categories called “social constructs” to delegitimize biology.

Whether it is helpful or useful to dichotomize between “free will” and “determinism” has bearing on the world of political economy, in which defenders / advocates of certain ideological views of society maintain an equivalent absolute dichotomy between the “voluntary” (or “chosen”) and the “coerced” (forced or not “chosen”). Again, at first glance, what we begin with seems to make perfect sense. Trade is “voluntary” if we engage in it of our own “free will,” satisfying needs or desires that are ours.

As just noted, though, I am not free not to eat, and so I must trade for food — implying that the general idea of trading for food is not ultimately “voluntary” despite my wide range of “choices” over the specifics of how I satisfy my hunger cravings. Biology alone places limits on the “voluntary.” Are there other, more interesting limits on the “voluntary”?

Under capitalism I am not free not to work; I must trade my labor or skills for money — implying that the general idea of trading for money is not “voluntary,” although again I may have (or seem to have) a wide range of “choices” over the specifics of how I satisfy my need for money to purchase food, the roof over my head, utilities, etc. Somehow, that is, economic necessities created within civilization place limits on the “voluntary.” Further limits are created not by the mere fact of having some money but by how much money I have. I always have a specific amount, that is.

In the not-too-distant past, Stefan Molyneux (author of this rather slipshod discussion of logic) and Peter Joseph (founder of the Zeitgeist movement, which like all such movements has its strengths and weaknesses, and author of this book) debated whether civilization embodies what we might call structural coercion: economic necessity requires that one be either an entrepreneur of some sort, an employee of some sort, or independently wealthy. (View their debate here.)

My view, for whatever it is worth, is that Joseph won that debate hands down. While I might not have agreed with him on every point, Joseph sounded like he was living in the real world. Molyneux sounded like he was living in a world of political-economic abstractions which apply only under ideal conditions. Becoming an entrepreneur means bending one’s will to the wills of one’s clients or customers, because by “free will” they can go elsewhere. Entrepreneurs are not “free” to do as they please unless they begin extremely rich. Becoming an employee means bending one’s will to the will of one’s boss, because the boss can engage in an act of “free will” and fire you. Employees — ordinary laborers — are considerably less free than entrepreneurs. (No, you can’t simply quit your job if you don’t like it, if you have bills to pay and kids to feed, and if there is no readily available alternative job you can step into or entrepreneurship opportunity without a learning curve likely to occupy several months if not years, including getting the word out.)

The liberal idea that rational individualism, if applied universally and exemplified in markets, will reconcile everyone’s legitimate interests, seems highly improbable in light of modern history alone. (Unless the free-marketer cheats with language and declares that an interest not so reconciled is thereby deemed illegitimate.)

As for a core component of that debate, to libertarians like Molyneux, “society” by definition cannot coerce, only individuals can coerce since only they can act (i.e., exercise “free will”). They will say, “the state can coerce,” although “the state” as an institution is just individuals legally vested with the authority to coerce.

But what, precisely, is "coercion"? Most would agree that if a gun is pointed at my head and my money is being demanded, I am being coerced: a fairly standard example.

The example is surprisingly flawed, for those who always engage hypotheticals and tally up the points they’ve supposedly scored. For I am still making a “choice” to reach into my back pocket for my wallet and remove my money and hand it to the thief, am I not? The contrary choice would be to call the thief’s bluff under the belief (which may be irrational and misguided under the circumstances but is a logically possible course of action) that he will not shoot me — or maybe that his gun isn’t loaded and he just has it to scare me into complying with him. So even here, a purist could argue that I’ve made a “choice,” i.e., engaged in an act of “free will,” i.e., was not truly “coerced.”

In other words, the question of when I exercise “free will” and when I am “coerced” inevitably dissolves into a morass of confusion if we truly explore it. Is it not clear what a convoluted tangle of concepts and rationalizations such dichotomies as “free will” versus “determinism,” and its political-economic equivalent, that between the “voluntary” and the “coerced,” has led to?

What happens if we eliminate the dichotomies and suggest that we are looking at continuities or continuums of various sorts — matters of degree and gradation that appear better able to describe the situations that truly arise in the world we inhabit?

My being able to act "freely" or "voluntarily" is then a matter of degree, conditional on various circumstances I can enumerate, and not an abstract entity which is either present or absent, absolutely. I am considerably less free when facing the thief than when I am in the outdoor market, knowing that I need to buy food and trying to decide what to buy. There are entire ranges of other circumstances in which I am somewhat "free" and somewhat "coerced," in any reasonable senses of these terms.
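The shift from a dichotomy to a continuum can itself be sketched. In the toy Python model below, everything is hypothetical and invented for this note (the function, its inputs, and its weighting are mine, not anyone's theory): acting "freely" becomes a degree between 0 and 1, determined by how many live alternatives I have and what refusing would cost me, rather than a property that is simply present or absent.

```python
# Hypothetical sketch: "freedom" as a degree in [0, 1], not a boolean.

def degree_of_freedom(alternatives, cost_of_refusal):
    """More live alternatives and a cheaper "no" mean a freer act.
    cost_of_refusal runs from 0.0 (refusing costs nothing) to 1.0
    (refusing is catastrophic)."""
    if alternatives <= 0:
        return 0.0
    # Breadth of options approaches 1.0 as alternatives multiply.
    breadth = 1.0 - 1.0 / (alternatives + 1)
    return breadth * (1.0 - cost_of_refusal)

# Choosing among stalls at the outdoor market: many options, cheap refusal.
market = degree_of_freedom(alternatives=20, cost_of_refusal=0.1)

# Handing over a wallet at gunpoint: one real option, catastrophic refusal.
robbery = degree_of_freedom(alternatives=1, cost_of_refusal=0.95)

print(market > robbery)  # prints True: both are "choices," to very different degrees
```

The point is not the particular formula, which is arbitrary, but its shape: neither situation scores exactly 0 or exactly 1, so the purist's "you still made a choice" and the absolutist's "you were simply coerced" both flatten a gradation into a binary.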

It is one thing to promote "free trade" in the abstract, declaring it to have raised the "general" level of prosperity by having given "consumers" what they want. It is quite another to note the organic reality: that those who are truly freest under existing "free trade" systems are the billionaires, whose connections include other billionaires elsewhere as well as those in governments they've bought and paid for. Those who have lost jobs as a direct result of the "free" decisions of the billionaires have been made less free. The "consumers" who can afford cheap Chinese imports are often those rendered less free, pursuing the cheap imports not from "choice" but from "necessity"; cheap Chinese imports are all they can afford. (I leave aside credit, originally introduced for large, once-or-twice-in-a-lifetime purchases but now used for everything; credit spending for everything reconciles two things contemporary capitalism needs to reconcile: mass consumption and low wages.)

My suggestion, throughout many of these notes past, and in those to come, is that we need a different sort of worldview than modernity, with materialism at its core. Modern materialism, in whatever form, is parasitic on a Cartesian philosophy, postulating a rational “ghost in the machine” within confronting the “machine” without. Philosophically, of course, that “ghost in the machine” has become less and less substantial, reduced to rational economic calculation.

Yet we approach the world as conscious agents with a variety of motivations in addition to the obvious economic ones. Thus the reduction of consciousness to rational calculation by economists seems as problematic as those efforts by philosophers to eliminate it altogether, to treat it (as Daniel Dennett does) as in some sense an illusion. I confess to a longstanding fascination with philosophies, from Berkeleyan subjective idealism down through phenomenology, that take consciousness as their primary datum instead of something to be "explained" in terms of something manifestly not conscious ("matter"), which typically means explaining it away. Refusing dualism while beginning from this standpoint, which takes first-person consciousness as given, may mean it is "matter" that needs to be eliminated! This, of course, will astound or merely bewilder the materialists accustomed to dominating the philosophical conversation.

But in physical investigations of "matter," what do we discover? As contemporary subatomic physics has penetrated deeper and deeper into the "nature" of the atom's components, their components, etc., it has moved ever closer to states of affairs required exclusively by initial conditions and mathematics. Mathematics consists of relations of ideas, however complex and intricate. Relations of ideas are not autonomous entities, unless we try to postulate a kind of Platonist realm for them to exist in, and obviously we don't get anywhere by doing that. The experimenter in contemporary physics, moreover, affects the outcome of the experiment, as has been known since the days of the "Schrödinger's cat" paradox, and so cannot be dichotomized as apart from it. In this way, again, a meaningful dichotomy between "consciousness" and "the material world" breaks down. Perhaps our effort should be not to eliminate "consciousness" from the equation but to eliminate "matter" as having meaningful ontological independence.

And this may be what some “lovers of wisdom” of an immaterialist bent have been trying to say all along.

Bringing our discussion back to its point of departure: if there is a single criticism to be made of the central strains of Western philosophy, it is that they long ago became lost in intellectual tangles caused by an insistence on dichotomous thinking as a general methodological rule. The world, however, is organic and networked, which is why our abstractions usually fail to grasp its richness and complexity. We are, moreover, part of the world's organic and networked nature, not entities standing above it, or outside of it, reasoning about it abstractly. The materialists have that much right; they just draw the wrong conclusion from it.

I've enumerated only a small, select handful of dichotomies here, the ones that seem to me to have been the history of ideas' worst villains. There are plenty of others, some of them a tad dusty and academic (e.g., between the "analytic" and the "synthetic," a linguistic descendant of Plato's "essential" and "accidental"). I'll leave readers to think about what some of those other dichotomies might be, whether some are worth keeping around in some form while others can be safely gotten rid of, and in each case, why.
