A few weeks ago I ran across this, and it got me thinking all over again — for the first time in close to a decade — about the biggest wrong turn Western philosophy took, at least since the tendency toward the dichotomization of everything and everyone introduced by Plato and Aristotle over a millennium before. In my humble opinion, of course. And about how what could have been a major program of philosophical research — mine — was derailed, to some extent by circumstances outside my control, and to some extent by a few bad choices I made during the 1980s and 1990s.
Not long out of graduate school (finishing in 1987) I began to think about Descartes, the cogito, and the subsequent history of philosophy. In the wake of the undermining of foundationalism characteristic of the philosophies of the later Wittgenstein, Michel Foucault, Thomas S. Kuhn, Paul Feyerabend, Richard Rorty and others, it seemed a promising pursuit when I could stop applying for teaching jobs or other sorts of work long enough to give it the time it needed.
Right around the turn of the millennium, and with a fresh infusion of systems theory courtesy of the health education degree I’d just obtained, I finally had a substantial paper. The paper went through a number of iterations and incarnations, a few of which I read at meetings during the early to mid-2000s; I also began submitting it to journals I thought might give preference to the kind of metaphilosophy it represented. (Metaphilosophy here means the kind of philosophical undertaking that explores the first premises, nature, and goals of the philosophical enterprise itself, apart from specific problems such as mind / body or free will / determinism.)
One iteration of the paper, dating from 2002 and arguably not as good as what I came up with later, is still available on the Academia.edu website here.
What I’d hoped for was a longer research program that (1) criticized Cartesian doubt and (2) reconstructed a viable epistemology and ontology along Aristotelian lines, avoiding Aristotle’s essentialism and incorporating elements of Peircean pragmatism and modern systems thinking. The result was to be a new and more viable foundationalism, in which the first principle of all things was system.
What had gone wrong with methodological doubt? That was the challenge I originally tried to unravel. Explaining it now means working from how I remember it 20 years ago: the project died when the paper failed to find acceptance in a philosophy journal. I finally stopped sending it out, then gave up my teaching career and moved overseas to a place where the cost of living was maybe half that of the U.S. (Now, alas, the various drafts reside in a storage shed several kilometers from where I’m typing this.)
In any event, here’s the basic idea: Cartesian methodological doubt — its goal to find a foundation for knowledge in something both universally true and immune to all possible doubt — was a process of logic-dependent steps reasoned with mathematical precision, in which Descartes came up with a justification for provisionally setting aside all that he’d previously believed. He did this first with remote sense experience (e.g., you think that’s Judy you see way down the street, but as she comes nearer you realize you were mistaken; it’s not Judy but someone else) and then with proximate sense experience (the furniture in this room).
Couldn’t all this be a dream? he asks of the latter. But even when dreaming, two plus two equals four, the sum of the squares of the sides of a right triangle is equal to the square of the hypotenuse, and modus ponens is structurally valid. These are true and known whether we are awake or asleep. Descartes wouldn’t have used that last example, of course. But we’re talking about propositions later philosophers would say are knowable a priori.
Descartes sets aside belief in the Christian God and invokes, instead, an evil deceiver, to persuade himself of the viability of applying methodological or provisional doubt to propositions we now deem a priori.
It is following this step that he realizes: no evil deceiver would be powerful enough to throw into doubt his knowledge of his own existence. Cogito, sum. I think, [therefore] I am. This singular proposition, Descartes contended, was utterly immune to methodological or provisional doubt, and therefore alone was suitable as a foundation on which to rebuild philosophical and scientific knowledge: independent of time and place, independent of history and culture, independent of personal psychology (again he wouldn’t have put it this way but this is the gist of his result).
Now for the catch: as a process, methodological or provisional doubt is logic-dependent. That raises a crucial question: does Descartes doubt the propositions of logic or doesn’t he? The text of the Meditations is unspecific on this point; it speaks only of mathematics. Maybe he didn’t doubt logic. In that case, can we not identify a specific category of proposition he did not doubt, and ask why the propositions in this category shouldn’t comprise the foundation of all knowledge — grounded perhaps, where Aristotle grounded his foundationalism, on a principle such as noncontradiction?
On the other hand, maybe he did intend to doubt the propositions of logic. By the late 1990s it had come to seem to me that if so, any furthering of his process amounted to cheating, as it were: reasoning to the cogito by employing principles he had set aside as provisionally dubious and to which he therefore no longer had access.
The dilemma in one paragraph: either Cartesian methodological or provisional doubt missed the propositions of logic on which his own reasoning was based, or it did not. I didn’t see a third option. In that case, either he has a set of indubitable principles embodied in his own reasoning and has no need to proceed to the cogito; or he eliminates his means of legitimately proceeding to the cogito and is stuck in a kind of philosophical limbo. In other words, either the cogito was unnecessary, or it was impossible.
I could not find evidence that a single other philosopher had explored this.
The remainder of the paper outlined how the subsequent history of philosophy would have been entirely different. What we saw, steeped in the cogito’s results, was the Cartesian dichotomy between “thinking and incorporeal substance” and “unthinking and corporeal substance,” which created the supposed problem of our knowledge of the “external world,” i.e., of how the “thinking and incorporeal” could interact with the “unthinking and corporeal” sufficiently to acquire knowledge of it. There would have been no need, moreover, for Locke to have distinguished primary from secondary qualities, or to have referred to “material substance” as “a something, I know not what.” Nor would the critiques Berkeley made of such a substance, followed by the parallel critiques Hume made of “mental substance,” have seemed necessary.
One philosophical direction fell into subjectivism; the “external world” disappeared in stages! Others, almost as if impatient with philosophy and driven to “be scientific,” eliminated “incorporeal substance” and became materialists!
Had there been no cogito, there might never have been a Kantian transcendental turn!
Or more recent strange doctrines such as eliminative materialism!
What philosophy would look like today, of course, is anybody’s guess. That’s the problem with this kind of counterfactual speculation. It might have retained some capacity to guide us in our personal lives: what the Stoics valued philosophy for. But who knows?
We would almost surely have avoided the Cartesian autonomous rational intellect, an abstraction rather than a human person, and its disastrous 19th (and 20th and 21st) century bastard stepchild, the homo economicus of classical liberal economics: the positing that the abstraction is always self-interested, always calculates rational choices on that basis, and that an economy resulting entirely from the interacting aggregate choices of such entities — extracting, producing, transacting, consuming, etc. — will move toward an equilibrium in which the good of all, or at least most, such autonomous rational intellects will be satisfied.
This idea, which in a more militant form became the cornerstone of neoliberalism, now strikes me as sheer fantasy, and a destructive fantasy at that.
My own subsequent reasoning, following the rejection of the Cartesian ethos, embraced a fusion of Aristotelian ontology, greatly updated to incorporate scientific discoveries (e.g., those of Newtonian and later physical cosmologies), with systems theory and systems thinking. The latter, it seemed to me, opened numerous doors: some leading to applications to health and disease, others to political economy, still others to languages and symbolic systems, still others to computing and information systems, and of course ending with ecology and the worthwhile pursuit of caring for the planetary ecosystem around us, on which we depend.
The Cartesian ethos saw this last as unthinking, lifeless, and valueless (except in the economic sense) matter, and therefore ours to extract willy-nilly simply by laying claim to it.
Materialists came to see most human beings the same way: as human resources to use until they were used up or became obsolete, then to be thrown to the wolves, discarded.
Six journals rejected various versions of what I considered the foundational paper (see link above). Only a couple sent referees’ comments. One referee said the paper was too long, so I shortened it; then the same person said the ending was too abrupt. Another referee sent back an outline of the paper indicating that he (she?) had read it and understood it, but could not recommend it for publication. While this referee made a couple of minor observations about unclarities in the paper, easily fixed in an updated draft with the full ending restored, he (she?) still wouldn’t recommend it for publication, and gave no reason for the continued refusal, such as an omission of some major consideration I’d overlooked that refuted my conclusions or challenged my reasoning.
(It might be worth noting that one side project, a collaborative account of a systems view of health promotion and education and the case for primary prevention, was published!)
We’re talking several years here: from around 2000, when I finished the first complete draft, until around 2007, the last year I sent the paper out, receiving a rejection without referees’ comments in mid-2008.
By that time, I was in an entirely different environment. I was a “freeway flier” commuting between three campuses, was selling my prized vinyl record collection on eBay because I needed the money more than I needed the records, and was spending the rest of my time managing the affairs of my aging parents, both of whom were in deteriorating health and ended up needing round-the-clock care in a nursing home since (due to work) I couldn’t provide it.
There was a second paper, not mentioned here until now, that was also rejected by journals for no discernible reason: on Auguste Comte’s Law of Three Stages and why postmodernity could be regarded as a de facto fourth stage. This was to be a preliminary to a larger argument for transcending the kind of money political economy we have now in favor of something that serves the needs of the many and not just the whims and desires of a few. I’d started out hoping that the two research projects would dovetail down the road.
But I’d all but lost interest in academic publishing … especially when I read a version of the latter at a meeting and the only listeners were the other panelists (the risk of presenting at meetings involving concurrent sessions)!
My interests had also shifted. I now considered it far more important to expose the power structures in Western civilization, the people behind such structures whose social engineering redistributed wealth upwards (welfare-statism in reverse, I called it), and the primary consequence: the slow diminution of human freedom in the Western world. I looked to events ranging from the passage of NAFTA with the support of the elites of both major parties in the early 1990s, to the 9/11 attacks in 2001, as important catalysts for those whose main inclinations were war, money, and power.
In 2012 — my parents deceased and their estates settled — I basically said the hell with it, saw an opportunity to leave the U.S., and took it. (I did make one last effort to have the Comte paper published. That was in either 2013 or 2014. It was again rejected. I finally posted a greatly shortened version of it here so that if I croaked the thoughts it expressed wouldn’t disappear utterly.)
By this time, of course, I’d read plenty of important works leading me to question the validity of much that academia has produced in recent decades, based on the systems that have given rise to what gets published and distributed. Many show, or at least imply, the immersion of academia in the larger political economy, resulting in an enterprise that protects, reinforces, and extends the intellectual wing of the economic-militaristic complex and excludes those who explicitly question it.
I return to the blog that prompted this post … by Colin McGinn, excluded from his university (and from the “profession” at large) because of an alleged dalliance with a female graduate student which may or may not have occurred (who knows? I wasn’t there). That’s a microcosm, more or less, of the way academia now operates, as a place where you walk on eggshells so as not to take a chance of offending anyone since a perceived offense can be career-ending (even for someone with over a dozen reasonably well-received books and dozens of articles to his credit).
It’s also an environment relying on the open exploitation of the overproduction of PhDs as adjuncts, some of whom are teaching four, five, six classes at as many as three campuses to survive, with a few reported cases of adjuncts living in their vehicles and showering by stealth in student dormitories. A lifestyle hardly conducive to doing serious intellectual work! (See this for some life advice for aspiring philosophy PhDs! In a phrase: don’t!)
While I’m sure there’s quality work still being done — somewhere — by figures who aren’t famous like David Chalmers or Nick Bostrom, I have to wonder how much such work is possible given the present dominant economic and sociological conditions in academia, conditions unlikely to change because those at the top benefit from them and so have no economic incentive to change them (with today’s haves, neither truth nor morality is a factor).