
In my last post I discussed what I believe to be the most widespread (and, as fallacious as it is, perhaps most convincing) argument in favor of moral relativism, which is based upon the cultural relativism and social constructivism of anthropology and sociology respectively. In my experience, 90% of the impetus for adopting moral relativism lies with this argument, and it is often the most difficult to overturn in people’s heads (illustrating fallacies such as Affirming the Consequent, for instance, usually requires the presence of a blackboard, which is difficult to keep on one’s person at all times). But having (hopefully) at last dispelled the myth that cultural relativism has anything to do with the field of ethics, we can now move on to part II of this series of posts, which will concern the remaining reasons that people commonly adopt this view, before I strive in part III to show how it is entirely incoherent. I use the word “reason” loosely here; as I implied in my previous post, relativism itself is an ethical view that essentially sees all of ethics as a worthless enterprise. For that reason there is little point, from the relativist perspective, in engaging in much ethical reflection outside of attempting to convert heathen moral realists to relativism; the view itself asserts that it is social construction and one’s upbringing that cause one to adopt certain ethical beliefs, and scoffs at the notion that human reason can help us rise above these purportedly all-encompassing social phenomena.

This, then, leads us to the next argument commonly used in favor of moral relativism, perhaps constituting another 8% of the argumentative force of the view, which I will label the

“if you had grown up in _____, then you would be a _____”

argument. The basis behind this argument is pretty simple and, once again, at face value seems perfectly plausible. If I had grown up in England, then I would be an Englishman. If I had grown up in Ireland, then I would be an Irishman. Similarly, if I had grown up in Europe in the Middle Ages, I would probably have been a Catholic. If I had grown up in Persia in the year 1000, there’s a good chance I would have been a Muslim. And now, according to the moral relativist, if I had grown up in the American South in 1820, I would probably believe that slavery was perfectly acceptable. If I had grown up in Germany in 1930, I might very well believe that Jews are an inferior race and deserve to be put into camps and murdered. And perhaps if I had grown up in the Middle East today, I would think that beating a woman for going to work on the wrong day, or sewing her vagina shut to prevent her from engaging in sexual intercourse, are entirely acceptable practices. So therefore, the only reason I think all of these latter practices are so incredibly horrific is just because of where I grew up and was raised, right?

Actually, that’s completely incorrect. This is another argument that seems to have gotten all of the relativists and postmodernists and social constructivists ‘oohing’ and ‘ahhing’ for no good reason whatsoever, and is in my opinion an even poorer argument than those listed in my last post. Even a minimally thoughtful person should be able to see problems with it; as I’m a philosophy major, I’ll move from the more abstract to the more obvious. First of all, ontologically speaking, I, as a human being, have certain essential characteristics: characteristics that cannot change unless I can be considered to have become a new person after the change. The set of these characteristics includes who my parents were (and by extension, how I was raised) along with where I was born. Thus what a relativist even means by saying that “if [I] were born in the American South prior to the Civil War [I] would have thought that slavery was morally acceptable” is unclear at best; if I had been born in 1930s Germany or in present-day Iran I wouldn’t really be me, so the proposition that that person (whoever he would be) would have appallingly racist or sexist views doesn’t seem to be particularly relevant. While the relativist might wish to counter by shouting “aha! You concede that where we are born and how we are raised are essential in forming our sense of morality,” this would be nothing more than another relativist false dichotomy. As I explained in my last post, the moral objectivist can be perfectly content in accepting that social norms, class structures, culture, history etc. influence the way that we think about moral issues (or anything else, for that matter). It is the far more extreme viewpoint, that these forces entirely determine how we feel about morality, that the moral realist argues against, and that extremity is one of the hallmarks of relativist irrationalism.

Moreover, this argument is both blatantly false and self-defeating. As to the former: maybe the relativist is right, maybe if I had lived in 1930s Germany I would be a raving antisemitic Hitler lover. Or maybe I wouldn’t be. Maybe I would be like Oskar Schindler or the many other brave Germans, Austrians, Poles etc. whose compassionate hearts were far bigger than crackpot racial theories, and who had the courage and honor to do all that they could to help their fellow human beings in their time of need. While the relativist is right that, sadly, those who were blatantly antisemitic or else willing to stand aside and do nothing to help those being murdered were in the majority, there were still those who dissented, and considering that the entire scenario being cooked up by the relativist is a matter of conjecture, it’s a damn ballsy move for the relativist to claim to “know” what I would be if I were put into that historical situation. Certainly the spirit of the relativist’s point (that views we now consider horrific were at many times widespread) is true, but that it serves as some specific basis of “knowledge” that the relativist can use to lend support to his position is doubtful at best.

This leads us to an aside, but an important one, before we address the first of many ways in which relativism is self-undermining: if relativism were true, progressive social change would be impossible, both philosophically and (as far as we could predict) practically. All of those great humanitarian heroes of progressive social change whom we venerate in our capitols and history books (Martin Luther King, Gandhi etc.) have something in common, which is a passionate and zealous belief that the oppression and unfairness they fought against was wrong. What a sorry state of affairs it would have been if no one had stepped up to the plate during mankind’s darkest days because “morality is a social construction” and what history’s oppressors were doing couldn’t be considered “really” immoral. It has been moral objectivism, passionate zeal for justice and equality, that has driven the engines of global moral evolution, and while those tiresome, whining cynics that fill today’s universities, who endlessly bitch and moan about the tiny details of our common social interactions (“how dare you vile men oppress me by thinking that wavy hair looks more attractive than straight hair!”), may not wish to admit the fact, mankind has made some improvements over the past several thousand years. Additionally, in between spouts of bitching and moaning, those cynical academics who seem enslaved to a tired dramatic dogma of melancholy postmodernism may wish to consider the fact that the fight is still going on, and all sorts of contemporary movements for equality (such as the women’s movement, the gay rights movement etc.) are not benefiting from the idiotic proposition that “all values are relative.” If “all values are relative,” then all of the marches, rallies and parades going across my college campus (and most college campuses) every year for women’s rights, gay and lesbian rights etc. might as well pack up and go home, because apparently there’s really no reason at all (morally speaking) to prefer equality of the sexes and marriage equality to women chained in a basement and homosexuals thrown into concentration camps.

But even if all of the above were not true; even if the argument didn’t make an unclear metaphysical assertion, even if it didn’t seem patently false or at least unfalsifiable, and even if it weren’t a deeply offensive, disturbing and repugnant viewpoint from the standpoint of anyone who cares to see progressive equality and protection of human dignity and rights in the future, it would still be wholly irrelevant, because it is simply self-defeating. To see why, simply understand that the hidden axiom of this argument is that social forces (you know, class, upbringing, religion etc. etc. etc.) entirely determine our moral beliefs, with those beliefs having no grounding in rational reflection or fact. Now it becomes clear that this argument has exactly the same amount of force against a moral relativist that it does against a moral realist. Presumably, by his own reasoning, if a moral relativist were born in the American South today, he would be some form of relatively conservative Christian, and therefore certainly not a moral relativist. Ultimately, the argument boils down to the proposition that, at least as far as one’s moral beliefs are concerned, one cannot possibly rise above the influences of one’s socialization and location. These forces are, according to the relativist, the determining factors of our moral beliefs. This is all fine and well, but it would seem a fair assumption that if the moral relativist is bothering to argue with a moral realist (or anyone else for that matter), it’s because he believes that his own viewpoint is, in some sense, more rational, more true (otherwise why is he bothering to have the debate?). If this is the case, then the relativist is suddenly stuck in the position of arguing that his viewpoint is the most rational one, even though he is simultaneously arguing that people’s moral beliefs have nothing to do with rationality at all. There is yet another contradiction here, then, between what the relativist preaches and what he practices.

It’s important when considering this argument to bear in mind the difference between “ethical” questions and “meta-ethical” questions. An “ethical” question is a question such as “is there any crime so heinous that the death penalty is a justifiable punishment for it?” A “meta-ethical” question, by comparison, is more abstract and focuses on what gives ethical propositions their normative force. So, for instance, “what is it about lying that makes it immoral?” would be a meta-ethical question (needless to say, the distinction between the two is sometimes blurry). A relativist may wish to say that specific moral commands (don’t kill, don’t steal etc.) are relative to culture while at the same time saying that this is the objective truth, meta-ethically speaking. In this way a relativist may hope to avoid the self-undermining nature of his own theory (this issue will be settled once and for all in my next and final post). This is, however, a bizarre move on the part of the relativist; bizarre enough to simply be labeled ad hoc. Relativism seems to collapse the distinction between ethical questions and meta-ethical questions because it is essentially connected to the notion that morality is not a real area of rational inquiry. Because meta-ethics is concerned only with what gives ethical commands their normative force (which obviously involves the use of rational persuasion), it winds up being a non-subject according to relativists; the very essence of their position is to maintain that morality simply has nothing to do with reason. Thus even if the relativist tries to avoid making this an explicit part of his position, for a relativist to use the distinction between ethics and meta-ethics as a defense against the view’s otherwise self-undermining character betrays the spirit of relativism and is a doubtful strategy at best.

I’ve tried to keep this middle post on moral relativism short because the final post will require quite a bit of room. In my next and final post on this matter I hope to demonstrate why relativism is not only self-undermining (at least in spirit, even if a relativist might find some way to slip out of outright logical contradiction) but utterly incoherent as well.

“Man I don’t know no more, am I the only f**kin’ one who’s normal anymore?!” -Eminem, “My Dad’s Gone Crazy”

In the introduction to his book Ten Philosophical Mistakes, the American philosopher Mortimer J. Adler discusses the way that philosophy is, and always has been, in some sense “for the people.” While many subjects in contemporary philosophy have reached quite extreme degrees of abstraction, there are certain core areas of philosophical thought that always have been, and always will be, areas that all people must, consciously or unconsciously, confront for themselves. Questions concerning the meaningfulness of life, the rightness or wrongness of certain actions, and the degree to which we balance individual liberty with civil responsibility are all examples of these kinds of philosophical areas. In contemporary society, however, I have noticed, particularly throughout my education, that there are certain philosophical opinions that have become quite prevalent; so prevalent, as a matter of fact, that I am tempted to call these opinions “street philosophy” or (perhaps more pejoratively) “social dogma.” These views are important parts of the ‘common sense’ of our age, the views that purportedly any educated person ought to know and accept as true, and the rejection of which constitutes, if not an affront to rationality, then either a lack of thoughtfulness or an allegiance to archaic religious dogma.

There is one particular ‘commonsense’ philosophical proposition that is extremely common amongst Americans and many Westerners today, and is so taken for granted that to dissent from it can, in my own experience, condemn one to incredulous stares and the taking of offense. This view is known as “moral relativism,” and is without doubt the most common ‘street philosophy’ view anyone encounters in ordinary life; since the radical liberalization of the 1960s it has become nothing less than the status quo among the “educated” members of society, particularly those in the Northeastern United States from which I hail. I ought to say at the outset that I speak, in this entire essay, from my own experience and my own experience alone; I have conducted no surveys or studies to see how many academics in American society are relativists, or how widespread the view is amongst our population as a whole in the United States or the rest of the Western world. However, I have encountered this view time and time again throughout the past ten years of my education, from middle school to high school to college, and every time I have encountered it I have grown to loathe it even more. And what I have found to be most interesting about this view is that, as far as my college education has been concerned, only a single liberal arts department has consistently either left this view unstated or openly opposed it; that department is our own (heavily analytic) philosophy department, which is only one more reason that I have so much admiration and respect for the members of our faculty (even if some of them are a little bit nuts sometimes). And furthermore I do believe, and hope to demonstrate in this essay, that there is a very good reason for this, which is simply that moral relativism is a trick, a farce, a sophistical and rhetorical scam. Much like the ‘Postmodernism’ to which it is so closely related, moral relativism survives because its arguments, while appalling to those brave enough to pick them apart, appear convincing and are easy to make, and for that reason relativism has snowballed into one more view that everybody “knows” must be true, and which it seems to me few people have the courage to doubt.

Perhaps you have noticed a trace of bitterness in my voice; there is no doubt about it: I absolutely hate this view, and one of the significant challenges of my intellectual life is the fact that so many of my closest and most loved friends subscribe to it in some form or another. That was the reason I included the lovely quote by Marshall Mathers at the beginning of this post; in discussing this view with many people I sometimes wonder if perhaps they’re right and I am nothing short of completely insane. What is most difficult about a debate over moral relativism is that it is bound to circle endlessly, and even after several years of philosophical education and a decent background in the subject it becomes very easy to get so dizzy from this spiraling that one forgets what one is fighting for. But I seek to be ambitious in this post; if I am successful, I will demonstrate not only 1.) that there is absolutely no rational justification for holding the view whatsoever (meaning that all arguments in favor of it are invalid or at least unsound), but even more strongly 2.) that moral relativism is literally incoherent, meaning that its very statement, its very framework, implies a contradiction. In doing this I want any reader (particularly any reader who does not know me personally) to understand a.) that I do not mean to be personally offensive in this post, but you are reading the pent-up rage of someone who has been the butt of about seven years of relativist snobbery, b.) that I am not writing this post in order to defend “religion” or “traditional morality” or any such politicized concept of our modern social scene, but rather only the moderate path of rational commonsense that desires nothing but the truth, nothing more, nothing less, and c.) that I concede that there are many relativists of goodwill and I feel no particular ill will towards them whatsoever. I aim in this post only to help free our contemporary intellectual environment from this vile parasite, this disgusting infection, this despicable cancer that is moral relativism, and while that may be a lofty goal for a humble blog post, such a change must begin somewhere.

The introduction over, the discussion of moral relativism will span several posts and will take three parts: 1.) the fallaciousness of the primary argument in favor of this view, 2.) the fallaciousness of other informal impetuses for adopting this view and 3.) the fundamental incoherence of the very framework underlying moral relativism.

I

I noted in my last post on Postmodernism (which is intimately connected with Relativism) a poster in a well-liked teacher’s class that read “other cultures are not failed attempts at being you; they are unique manifestations of the human spirit.” This poster very succinctly gets across the main motivation for adopting moral relativism, which is simply the “cultural relativism” common to Anthropology and the other Social Sciences. In the context of Anthropology or Sociology this axiom (which is simply the admission that different cultures have different ideas about right and wrong) makes perfect sense; little work can be done in the way of understanding an unfamiliar society if we are constantly focusing on how morally repugnant we find this or that practice of theirs. There is an underlying noble principle to Sociology and the other Social Sciences (as insane as their advocates may sometimes seem), which is that people, particularly vast groups of people, do not engage in behavior because “they’re just bad people;” overall, it would seem most Sociologists believe, most people share a certain set of core categories of experience that dictate their individual behavior and their social organization. Thus when we see a widespread behavior of any sort, whether we find it morally reprehensible or not, what we ought to focus on is how the social stratification and organization of a particular society or culture leads to this behavior, and how it makes sense in a particular social framework. Cultural relativism is thus a perfectly valid and understandable methodological framework and assumption for those trying to do Social Science; about that I have absolutely no complaints.

The problem, however, is that all sorts of people take this methodological assumption and extend it improperly, taking it not only as a descriptive fact about people (“gee, look, all sorts of people believe all sorts of different things about right and wrong”) but as a normative fact about ethics. That is, social scientists, postmodernists and most relativists take the fact that many different people have many different moral beliefs and conclude that morality or ethics itself, that which those moral beliefs are about, must also be relative, with no particular set of them being superior to any other. Such a conflation of descriptive propositions (simple observations or descriptions of things) with descriptions of propositional attitudes (descriptions of people’s beliefs about things) is understandable given how abstract, technical and boring (to people who aren’t philosophy nerds) the distinction between the two is; what is not understandable is how tenaciously people will stick to this conflation and insist it must be true, no matter how much one attempts to talk them out of it. But I have finally learned, I think, that the reason for this conflation, and the reason it appears so damn convincing though it is nothing more than a lie dressed up in the clothing of truth, is twofold: first, a general lack of clarity in the implicit, central argument in favor of moral relativism; second, a general unfamiliarity with truth-functional logic. Let me now demonstrate both errors:

The primary argument given for moral relativism, if it were stated formally, would probably look something like this:

1.) If what is considered “right” or “wrong,” “moral” or “immoral” varies widely across culture, place, time and situation, then morality is relative.

2.) What is considered “right” or “wrong,” “moral” or “immoral” varies widely across culture, place, time and situation.
______________________________________________
(c) Morality is relative

(a standard modus ponens, of the form “If A, then B; A; therefore B.”)
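For readers who haven’t studied truth-functional logic, a quick truth table (a standard textbook device that I am adding here purely for illustration; it is no part of the relativist’s own presentation) shows why this form is valid:

\[
\begin{array}{cc|c}
A & B & A \rightarrow B \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

The only row in which both premises (A → B and A) are true is the first, and in that row B is true as well; a modus ponens can therefore never carry us from true premises to a false conclusion. The trouble with the relativist’s version of the argument, as we are about to see, lies not in its form but in its premises.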

This is nothing more than the common-sense conclusion that goes from cultural to moral relativism, according to most defenders of that view. However, what is often left unspoken in this argument (and for good reason, since making this tacit premise explicit renders the argument fallacious) is the presumption that ethical propositions are true just in case individuals or societies say they are; in other words, ethical propositions are entirely human “social constructions,” and have no real truth value except as widespread assent to certain historic opinions. Thus ethical propositions (and the entire field of ethics, for that matter) are not even in the realm of “truth” or “falsehood;” they instead reduce to propositions along the lines of “I like chocolate” or “Bob likes vanilla.” Seen in light of this underlying assumption, the above argument really should be stated more like this:

1.) Ethical propositions are not “true” or “false,” but derive their meaningfulness only from individual or collective human interest.

2.) By (1), if what is considered “right” or “wrong,” “moral” or “immoral” varies widely across culture, place, time and situation, then morality is relative.

3.) What is considered “right” or “wrong,” “moral” or “immoral” does vary widely across culture, place, time and situation.
___________________________________

(c) Morality is relative

So what is the problem here? The problem is that this argument blatantly begs the question against the moral realist; exactly what is at stake between the moral realist and the moral relativist is whether or not ethical propositions can be considered “true” or “false,” with the moral realist answering “yes” and the relativist answering “no.” Furthermore, when we actually bring this core, unstated presumption underlying the relativist’s argument into the light, we see that there is little to no reason whatsoever to accept it. Why should we think that ethical propositions are not at all subject to rational constraints, that we cannot rationally decide whether it would be better to scratch one’s nail or to see an entire ethnic group slaughtered? The support for this presumption usually comes from the assertion that ethics is obviously an entirely human construction; “if there were no humans there would be no right or wrong,” as I have been snobbishly told by so many people at so many times. Of course we could let all of the religious philosophers out there throw a hissy fit and insist that right and wrong are somehow based on God’s commands (whatever they might be), but we don’t even have to bother with that; why exactly should we think that just because a particular phenomenon is a ‘human construction’ it is in no way subject to rational constraints? As the United States, England and the rest of the Western world are currently learning, the fact that the economy is an entirely ‘human construction’ (there would be no economy if there were no humans) does not mean that there are not perfectly objective laws governing the growth and decline of markets, inflation or prices. And furthermore, as any economist will tell you, these economic principles can be derived, explained, made systematic and used to predict market fluctuations in the future; and all of this wondrous rational analysis concerns an entirely ‘human construction!’ So who the hell cares whether there would be ‘right and wrong’ if humans had never existed? As a matter of fact we do exist, we do make value judgments including moral judgments, and these judgments do have real consequences for the world we live in. Given this, it would seem prudent to put some effort into understanding what it is that gives support to particular moral judgments, and furthermore into seeing what the implications are for how we ought to act (as rationally moral agents) in the situations we encounter.

A more skilled defender of moral relativism than most of those I have encountered might counter this along a more Wittgensteinian line: the very language of ethical propositions, you see, the very notions of ‘ought’ and ‘should,’ are unclear and go beyond the limits of what can be clearly expressed. As Wittgenstein stated rather depressingly (standard practice for Wittgenstein), perhaps the drive to make moral judgments represents nothing more than a widespread idiosyncrasy in the nature of humans, but one forever outside the domain of true rational determination. To this, though, my counterpoint is that Wittgenstein, much like Russell and the Logical Positivists, was far too zealous in his quest for “scientific clarity” in all sorts of different areas. No one ever said that ethics had to be a “science,” that it had to share the same degree of systematic clarity as physics or chemistry. All that is necessary to show that moral relativism is false, at least of the sort propagated by endless legions of contemporary liberal academics, is to demonstrate that we can rationally decide between mutually exclusive moral courses of action, regardless of compassionately descriptive judgments about the reasons for a person’s behavior. Thus under this view Brutus’s despicable betrayal of his friend Julius Caesar was unethical, regardless of whether he “thought” it was right or of any other extraneous factors that our overly soft modern culture would want to use to excuse it (for instance that Brutus had a bad relationship with his father, came from a difficult socioeconomic background, or any other pity points).

The grounds for rationally determining which moral course of action is superior in a given situation must, of course, be spelled out in some form, even if they cannot be made as systematic as the grounds for a scientific theory or mathematical theorem (and I will spell out my own in later posts, though I must continue clearing away the giant mounds of garbage left by moral relativists before I can do so). Nevertheless, the Wittgensteinian critiques of ethics as going ‘beyond the limits of language’ or ‘not being a systematic science’ are, as far as I can tell, simply promoting a well-entrenched false dichotomy that is itself without any real foundation. Modern philosophers, regrettably, though (at least in the Analytic strain) they are often harshly critical of Postmodernist and Relativist tendencies, are also very prone to speaking as if there are simply two epistemic categories, “rational” and “irrational,” seeking, I suppose, to reduce much philosophical inquiry to a sort of “yes” or “no,” 0-or-1, computer-like clarity. For all the scientists and mathematicians out there this may be fine, but as far as I’m concerned, when it comes to philosophy (and here I must admit that I believe our Continental friends across the pond are usually better at getting this than we are over here in the good ol’ USA) we don’t have to have everything spelled out so exactly that we could turn it all into a bunch of equations and use it to build a spaceship; just because certain kinds of judgments or propositions cannot be spelled out in that way does not automatically mean they are simply “irrational” or “non-rational.” They are merely judgments and propositions about which we cannot have as much certainty, and it is for this reason that our modern society (at least in America) generally follows John Locke’s lead in not killing or imprisoning people for having the wrong ethical (or religious or spiritual or political) beliefs. But that these judgments are simply entirely non-rational, that there is no hope of distinguishing certain moral principles as definitely preferable to others, is a thesis that is far too strong, and manifests not cautious rational reflection but mere extreme cynical skepticism. While some modern academics would have us think that increasing cynicism and skepticism is the best way to come to know the truth, as far as I can tell there’s no reason to believe this, and it would be best for such academics to cheer the hell up and put a little more stock in the power of human reason.

To finish the first part of my critique of relativism (I’ll get to part II as soon as possible in my next post), I suppose I ought to briefly note what happens to the argument stated above when we alter it slightly. Thus the argument becomes:

1.) If moral relativism were true, then we would expect to observe a range of different moral perspectives among different cultures, times and places.

2.) We do observe a range of different moral perspectives among different cultures, times and places.

________________________________________________

(c) Moral relativism is true

The reason I include this argument in my discussion of moral relativism is that it is another common variant of what seems to be the stereotypical relativist formula (such-and-such culture/group of people thinks this + such-and-such culture/group of people thinks that = I GUESS THIS ENTIRE AREA OF REFLECTION IS JUST A RELATIVE ‘SOCIAL CONSTRUCTION’ LOL). It also commits a blatant logical fallacy that renders it entirely worthless, but it’s a fallacy that many people who have not studied logic are unfamiliar with: the fallacy known as ‘Affirming the Consequent.’ Basically, just because we observe lots of different moral perspectives across different cultures, times and places does not mean that all of these perspectives are true or rationally equal (consider the fact that some people out there still believe that the earth is flat or that we never landed on the moon, and then consider the fact that these preposterous ideas do not change the fact that the earth is a sphere and that we put a flag on the moon, but merely cast doubt on the sanity of those who hold them). There is tremendous diversity of opinion in all sorts of areas of science, for instance, but this diversity of opinion does not prove that some theory or other, whether it’s yet been spelled out or not, is ultimately the true one; it only proves that very complex debates are prone to create widespread and differing opinions. (One matter I have not addressed, simply because I do not wish to sound like an arrogant snob, is that one must also take into account whose opinions we are talking about when it comes to diversity of opinion; Billy Bob from backwoods Arkansas may think that evolution is a big load of hogwash, but if Billy Bob has no more than a third grade education, his opinion on the matter is not much above ‘worthless,’ and the same applies to a condescending Art History major who once informed me that I had to ‘open my mind’ and ‘be more tolerant of diverse moral views’ shortly after she had informed me that she had never once opened a book of philosophy in her life. But I digress.)

In any event, when one ‘affirms the consequent,’ one reasons as follows:

1.) If A, then B

2.) B
_______________
(C) Therefore A
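Consulting the same sort of truth table I gave earlier for the conditional (again, a standard textbook illustration of my own adding, not anything the relativist supplies), the crucial row is the one in which A is false and B is true:

\[
\begin{array}{cc|c}
A & B & A \rightarrow B \\
\hline
F & T & T
\end{array}
\]

In this row both premises (A → B and B) are true while the conclusion (A) is false; that single counterexample row is enough to show that the form can carry us from true premises to a false conclusion, and it is therefore invalid.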

A glance at the argument in the form I have just written it (which, as I stated, is another form in which I have had it thrown at me) reveals that it commits exactly this fallacy, moving, as it does, from the consequent of the conditional (we observe many different moral perspectives) to its antecedent (moral relativism is true), and it is therefore no more a rational justification for holding the standpoint of moral relativism than a loud belch or fart. The way in which relativists often try to salvage this argument is to state that it is not intended as a deductive, a priori sort of defense but rather as an inductive argument relying on the ‘evidence’ of the social sciences. The use of the word ‘evidence’ is a hallmark of the scientism of our modern age (a subject that part II in this series of posts will concern), the implication being that apparently moral relativism is the best ‘scientific’ conclusion to reach, as if science can reveal to us not only the inner workings of the world around us but somehow the moral fabric of that world as well. This standpoint seems to be emerging thanks to the work of scientists such as Stephen Hawking and pop-philosophers (or whatever) such as Sam Harris, the former of whom knows nothing about philosophy (his brilliance in the field of physics notwithstanding) and the latter of whom apparently knows and does not care, considering the release of his recent book (though I confess not to have read it, so I will abstain from deriding it here; I assume he has some way to get around the objection I’m about to raise to this method, but to address it would require getting into sticky areas of religion and religious morality, which I wish to avoid doing for the time being). The problem with this sort of thinking, however, was raised centuries ago by the philosopher David Hume, and concerns the frightful difficulty of going from an ‘is’ to an ‘ought’ (well, sort of frightful; Hume was what is called a nominalist, which is why most of his problems come up, but I will, once again, have to wait until another post to address that incredibly important (and incredibly overlooked) archaic philosophical debate).

Basically, knowing that such and such is the case does not tell us that such and such ought to be the case. In the same way that the historic prevalence of monogamous, heterosexual relationships does not in and of itself prove that this arrangement is in some way morally superior to, for instance, a monogamous homosexual relationship, the fact that widespread diversity of opinion exists in matters of morality does not in and of itself prove that relativism is the framework underlying moral judgments. Restating what has been said previously to some extent: it is entirely possible that those Southern plantation owners who really thought there was nothing wrong with owning and selling their fellow human beings as property were simply horribly and tragically wrong, the same way that all the generations of humans who thought that the earth was flat, or that the sun and moon were conscious, living beings, were simply mistaken. Thus stated as a deductive argument the second formulation of the relativist credo is devastatingly fallacious, and stated inductively it is blatantly unsound. Having cleared away this argument, the conclusion to draw is that, appearances aside, the fact of cultural relativism, the fact that ideas about morality and ethical actions vary widely based on time, culture and situation, is utterly irrelevant to the study and practice of ethics, and using it to demonstrate the accuracy of relativism in any way cannot help but be fallacious and useless, plain and simple.

To conclude, please do not misunderstand what I hope to have shown in this first essay. There is no doubt that cultural and social relativism are indirectly related to the philosophical field of ethics; “indirectly” insofar as these areas help to explain widespread descriptive facts about usual human behavior. What this means is that we can be less judgmental of the characters of those Southern plantation owners; while condemning their actions as wicked, we can ‘love the sinners and hate the sins’ in some sense, by judging them less harshly for their actions given the time they lived in, as compared to the time we live in now. But reserving our judgment on their characters does not have to mean that we judge the actions of slave owners, racist bigots or any other rationally morally offensive people as anything less than wicked; an important lesson to learn if we hope to preserve a just and moral society for the future.

 

William of Ockham was the late medieval philosopher who originated the so-called “principle of parsimony” (better known, it seems, as “Ockham’s razor”), which states that we ought to postulate only those entities that are strictly necessary in order to explain some phenomenon, and furthermore that the simplest explanation for something is usually the right one. Most educated people living today are familiar with this principle in some form, and philosophers who work in the philosophy of religion seem, in my experience, to make use of it more than anyone else (here I must also briefly note the irony of the way in which the principle is often employed by Atheists against religious believers, considering that it was a Christian theologian who formulated it in the first place). But in any scientific field, and many areas of philosophy as well, this principle is often brought forth as an argument in favor of one view over a competing one, on the grounds that the former is a ‘simpler’ explanation, and thus to be preferred.

But there is a problem, as far as I can tell, with this principle, which will be the subject of the following post. The problem is not with the principle itself, nor with its application to the natural sciences, in which it is clearly a very important guiding axiom of inquiry. The problem is not when scientists use this principle; it is when philosophers use it, and particularly when they use it with the impression that it constitutes an automatic sort of “killing blow,” an ace in the hole against which one’s philosophical opponent cannot hope to stand a chance. A perfect example of this sort of use of the principle can be found in Bertrand Russell’s 1912 classic The Problems of Philosophy, in his argument against Idealism. Russell writes:

“There is no logical impossibility in the supposition that the whole of life is a dream, in which we ourselves create all the objects that come before us. But although this is not logically impossible, there is no reason whatsoever to suppose that it is true, and it is, in fact, a less simple hypothesis, viewed as a means of accounting for the facts of our own life, than the commonsense hypothesis that there really are objects independent of us, whose actions on us cause our sensations” (Bertrand Russell, The Problems of Philosophy, 23).

Much as I have never been particularly inclined to accept Idealism, I recall being positively underwhelmed when I read this argument of Russell’s, and downright disappointed when I read him continue on without giving the matter much further thought (I am thankful that G. E. Moore came forth with his own, much more systematic debunking of Idealism, or I fear that that vile view and its infernal legions would still hold the great sway over the philosophical community and general academia that they once did). My disappointment did not stem from doubting Bertrand Russell; that the man was a genius is a fact about which there can be no doubt. Rather, my disappointment came from having read Bishop George Berkeley’s classic A Treatise Concerning the Principles of Human Knowledge, the original manifesto of Idealism, in which he writes:

“were it necessary to add any farther proof against the existence of matter after what has been said, I could instance several of those errors and difficulties (not to mention impieties) which have sprung from that tenet. It has occasioned numberless controversies and disputes in philosophy, and not a few of far greater moment in religion…it is very obvious, upon the least inquiry into our thoughts, to know whether it is possible for us to understand what is meant by the absolute existence of sensible objects in themselves, or without the mind. To me it is evident those words mark out either a direct contradiction, or else nothing at all” (George Berkeley, A Treatise Concerning the Principles of Human Knowledge, II pp. 24).

Let’s be honest, ladies and gentlemen: much as none of us today would really wish to be Idealists, the man has a point. Accepting the existence of mind-independent material objects brings us the mind-body problem, the question of primary and secondary qualities and the relation of sense data to physics, not to mention the endless bickering between materialists and their wide array of opponents in matters of religion and spirituality. Might it not just be ‘simpler’ to let matter go, to give up the fight and hold that there are no tables and chairs, no atoms or quarks after all, and that perhaps we really are just mind-things into which God is beaming an endless stream of thoughts and sense experiences?

I hope you’ll trust me when I say that I mean this latter suggestion in jest, but I believe the contrast between these two passages illustrates my point; both authors are, in some sense, claiming that the principle of parsimony is on their side. In particular, Russell is claiming that the commonsense materialist view has the advantage of epistemic parsimony while Berkeley claims that the Idealist view has the advantage of ontological parsimony, and sadly, both are right. Berkeley’s mention of the “numberless controversies and disputes in philosophy” (the mind-body problem and the problems of physics that I mentioned being only a few) illustrates his at least implicit perspective that talk of ‘mind-independent matter’ is really just troublesome hogwash, and his view, which eliminates two extensive classes of objects (material objects and abstractions) from our ontology, is therefore simpler. Russell’s claim to parsimony, on the other hand, is that Idealism, as ontologically parsimonious as it may seem, is an extremely counterintuitive view, one that goes against our most deep-seated beliefs about the nature of the world in which we live, and that at the end of the day we ought to shut up and simply accept what seems like the most direct explanation rather than jumping through mental and epistemological hoops in order to justify such a bizarre perspective.

But it isn’t only in this relatively outdated philosophical debate that we see such use of the principle of parsimony; there is another area where we see “Ockham’s razor” being brandished more like a battle axe, with multiple sides claiming to have beheaded their irrational philosophical nemeses. The starkest example of this behavior is in the philosophy of religion, which I have already mentioned as an area that, in my experience, draws a great deal of use of this principle. A simple example is the perennial debate between Atheists and believers over such ‘modern miracle’ stories as the apparitions at Fatima in Portugal in the early 20th century, as well as miraculous accounts of such modern Saints as Padre Pio (please note in what follows that I take no stand as to which side has the better case in these debates; I am using them only to illustrate a philosophical point). Accounts such as these, unlike their historical antecedents stretching back to the time of the Gospels and other religious texts, took place in relatively modern times, when the details of modern science and medicine were finally being codified and understood. Furthermore, many of these supposed miracles were witnessed by dozens, hundreds, thousands, or even hundreds of thousands of people, and seem nevertheless to defy explanation through ordinary scientific means. The debate, then, is between the skeptics and Atheists and their religious opponents, and once again both sides claim to have the principle of parsimony on their side. Specifically, the Atheists very fairly point out that the ontology of naturalistic science, with its atoms and quarks, molecules and cells, stars and planets etc., leaves no room for apparitions of the Virgin Mary, angels, demons, ghosts or God, yet has very reliably allowed us to make massive strides in our understanding of the world around us. Oughtn’t we then eliminate such troublesome entities from our ontology, and ascribe such fantastic, miraculous tales to far more ordinary modes of explanation?

Yet the religious believers shoot back with their own variant of the principle of parsimony, and once again not without some plausibility; presumably, the more witnesses who observe a particular event, the more likely it is that that event occurred, and moreover occurred in a way pretty similar to the way in which most witnesses described it. So when several thousand people seem to see an apparition of the Virgin Mary before them, why would we not simply conclude that several thousand people did, in fact, happen to see the Virgin Mary? Why bother jumping through epistemic hoops trying to give accounts concerning “mass hysteria” or “mass hallucinations”? Why not simply conclude that the people saw what they said they saw? After all, “mass hysteria,” “mass hallucinations” and all other such naturalistic accounts of these miraculous tales (which, though I am personally agnostic when it comes to Virgin Mary apparitions, I must confess often seem rather far-fetched to me, as scientific as they try to be) are only invoked for fantastic accounts of the supernatural; there never seem to be “mass hallucinations” of perfectly ordinary, benign events, or even of fantastic or tragic occurrences that are nevertheless entirely ‘natural’ in character. Once again, both sides are claiming to have the more ‘parsimonious’ explanation; who really does?

My own reflection, having observed how much use (and, in my opinion, abuse) has been made of this principle, is that it is rather like a great many concepts and ideals being thrown around in our social media and culture today; concepts and ideals such as “free speech,” “personal liberty,” “family values,” etc. All of these are ideals that just about anyone you ask is “for;” we rarely encounter a person in favor of censorship, oppression or gratuitous turpitude. Instead, what we encounter are vast groups of people using these same words in very different ways, ways so different that I personally suspect different groups don’t even mean the same things by them. Personally (and in a blatantly self-serving spirit) I think that these vast camps, these diametrically opposed groups, ought to stop talking and start studying a little bit more philosophy, and perhaps engage in some actual deep thought for once. What constitutes “freedom of speech,” and how do we balance people’s right to profess whatever belief they want against the corruption of this right as a defense of outright lying by those with power? While obviously people have a right to self-determination, how do we balance this with considerations about the way in which one’s self-determination does, in fact, influence others; for instance, the way that a person who smokes can ultimately put tremendous strain on the healthcare and insurance system should they become sick, which obviously will impact others? And why, exactly, are we all entitled to these natural rights, these inalienable privileges that we hold dear? Certainly I value them as much as anyone else and would never want to see any of them taken away; but I often wonder what the grounds are, specifically, for believing that our fellow humans are entitled to a degree of respect and autonomy. I wonder what these grounds are, not in a spirit of skepticism, but rather because I feel that analyzing them more, making them more systematic and reflecting upon them further might help us to achieve a better society overall, as well as shed light on the ethical questions we all face in our cultural and personal lives.

But to conclude, what is there to say about parsimony, about Ockham’s famous, much used, and perhaps now rather dull, razor? I think that the debates in the philosophy of religion, as well as the classic interchange between the Idealists and the Materialists, illustrate the way in which the principle of parsimony is supposed to work: it is after the consideration of all the other factors that weigh into our judgment of the rationality of a philosophical position that we must finally take parsimony into account. Parsimony is “the icing on the cake,” in a manner of speaking, the tie-breaker that determines the winner of the rationality contest once other considerations, such as internal consistency, have been weighed. Seen this way, the principle does emerge as a definite judge between opposing views; when we take parsimony into account on top of those other philosophical factors, it serves as a very illuminating and powerful form of rational persuasion. But, as I believe the passages from Russell and Berkeley show, taken on its own parsimony is relatively unhelpful. Like all philosophical positions and methodologies, parsimony ought, in the end, to be weighed in the context of an overall framework that defies simple reduction to binary, black-and-white distinctions, or to so simple a tool as a humble razor.

Here comes a polemical one, ladies and gentlemen; you have been warned.

Years ago, back in high school, I had a teacher who had a poster in her classroom, an artistic depiction of people celebrating around a fire, on which was written “other cultures are not failed attempts at being you; they are unique manifestations of the human spirit” (that quote is by someone famous, though I’m not exactly sure who). Much of the emphasis in this particular class was on understanding other cultures, avoiding ethnocentrism and acknowledging that people have widespread and varied belief systems, and that we must avoid holding the view that our own cherished culture and traditions are “better” than others. All of this normative development on the part of the class was fine with me; like the majority of students I knew, I was all for “blending cultures,” “promoting diversity” and “celebrating equality.” What a shock it was to me, then, when I learned about men burning their (multiple) wives alive with full acceptance and approval in many parts of the Middle East, the caste system in India and the genocides in Rwanda, Bosnia and the Sudan. These practices were not the actions of limited groups of individuals, nor were they the actions of corrupt, autocratic governments. Instead they were perpetrated or approved of by many thousands or millions of people, and according to their own “webs of meaning” (to use the fashionable jargon of Social Constructionism) these actions were completely justified and acceptable. In the context of these horrors, I became very cynical about the utopian message printed on that poster; it seemed to me to be something of a bad joke.

My cynicism about this particular “way of seeing things” primarily has to do with what I take to be an obvious inconsistency, which is that it extols the values of tolerance, equality and diversity while at the same time denying that any set of values has more worth than any other. Of course I do support equality, tolerance, diversity etc., yet the very reason I support these values is that I believe they are better values than, say, inequality, prejudice and uniformity, “better” implying that there is at least some objectivity in terms of comparing values (particularly moral values) to one another. The rejection of this thesis, of course, is precisely the ethical viewpoint known as “moral relativism,” an utterly appalling view that has achieved an unbelievable degree of acceptance in our society today. But the point of this particular post is not to attack that view specifically (I will do so in later posts, just you wait). Rather, the target of this post is the primary contemporary framework underlying that view, labeled “Postmodernism,” which has latched onto contemporary academia like a vile parasite, and as far as I can tell has not received enough of the criticism it so justly deserves. It would be nice to begin by defining the view that I am criticizing (that’s what you’re supposed to do, anyway), but I must confess I am unable to do so, not because I haven’t tried but because the Postmodernists themselves don’t seem to have any idea what their position is. Postmodernism is a perspective spanning the fine arts, architecture, literary criticism, the social sciences and philosophy, and in this sense has a remarkable ability to morph into some other domain whenever one attempts to criticize it; this also seems to give it the curious characteristic of lacking any real, essential definition. I know this from having taken out books at the library, read articles in research journals, and asked self-described ‘Postmodernists’ what, the hell, exactly, Postmodernism is, and I have, each time, received almost entirely different definitions that seem to have little in common with one another.

But I have come to see that this is the point, in some sense, of the Postmodernists’ position. The Postmodernist is one who rejects hard and fast, rigid definitions, who questions the underlying motivations for adopting the dominant strategies involved in more conventional academic discourse. It’s a sort of ‘Zen’ thing, apparently; the answer to the question is that there is no answer; our words are misleading us, our culture and upbringing have biased us, and what seems meaningful is meaningless. All that there is left to do, according to the Postmodernist (at least as far as I can tell, from having read some of their papers and spoken to some of their sycophants), is to have fun playing around with nonsense, as well as to critique our broader social order and see the ways in which dominant power structures use language and ideology to carefully monitor and control the development of our society. And Postmodern theorists, in many fields, discuss the ways in which all of this takes place. For some reason, though, this sacred message would apparently lose all of its force if someone just came out and stated it clearly and concisely. Rather, we need to arrive at it ourselves, as a sort of development that comes from reading the works of the historical ancestors of the movement and meditating on them while we consider their contemporary descendants, along with their often strikingly unclear writing, peppered with big, smart-sounding words used entirely out of context. The language thus serves, apparently, as Wittgenstein put it at the end of the Tractatus Logico-Philosophicus, as a sort of ladder that can be kicked away once understanding is achieved. Indeed, the implication of the Postmodernists seems to be that, like the Zen student who, after being struck by the master’s stick a dozen or so times, suddenly achieves enlightenment, upon following this noble path we will apparently reach a halcyon point of sophistication where we can finally move past our silly, antiquated notions of absolute truth and value and realize that these old ideas were just inhibiting us, enslaving our minds, and that in abandoning them we have at last achieved liberation.

Or, on the other hand, maybe not. Maybe, just maybe, this is really all a bunch of immature nonsense; maybe the Emperor really is naked after all. Postmodernism seems to me to be one giant intellectual scam that a portion of the academic establishment is attempting to pull on a generation of earnest young students. Some years ago, Cambridge University decided to award an honorary doctorate to Jacques Derrida, who is perhaps the quintessential postmodern philosopher, on the basis of his numerous accolades from his colleagues in France as well as his contributions to the humanities and literary theory. Upon learning this, numerous philosophers at Cambridge, along with eighteen renowned analytic philosophers from around the world, wrote indignant letters to the university claiming that Derrida’s work “did not meet acceptable philosophical standards” and was little more than sophistry. Cambridge went ahead and gave the man an honorary doctorate anyway (what can you say? They are Cambridge, after all), leaving many analytic philosophers indignant, while, predictably, the reaction from Derrida himself was that his work was attacked because it criticizes “the rules of the dominant discourse, it tries to politicize and democratize education and the university scene.”

This is exactly the sort of stock, arrogant, hobnobbing garbage that many Postmodernists will spew when one dares to criticize their methodology as inherently flawed or deficient, and it exposes the way in which Postmodernism is itself nothing more than a dogma desperately clung to by skittish academics rather than a systematic, well-thought-out theoretical framework. Notice that in Derrida’s reply there is nothing that engages the substance of the objections, no comment made about the content of the debate itself. Rather there is just a petty, almost whining remark about how the big bad academic authorities are trying to put down a voice of dissent. To one indoctrinated in the Postmodernist creed this apparently seems plausible, but to a person who has not yet achieved enlightenment it is obviously false. Stall the revolution, comrades; “the rules of the dominant discourse” in philosophy are not equivalent to the “rules of the dominant discourse” in politics or government. Believe me, I (and most philosophers I know) really sincerely wish that the common people would defer to our judgment in a whole host of matters (getting paid “the big bucks” sure would be nice too), but in case Derrida or his postmodern apostles haven’t noticed, this isn’t exactly the case. The suggestion that his work was criticized and dismissed as sophistry on the basis of politics, bias, or the preservation of some philosophical status quo (which certainly exists, but in a very different sense from the one Derrida’s whining alleges), rather than because it is simply unclear, unsystematic and hackneyed, is characteristic of the Postmodern approach to criticism: blame the critic for being prejudiced or “just not understanding,” rather than actually answering the questions asked.

This latter point also betrays a curious tendency amongst contemporary Postmodern ‘theorists’ (I would say “philosophers,” but that would be too limited a description): if you read a large enough sample of their work, you’ll find that what looks like a development of striking, radical claims about our intellectual discourse, our ideas and our culture and society (as well as how we ought to read certain texts) is in fact a longstanding pissing contest amongst the foremost Postmodernists in the field. What I mean is simply to note the curious way in which Postmodernists seem to attempt to outdo one another in terms of how radical or implausible a claim they can make and get away with. Michel Foucault suggests that it is ultimately the power structures of the dominant class that determine what is “true” and what is “false,” a radical reinterpretation of the intuitively plausible Marxist thesis that what counts as a serious topic for intellectual study is determined by the bourgeoisie. Peter Berger one-ups Foucault by originating (with Thomas Luckmann) the idea of ‘social construction,’ with the radical suggestion that any social problem (even the destruction caused by a natural event) is ultimately a ‘social construction’ and has no significance in an outside, ‘objective’ sense. Then you get to the really crazy Postmodernists like Luce Irigaray, a radical feminist psychoanalyst who suggests that the modern field of Fluid Mechanics preserves patriarchal power structures by ‘privileging’ solid, rigid, ‘masculine’ objects (‘masculine’ because the erect penis is solid and rigid, and, as Freudian psychoanalysis “definitively proves,” all human thought is subconsciously concerned with sex whether you know it or not) over fluid ‘feminine’ ones. Scoffing at any one of these preposterous suggestions is taken as evidence not of healthy skepticism, but of a mind enslaved by the antiquated modernist or Romantic notions of absolute truth and value, which we ought to abandon in favor of a healthier, egalitarian nihilism.

I have attempted to be ironic in this post by subjecting Postmodernism to something of a Postmodernist critique itself: paying little attention to that approach’s philosophical content and merely noting the absurd degree of undeserved dogmatic power it seems to wield. I speak to that power from my own experience in secondary education as well as in college; throughout secondary school I was subjected to all sorts of “forward thinking” educational programs, their methodology lifted from the latest Postmodernist, constructionist and ‘whole language learning’ paradigms, which, looking back, I now have the good sense to see were thoroughly mediocre in terms of how much I actually learned compared to students enrolled in the “Traditional” programs at our schools. It was not until I reached the end of high school that I actually began to question this framework, and at many times in my phase of questioning I felt the way I suppose an Atheist feels at Church, that is, completely out of place. I would debate entire classes of my fellow students, who, upon hearing that I believed certain actions (rape, infanticide, slavery etc.) were simply immoral, regardless of who did them and when, and without giving a damn what their “culture” said about the matter, were baffled, and often looked at me as if I had claimed to be a space alien. My fellow students and teachers would then play the classic relativist/subjectivist/postmodernist (or whatever) card of citing some culture that regularly practiced exactly the sort of action I find horrifying, and triumphantly ask if I “really thought an entire society could just be wrong,” to which they would again be baffled (and usually a bit irritated) when I answered with a big, obnoxious “YUP!” As I noted, relativism is, as far as I can tell, a huge part of Postmodernism, and will receive its own bashing within my next several posts, so I don’t want to get into it too much now. Suffice it to say that all of these Postmodern dogmas (relativism, constructionism, structuralism etc.) were clung to like a cherished creed by so many of my fellow students and teachers (though they frequently didn’t consciously recognize exactly what it was they were clinging to), and rebelling against them has been, for me, one of the most profound experiences of playing the deviant.

But now for the bottom line: what support does the Postmodernist have for his own position? Here I will again, for the time being, ignore moral relativism, since it will be the subject of vitriolic later posts of mine. Instead let us briefly consider “anti-foundationalism,” the viewpoint that human knowledge ultimately lacks any certain belief or principle as its fundamental basis. I’m not keen on this view in any form, but the really relevant variant of it is the sort supported by Derrida’s notion of différance, which holds (in this context) that ‘knowledge’ is always given its foundation by a certain social or political context, in a way analogous to that in which words acquire their significance from the context in which they are used (a notion similar to Wittgenstein’s ‘language games’). Therefore, it follows, apparently, that ‘philosophy’ as its founders (or its contemporary American, British and Australian practitioners) conceived of it is impossible; the “web of meaning” in which we are trapped is simply too sticky for us to escape, and in any case our language, on this view, has been radically cut off from the world to which it supposedly refers. What is important to keep in mind about the fundamental difference between any variant of Postmodernism (and what seems to me to be its bare, naked distinguishing factor, all the rhetorical and dogmatic mumbo-jumbo aside) and other “Traditional” philosophical views is that Postmodernist perspectives take the stance that we are simply incapable of escaping certain essentially blinding forces, usually social or cultural ones. The Postmodernist ultimately believes that our socialization and acculturation have completely fixed even our very understanding and experience of what it is to “know” things, such that we can never “escape” the blinding force of our socialization and must learn to theorize and pursue knowledge within it. Given the heavy emphasis on the ‘social construction of reality’ also present within this view (Postmodernism is the most recent of the major sociological approaches, and ‘social construction’ predates it, but the latter has strongly influenced the former), ‘learning’ takes on a rather different meaning than it does under a more traditional approach, usually something along the lines of “learning how to create meaning” (the basis of the very popular ‘whole language learning’ approach in reading education, an approach largely unsupported by scientific research).

After years of education, mixed with my own research and reflection, this seems to me to be the essential philosophical framework of Postmodernism, which does not consist of arguments per se so much as of the implications of a basically very simple set of philosophical assumptions. Having done my best to present it concisely but accurately, please allow me now to show why this framework is, all things considered, pretty much demonstrably false. In the first place, notice that this variant of anti-foundationalism, based as it is upon a sort of cultural relativism, is radically self-undermining: it claims that we are trapped by our socialization or upbringing and therefore cannot understand the “objective” state of the nature of humanity and human knowledge, and in doing so claims to understand something about the “objective” state of the nature of humanity and human knowledge. Self-reference is a problem that plagues all such radical generalizations; simply ask yourself whether the proposition “there is no truth” is itself true or not, and you will see why radical relativism is blatantly false to anyone concerned with having consistent beliefs, rather than with having their Postmodernist buddies in the department think that they’re cool. In fact sociology (which I am well acquainted with, and which has contributed greatly to the Postmodernist ‘school’) makes all sorts of ballsy generalizations about people that seem to refer to basic, essential characteristics of humanity: that we are “social beings,” the Thomas theorem, hell, even saying that people “socially construct” some issue (deviance, obesity, race etc.) is saying that all people engage in a certain process, thereby implying that it is in the nature of humans to engage in this process called “social construction.” Thus the tough-minded Postmodernists who think that silly talk of innate human nature can be coherently abandoned do nothing more than reveal their own lack of systematic rigor in formulating their philosophy; the very core of the Postmodernist ethos is incoherent, which on its own ought to be enough to cast this view into the gutter.
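For readers who like their reductios spelled out, here is a minimal formalization of that self-reference problem (my own sketch, treating “is true” as a predicate $T$ over propositions and assuming only the ordinary truth schema $T(P) \Rightarrow P$, both of which the relativist’s slogan itself presupposes in order to be stated at all):

$$
\begin{aligned}
&\text{Let } P := \neg\exists p\,T(p) \qquad (\text{``there is no truth''}).\\
&\text{Suppose } T(P). \text{ Then } \exists p\,T(p) \text{ (witness } P\text{); but the truth schema gives } P, \text{ i.e. } \neg\exists p\,T(p).\\
&\text{Contradiction; hence } \neg T(P)\text{: the slogan, if assertible at all, is false.}
\end{aligned}
$$

The relativist can of course retreat to a restricted slogan (“there is no truth, except this one”), but then the thesis is no longer the sweeping one that made Postmodernism sound so daring in the first place.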

Obvious incoherence aside, this sort of balls-out social theorizing might seem intuitively plausible, but upon further reflection it is such a massive oversimplification that it can almost be dismissed as simply false. There is a curious idiosyncrasy amongst Postmodernists, which was sort of what my anecdote about my teacher’s poster at the beginning of this post spoke to: Postmodernists seem to extol tolerance, diversity, equality etc. while at the same time doing all they can to demonstrate that people in different areas of the world have such vastly different experiences that we are apparently incapable of understanding philosophical issues the same way. To that end, a classic Postmodernist trick is to take some tribe of 20 people on an isolated Pacific island and attempt to argue, from this pathetically small sample, that eliminating any concept of ‘gender’ from our social discourse will lead to a more egalitarian society. The far more balanced explanation, that both labeling and biological factors contribute to human gender categories, with one or the other playing a bigger role in different circumstances, is never even considered, because the dogma of “social construction” rules out any such essentialist or naturalist silliness a priori. But this is nothing more than dogma; this sort of ‘explanation’ doesn’t actually explain anything, but approaches a social phenomenon through a particular lens and purports to explain it by ruling out every other perspective as “biased” or “prejudiced” from the start. Thus even if the extreme anti-foundationalist’s position weren’t utterly self-undermining (which it is), the arguments given in its favor certainly do not support the radicalism characteristic of Postmodern approaches, but rather interpretations that, while true, are far more modest.

Furthermore, the differences among the conclusions, belief systems and experiences of different peoples and cultures are often drastically overemphasized by Postmodern apologists. To take a very good example, consider religion, that one area of human experience that seems so widely divergent across places and times. Yet while it might seem this way, the more philosophical sides of even vastly different religions often resemble one another; careful consideration of Vedantic philosophy, for instance, can reveal definite similarities with Aquinas’s variant of the Unmoved Mover (and there is the classic philosophical comparison of Confucius with Aristotle). Of course, most Postmodern philosophy is far too shallow to analyze these great historical works (the Vedas and the Summa Theologica respectively) as their authors intended them, and when it does consider them it usually adopts a stance rather like that of a crazed artist slathering paint over the Mona Lisa and claiming to be “creating a new meaning for the picture.” Nevertheless, there are often, in my opinion, far more similarities between cultures and belief systems (particularly philosophical belief systems) than the Postmodernists want to let on. When the differences are considered from a calm, systematic standpoint, rather than one eager to come up with the newest, craziest, most counterintuitive interpretation, they serve as evidence for far more moderate sociological and philosophical conclusions than most Postmodernists ever seem to draw. The social theorizing underlying Postmodernism, therefore, is hardly supported by the observations offered on its behalf, and the philosophy beneath it is nothing more than a bag of wind, a shell game that appears exciting and novel but is irrational to its very core.

But I know, in the end, that I probably “just don’t understand.” I’m sure some knowledgeable (or should we say rhetorically appealing) Postmodernist could swoop in right here and, with a wave of impressive verbiage, purport to “deconstruct” my writing and show that I’m just another stubborn traditionalist, unwilling to open my heart to this utopian, nihilistic Brave New World of his. But if such a Postmodernist wants to do so, I invite him to try, since I’m pretty secure in my belief that all of his credo, all of his scary deconstruction and big bad Postmodernist language tricks, are really just (to really drive the point home) a giant crock of sh*t. And this, moreover, is the part that I think those of us in philosophy, and those in the humanities not intoxicated by the deceptive poison of Postmodernism, ought to play: simply walling off Postmodernism as the decadent slum of academia that it is. If the Postmodernists want to play their language games, going in circles and wasting time, let them feel free to do so; I will not say a word. But the moment some Postmodernist steps forward with another brilliant idea for how we ought to remake society, or with his unsupported, preposterous ideas about “humanity’s place in the world” (ideas that ascribe trivial or simply bad definitions to all of these words), or, most of all, with some social critique of scientific literature that deigns to ridicule it as “absolutist,” “prejudiced” or in some other way deficient, we ought to pour down the disdain and skepticism that this absurd philosophical obscenity so righteously deserves. The domain of philosophical and intellectual inquiry is difficult enough to navigate already, and grasping the truth is a process that takes a long time and a lot of work. We ought not make it all even more complicated by endorsing so poorly thought-out a view as Postmodernism, which spits in the face of the very spirit of philosophical investigation.

One of my favorite philosophers is George Edward Moore, one of the founders of analytic philosophy in England in the early 20th century. G.E. Moore is something of a hero of mine, not only because he wore sport coats and smoked a pipe (both pastimes that I happen to enjoy) but also because he is historically known as a defender of “common sense” in philosophy. His unashamed defense of this rather unpopular form of reasoning was often as humorous and polemical as it was insightful; consider the following passage from his Principia Ethica (1903):

“That ‘to be true’ means to be thought in a certain way is, therefore, certainly false. Yet this assertion plays the most central part in Kant’s ‘Copernican Revolution’ of philosophy, and renders worthless the whole mass of modern literature, to which that revolution has given rise, and which is called Epistemology.”

There are many words a commentator might use to describe this passage: “polemical,” “rhetorical,” “opinionated”; but I prefer the far more evocative vernacular term “ballsy.” G.E. Moore certainly could not be said to have lacked balls, a trait that philosophers in today’s day and age could use a bit more of. Moore’s arguments, from “The Refutation of Idealism” to Principia Ethica and on, were by themselves intuitive, commonsensical and easy to understand; it was his defense of the premises of those arguments that spanned many pages and showed extremely systematic attention to very abstract details. I also think he was great for possessing the rare ability to make points that, upon hearing them, make you slap your forehead and yell “well of course! How the hell did I not see that?” For instance, he innocently (almost naively) questions the view that to “know” a proposition is true we must have absolutely no reason whatsoever to doubt its truth, and suggests instead that simply cranking our degree of skepticism up to an absurd level has no effect on what we actually “know” and what we don’t. His dismissal of Kant in the passage above, whose ‘Copernican revolution’ in philosophy was (this is the Campbell’s Soup condensed version) to suggest that the human mind plays a role in the constitution of ‘truth’ (in the same way that the motion of the Earth plays a role in the apparent motion of the stars), could be said to have helped lay the groundwork for the general de-emphasis of Kant in analytic philosophy in favor of his predecessors Locke, Berkeley and Hume.

Moore raises an important question, in my opinion, not only for philosophers but for pretty much anyone with even the most modest education: where do we draw the line between the domain of ‘common sense’ and the domain of special study? The history of philosophy is littered with philosophers who would try to convince us of the most counterintuitive and seemingly crackpot conclusions, and I don’t mean only unconvincing religious philosophers or arguments such as the Ontological proof of the existence of God. I include in this domain philosophers such as Nietzsche, with all his ridiculous talk about “the Will to Power,” or Ayn Rand, who would have us believe that what’s “really” good for society is for everyone to blatantly act in their own self-interest (an obscenity of a moral theory known as ‘ethical egoism’). What is amazing, though, for a student of philosophy, is that careful study of the actual arguments of these philosophers and their background assumptions can reveal them to be extremely systematic and well thought out, even if the conclusions reached are utterly appalling (though personally I find even the humble Ontological argument, for all its implausibility, a far worthier argument than any defense of ‘ethical egoism’ I have ever read, but perhaps I’m just benighted and old-fashioned). And while non-philosophers who absolutely love Ayn Rand, Friedrich Nietzsche et al. seem to abound, the reaction is exactly the opposite when one brings up Idealists such as George Berkeley or his descendants, who were alive and well in Moore’s day (and whom Moore worked very hard to refute). When these philosophers are brought up, the reaction I have seen is almost universally derisive, since no one wants to think that there’s no such thing as matter, or if they do they don’t seem to want to admit it. (This is probably also explained by the simple sociological fact that while it sure would suit some people’s self-interest if they could justify why ignoring the needs or desires of others is “really” the moral thing to do, there isn’t much utility in thinking that no material world exists; hence the vast majority of people remain unbiased enough to affirm that Idealism is a load of crap, even if they aren’t capable of providing quite as thorough a demonstration of that fact as Moore did.)

But then again, even the more radical voices in philosophy, whether in contemporary ‘Postmodern’ departments, where some attempt to show that there is no such thing as truth (e.g. Jacques Derrida), or in ‘Analytic’ departments, where some try to show that there are no such things as beliefs and desires (e.g. Paul Churchland), have a point when they say that we should not let the limits of what we imagine is possible or plausible limit our inquiry. Who is to say that the truth isn’t weird? How do we know that things might not all change tomorrow? I must grudgingly admit that it was ‘common sense’ that told us the Earth was flat, ‘common sense’ that tells us heavier objects fall faster, and ‘common sense’ that tells us we can drink seawater. There seems to be no clear line differentiating the acceptable domain of ‘common sense’ from the domain of more systematic reflection or thorough empirical testing. Does this mean we have to abandon common sense altogether, and that Moore’s viewpoint is fundamentally flawed?

As far as I can see, no, it does not, because it’s the spirit of common sense, in a manner of speaking, rather than the letter, that matters. What I mean by the ‘spirit of common sense’ is that rather than being conceived of as a particular framework (or worse, as a sort of ‘folk theory,’ in the appallingly over-scientific way that a lot of modern philosophers of mind approach it), ‘common sense’ names a certain set of principles for approaching information, arguments, claims etc. It means approaching these things with a straightforward, earnest desire to know what’s going on, one that “cuts the bullshit,” if you’ll pardon a slight bit of obscenity. I think what Moore was really responding to in the quote above, which explains his rather dismissive tone (and which I can really relate to), is the tendency of intellectuals, not only in philosophy but in a whole host of fields, to dogmatically treat certain theoretical frameworks, assumptions and guidelines as simply given, not on the basis of actual arguments for those assumptions but because we like the conclusions, and furthermore with no reflection upon how those assumptions affect the final outcome of chains of reasoning. This may seem obvious, or, even worse, it may seem like a charge that anybody could make. But a key aspect of real, good ol’ fashioned ‘common sense’-spirited philosophy is really paying attention to unspoken details. Moore, for instance, brilliantly observed the distinction between a sensation (an experience considered on its own, as an object) and the object being sensed, and was thereby able to show that an inference going back to Berkeley (which concluded that, because any object being conceived of must be conjoined with some sort of consciousness, all objects must logically exist conjoined with consciousness, vindicating Idealism) rested on conflating the two. This extremely technical detail made the difference in undermining an argument that, like so many in philosophy, led to a conclusion widely considered undesirable yet seemingly unavoidable. It is this sort of attention to detail, not only to the starting assumptions of an argument but to the assumptions made by every one of its premises, that I would label “the spirit of common sense”: a spirit of calmly examining the premises of an argument themselves, without letting oneself get too excited by the mystery or wonder of its conclusion.
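To display the slip schematically (my own crude rendering, not Moore’s notation; read $C(x)$ as “x is conceived” and $M(x)$ as “x exists conjoined with some consciousness”):

$$
\forall x\,\big(C(x)\rightarrow M(x)\big)\;\;\not\vdash\;\;\forall x\,M(x)
$$

The premise that whatever is conceived is conjoined with consciousness yields the Idealist conclusion only if one quietly adds that everything is conceived, which is precisely the thesis in dispute; and Moore’s sensation/object distinction blocks even the premise on its Idealist reading, since the object sensed need not inherit the mind-dependence of the sensation.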

I think in the end every camp in philosophy could be accused of, at some point or another, getting so excited by the conclusions of particular arguments that they began to overlook critical details of certain premises. Medieval religious philosophers, thrilled by what they thought they had proven about God, overlooked critical questions about whether the language their arguments used was really applicable in that way. Early Modern philosophers, thrilled by how their arguments seemed to provide an airtight philosophical basis for the conquests of science, overlooked ways in which their mechanistic assumptions were far too limited to capture the complexity and detail of the natural world. Idealist philosophers, impressed by how conclusively their arguments seemed to establish the extremely counterintuitive conclusion that there is no material world, overlooked critical details underlying the premises of those arguments. Twentieth-century philosophers, in their excitement to “do away” with so many ‘traditional’ problems of philosophy simply by dismissing them as figments of unclear language, overlooked ways in which this thesis can, if taken too far, become radically self-undermining. Everyone has been there at some point; but what I think “the spirit of common sense philosophy” entails is, in all things, showing a bit of moderation, and always remembering that “we must follow the argument wherever it leads,” rather than having it follow us wherever we lead.

Seeing as I’ve just decided to get back into blogging, I figured I would set an optimistic tone for Ne Quiz Nimis with a nice, non-controversial, somewhat introspective post (beware: many to come will probably be far more vitriolic than this).

At this point in my education I’ve known many artists; they’ve been my girlfriends, best friends, family members and fellow students, and have borne just about every other relationship to me that you could possibly imagine. Several years ago one of my best friends, who is an artist, put it best when he said to me (after we had downed several beers, I might add), “you know, I think we get along well because artists and philosophers…we just get each other, you know? We just see the world in similar ways.” I’ve never forgotten this comment of his and have reflected on it often. On the one hand it has struck me as something of an overstatement; many of my artistic friends have come from a background that promotes what I will callously and carelessly label shameless liberalism. Please don’t take this label too seriously; it’s meant as something of a joke. What I mean is that in my experience many artists I’ve known are often purveyors of some extreme postmodern viewpoints that I dislike rather strongly, e.g. radical subjectivism, relativism, constructionism, nihilism etc. Strictly speaking there’s nothing wrong with holding any of these views, in my opinion, except that I think an ironic historical consequence of their prevalence has been their institutionalization as a new orthodoxy. I find this ironic insofar as the original purveyors of these views were often reacting against the institutions of another era (‘traditional (i.e. religious) morality,’ ‘conservatism,’ ‘the Protestant work ethic’ etc.), encouraging the youngsters of a past generation to “open their minds” and “think critically,” and to consider that widespread viewpoints that just seemed like common sense were not so invulnerable to criticism as they first appeared. In today’s day and age, however, at least in my experience, these views have become the new orthodoxy, such that I’m often greeted with perplexed looks when I explain my skepticism of views such as relativism or subjectivism. As far as these views go, it seems the revolution has become the establishment; the former rebels have become “the Man.”

But more recently I have come to have a far greater appreciation for art (and for artists and artistic vision) than I once had. Perhaps this is a consequence of getting older and gaining some perspective; after seeing my art-major friends pull all-nighter after all-nighter to finish a project or make a piece absolutely perfect, I have come to see the extreme attention to detail and completeness present in the general demeanor of most artists. This attention to completeness is, in my opinion, the area in which contemporary philosophers have the most to learn from our colleagues in fine art departments. In today’s era philosophy has become so utterly fractured into subdivisions that at times they seem to me to have almost nothing in common. Philosophy of psychology, philosophy of biology, philosophy of physics, philosophy of chemistry: these newer areas have, at least in the analytic tradition, raised important questions, most importantly questions about how they all relate to one another. Common sense seems to say that psychology is ultimately reducible to neurobiology, which ought to be reducible to biology proper, which ought to be reducible to chemistry, which ought to be reducible to physics, QED. And yet this common sense perspective does not seem to be panning out: the concepts at the level of psychology simply do not reduce smoothly to those of neurobiology, the considerations of biology do not seem sufficiently addressed by those of chemistry, and those of chemistry may not even be fully accounted for by the concepts of physics. While this may not necessarily entail an ontological distinction between the respective fields (though I’m inclined to think it does), it certainly seems to me that there is a problem here: simply put, the problem of completeness. How do we turn a vast array of scientific fields and their philosophical concerns into an overall, coherent worldview? Stated another way, how do we turn a vast series of details and shapes into a single, complete picture?

This attention to completeness is also what is most conspicuously lacking in contemporary analytic philosophy. It seems to me that the different sub-fields are only growing farther and farther apart, with few attempts being made to unite them. I just completed a philosophy of psychology class this past semester, and towards the end I began to wonder what Socrates, Plato or Aristotle would have said if they could see modern American and British philosophy and the sort of highly specialized topics (such as psychology) upon which it so often focuses. On the one hand I think they would be thrilled; Aristotle in particular, who originated many of the categories of biology still in use today, would probably be pleased to see the vast scientific progress that has been made since his day. But speaking from the standpoint of philosophy, towards what end are all of these developments growing? A professor of mine once wryly remarked that the one question philosophers seem incapable of answering is what the hell their subject is actually about. To me it seems that philosophy is, at heart, a rational attempt to create a systematic worldview. We have all inherited worldviews from our parents, from our upbringing, from our culture and our religion and our education. Philosophy is, in my opinion, the humble attempt to unite these divergent perspectives into a single worldview that answers those three pivotal philosophical questions: “What am I? What do I know? What should I do?”

But there is another, related part of artistic vision, one that I believe is sadly lacking in contemporary philosophy, and that is passion. In the Nicomachean Ethics, when Aristotle critiques and improves upon Platonic Realism, he writes:

“some may find this [that is, his critique of Platonic Realism] cruel, those who introduced the Forms were friends of ours. Still it seems better, indeed only right, to destroy even what is close to us if that is the way to preserve truth. And we must especially do this when we are Philosophers, lovers of wisdom, for though we love both the truth and our friends, piety requires us to love the truth first.”

This is one of my favorite philosophical quotes ever (you may have noticed it’s the one-liner under the blog title), and I think it clearly displays a certain passion for the field: in the same way that Socrates is reputed to have said “we must follow the argument wherever it leads,” Aristotle implies that the pursuit of philosophy includes a passion for understanding the truth, even when it’s difficult, even when it isn’t what we want to do. It seems to me (and this is only my own intuition) that we sophisticated modernists have lost some of this idealistic drive; you’d be hard pressed, in my opinion, to find a quote such as this in masterpieces of contemporary analytic philosophy such as Fodor’s “The Language of Thought” or Kripke’s “Naming and Necessity.” And though the continental tradition has preserved some of this more artistic drive, many of us analytic theorists dismiss such perceived literary sentimentalism as unparsimonious at best and sophistical at worst. But I think there’s another important lesson to be learned here: why are we doing this, exactly? Many of my friends who have taken philosophy classes and hated them ultimately traced their dislike of the subject to that question; who cares? Why is it important? What does it matter? Even at the beginning of the 20th century, when Russell and Moore were laying the groundwork for analytic philosophy, I think far more passion was involved in the field, because the big picture was kept in mind. Without bearing that big picture in mind, the passion underlying philosophical work, passion originating not from practical considerations but from the nobler human drive to seek understanding, begins to fade.

There is no doubt that art often raises deep philosophical questions, some of the more boring sort (which fascinate nerds like me), such as “what is the ontological difference between an assortment of colors and an image?” or “what is the nature of ‘representation’ when considered in an artistic, rather than a mental, context?”, but many of the far more poignant, human sort. What is ‘expression,’ and what differentiates artistic expression (which clearly requires a great degree of skill) from other modes of expression? I recall visiting Spain several years ago and seeing Picasso’s famous (and massive) Guernica, which captured the horror and tragedy of war far better than my sociological analysis of the Spanish Civil War for a Social Revolutions class several years later ever could. Why is it that such a qualitative experience can evoke such an impression, one that seems so much closer to the real experience, than even the most detailed and systematic description?

To conclude, it seems to me that ultimately my friend was right; perhaps artists and philosophers “get” each other because in many cases we address the same sorts of questions, though from very different perspectives. To the degree that I once thought this was not the case, I now think that perhaps that doubt betrayed a lack of key values philosophers traditionally held, the drive for passion and for completeness: values that we have neglected in recent decades but ought to bear in mind more in the future. I suppose what I’d like to see now for the future of philosophy is more of an endorsement of what seem to me to be these “artistic” values: passion for our field, and the drive to unite the details we pay so much attention to into a more universal picture. In these areas it seems to me that we philosophers have much to learn.