Wednesday, May 22, 2013

Guy Deutscher’s Through the Language Glass

            In Through the Language Glass, Guy Deutscher addresses the question of whether the natural language we speak influences our thought and our perception. He focuses on perception, specifically the perception of colours and of spatial relations. He is very dismissive of the Sapir-Whorf hypothesis and of varieties of linguistic relativity which hold that if the natural language we speak is of a certain sort then we cannot have certain types of concepts or experiences. For example, a proponent of this type of linguistic relativity might say that if your language does not have a word for the colour blue then you cannot perceive something as blue. Nonetheless, Deutscher argues that the natural language we speak has some influence on how we think and see the world, and he gives several examples, many of which are fascinating. However, I believe that several of his arguments dismissing views like the Sapir-Whorf hypothesis are based on serious misunderstandings.
            The view that language is the medium in which conceptual thought takes place has a long history in philosophy, and this is the tradition out of which the Sapir-Whorf hypothesis was developed. I believe it has its roots in medieval nominalism, the position which asserts that only particulars exist and that universals are mere names referring to groups of particular things. It would follow from this view that thoughts concerning universals will be thoughts concerning names. Many philosophers in early modernity developed this notion, asserting that thought about anything that was not particular necessarily involved the use of names (although not all of these philosophers could be called nominalists properly speaking). Thomas Hobbes (in Leviathan and Elements of Philosophy) stated that thought involving universals was essentially a matter of connecting names in propositions and then connecting propositions in arguments. John Locke (in An Essay Concerning Human Understanding) asserted that words were necessary to hold abstract ideas in the mind. Gottfried Wilhelm Leibniz (in New Essays on Human Understanding III.1.2, Dialogue on the Connection Between Things and Words, and Meditations on Knowledge, Truth, and Ideas) went further, stating that words were needed to hold any clear and distinct ideas in the mind and to connect ideas together in an act of reasoning.
            Although this idea that words are necessary for some kinds of thinking doesn’t necessarily entail that different natural languages produce different types of conceptual thought, it isn’t difficult to see how it could be developed in that direction. Locke himself asserted that our abstract ideas are developed through our needs to communicate, and he seems to have thought that the fact that some languages have words that cannot be directly translated into words in another language indicates that speakers of some languages will have abstract ideas that speakers of other languages will not (e.g. Essay 3.5.7-8).
            In the tradition I have described above, the idea that language conditions not only the nature of our conceptual thought but also the nature of our perception has its origin in the philosophy of Immanuel Kant. Kant argued that for something to be an experience at all it would have to possess a unity conforming to the concepts that he referred to as the categories of the understanding. This is not the same as saying that experience has to conform to the concepts of some particular language, of course, but Kant did see conceptual thought as closely connected to language in general. In his Prolegomena to Any Future Metaphysics he said that determining what the categories of the understanding were was a project closely related to the study of grammar, and in his Anthropology From a Pragmatic Point of View he stated that conceptual thinking was dependent upon the use of words. Kant’s views would be gradually developed by later philosophers such as Hamann, Herder, Humboldt, Hegel, and Nietzsche into the idea that our experience necessarily conforms to the concepts and the grammar of the natural language we speak.
            It is important to note that in this tradition the relation between language and conceptual thought is not seen as one in which the ability to speak a language is one capacity and the ability to think conceptually a completely separate faculty, with the first merely having a causal influence on the second. It is rather the view that the ability to speak a language makes it possible to think conceptually and to have perceptions of certain kinds, such as those in which what is perceived is subsumed under a concept. For example, it might be said that without language it is possible to see a rabbit but not possible to see it as a rabbit (as opposed to a cat, a dog, a squirrel, or any other type of thing). Thus conceptual thinking and perceptions of these types are seen not as separate from language and incidentally influenced by it but as dependent on language and taking their general form from language. This does not mean that speech or writing must be taking place every time a person thinks in concepts or has these types of perception, though. To think that it must is a misunderstanding essentially the same as a common misinterpretation of Kant, which I will discuss in more detail in a later post.
            While I take this to be the idea behind the Sapir-Whorf hypothesis, Deutscher evidently interprets that hypothesis as a very different kind of view. According to this view, the ability to speak a language is separate from the ability to think conceptually and from the ability to have the kinds of perceptions described above and it merely influences such thought and perception from without. Furthermore, it is not a relation in which language makes these types of thought and perception possible but one in which thought and perception are actually constrained by language. This interpretation runs through all of Deutscher’s criticisms of linguistic relativity.
            Deutscher writes about an Australian aboriginal language that has no words for egocentric directions like left or right. The speakers always refer to the cardinal directions north, south, east, and west. He briefly mentions other languages that also lack words for left and right and instead have words for relations such as towards the sea or towards the mountains. According to some thinkers in the tradition that I have outlined, if this were the only language that someone had, then that person would not have the concept of left or right, and he or she would not perceive something as on the left or on the right. However, Deutscher argues that the aboriginals’ language does not prevent them from seeing things on the left or on the right because they are perfectly able to describe things as on the left or on the right when they speak English. This assumes that those who espouse linguistic relativity would say that the aboriginals’ first language prohibits them from seeing the relations, when they would really claim that it does not give them the ability to see the relations and that, before they could see the relations, they would have to learn another language or their language would have to change to include the concepts in question. In the case Deutscher describes, the aboriginals have learned English, and it is because they have learned English that they have the concepts of these relations and are able to see these relations. It would be very surprising if anyone did claim that if a language does not contain a concept then it is impossible for those who speak the language to ever learn that concept. That would mean that it is impossible for languages to develop new concepts and impossible for a person to learn another language that has a different set of concepts.
            Deutscher offers a similar argument against linguistic relativity when he argues that since people who speak English can have a conception of Schadenfreude, it is not the case that their language prevents them from having the concept. This might refute a popular conception of the way in which the natural language one speaks conditions the way one thinks or perceives, but the tradition I described isn’t necessarily as simple as the view that one cannot have a concept unless one’s language has a single word for it. The example does not refute the idea that one cannot have a concept unless one’s language makes it possible to have it. English is able to refer to the same thing that Schadenfreude refers to, even though it cannot do so with only one word. So this case is probably different from the case of the aboriginal language and left and right. One could make a stronger case that, in order to explain to a unilingual speaker of that language what left and right are, it would be necessary to add new concepts to the language by showing the speaker how the words are used. The egocentric nature of “left” wouldn’t be captured by descriptions such as “the direction that is west when you are facing north, north when you are facing east, east when you are facing south, and south when you are facing west”.
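            To make that last point concrete, here is a minimal sketch in Python (my own illustration, with invented names; nothing like it appears in Deutscher’s book) of the lookup that such a description amounts to. It reproduces the extension of “left” for each facing, yet the egocentric notion of a left-hand side appears nowhere in it:

```python
# Hypothetical illustration: which cardinal direction the egocentric word
# "left" picks out depends entirely on which way the speaker is facing.
CARDINALS = ["north", "east", "south", "west"]  # clockwise order

def cardinal_for_left(facing: str) -> str:
    """Return the cardinal direction on the speaker's left-hand side."""
    i = CARDINALS.index(facing)
    return CARDINALS[(i - 1) % 4]  # one quarter-turn counter-clockwise

for facing in CARDINALS:
    print(f"facing {facing}: 'left' is {cardinal_for_left(facing)}")
# facing north: 'left' is west
# facing east: 'left' is north
# facing south: 'left' is east
# facing west: 'left' is south
```

            The table gives the right answers only because the quarter-turn relation has been built into it from outside; someone who had only the table would still lack the egocentric concept that explains why these pairings hold.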
            Elsewhere in the book Deutscher makes a similar argument to the one about Schadenfreude when he teaches the reader a bit of terminology from linguistics—the term factivity. He says that even though the reader did not have a word for this concept, he or she was able to learn the concept. This is an even stranger argument since the word in question is one in the English language, albeit part of a specialized jargon. The reader is taught the concept by being given an English word for it and then an explanation in English of what the word means. Consequently, this example fails to show that a person’s ability to have a concept doesn’t depend on his or her language making it possible to have the concept.
            There seems to be a process of knocking down increasingly weak straw men here. The aboriginals using the English words left and right was cited to knock down the straw man which claims that if a language is not capable of describing a concept then it is impossible for the speakers of that language ever to learn the concept. The ability of English speakers to conceive of the thing that Schadenfreude refers to was used to knock down the straw man which claims that if a language does not have a single word for a concept then its speakers can never learn the concept. Then the example of the reader learning the meaning of factivity was used to knock down the straw man which claims that if a person does not know of a single word for a concept, even if such a word exists in his or her language and even if the concept can be expressed in more than one word in the language, then he or she can never learn the concept. In each case the straw man involves the idea that language is external to thought and influences it in a restrictive way.
            Certainly many questionable assertions have been made based on the premise that language conditions the way that we think. Whorf apparently made spurious claims about Hopi conceptions of time. Today a great deal of dubious material is being written about the supposed influence of the internet and hypertext media on the way that we think, material mainly inspired by Marshall McLuhan but generally lacking his originality and creativity. Nevertheless, there have been complex and sophisticated versions of the idea that the natural language we speak conditions our thought and our perceptions, and these deserve serious attention. They are certainly more complex and sophisticated than the crude caricature that Deutscher sets up and knocks down. Consequently, I don’t believe that he has given convincing reasons for seeing the relations between language and thought as limited to the types of relations in the examples he gives, interesting though they may be. For instance, he notes that the aboriginal tribes in question would always have to keep in mind where the cardinal directions were, and in this sense their language would require them to think in a certain way.

Saturday, December 4, 2010

The Death of Philosophy: Part II

As university departments of philosophy still exist and as conferences of professional philosophers still take place, one might argue that academic philosophy is alive and well, especially if one is an academic philosopher who finds the debates going on in contemporary academic philosophy fruitful and exciting. Furthermore, one could point out that this lively state of academic philosophy persists despite the many historical pronouncements that philosophy has died. As I mentioned in my last post, however, past pronouncements of the death of philosophy have tended to come from within philosophy. As such, they were always something that philosophers could address on their own terms, confronting the ideas behind them with arguments. I suggested in the last post that the forcefulness of the responses to comments like those of Stephen Hawking, responses asserting that philosophy is in fact thriving (and usually featuring tediously unimaginative appropriations of Twain’s line about reports of his death being exaggerated), has more to do with the awareness that there is a different kind of threat to philosophy today, one that is not so much a threat to the idea of philosophy as a threat to the practice of philosophy as a profession.

This threat could be described as “institutional” or “structural”, for it involves the lack of financial support for philosophy departments from governments and university administrations. The most notable of the recent books that address this issue is Martha Nussbaum’s Not for Profit: Why Democracy Needs the Humanities. The analysis in this book reflects a view of the situation that I find very common among academic philosophers. It is the view that the problem is not with academic philosophy itself but with governments and administrators of universities, who adopt a myopic business model that values only employable skills. According to this way of thinking, if politicians and administrators realized that philosophy has value as a discipline that makes people into critical-thinking, well-rounded citizens of democracies and the world, then they would loosen their purse strings and fund academic philosophy the way it should be funded.

It’s easy to feel some sympathy for this view because it is at least right insofar as it identifies part of the threat to the humanities today as the deleterious effect of anti-intellectualism, philistinism, and avarice. But such threats have plagued philosophy since the days of Anaxagoras and Socrates. Again the issue today is not so much a threat to philosophy as a threat to the profession of philosophy. Yet the defence of the profession given by academics like Nussbaum raises several questions.

One question that needs to be asked is whether the kinds of skills Nussbaum and her ilk characterise as those that make one a well-rounded citizen really are of vital importance to a democracy. Since the types of democracy in question are modern liberal democracies, it is necessary to consider the two main branches of liberalism, which are described clearly by the political philosopher John Gray in his book Two Faces of Liberalism. (Of course, I am using the word “liberalism” to refer to the political theory of “classical liberalism”. I am not using it in the colloquial sense to refer specifically to left-wing politics.) One branch of liberalism is characterized by the search for fundamental ethical principles that all people who understand them would agree upon and which could therefore form the constitution of an ideal government. Noteworthy philosophers from this tradition would be Immanuel Kant and John Rawls. Nussbaum is evidently sympathetic towards this tradition, having particular admiration for Rawls. The other branch of liberalism is characterized by the belief that this kind of significant agreement between people on ethical principles is not possible, that reason and dialogue will not bring people to any such agreement, and that politics will always be a matter of making compromises and tolerating others with radically opposed systems of values. Notable philosophers in this tradition, Gray asserts, are John Stuart Mill and Isaiah Berlin. Gray considers this branch of liberalism more realistic than the first, as do I.

If the first branch of liberalism I have described is on the right track, then it could be true that liberal democracies require their citizens to have the ability to apprehend the correctness of the principles of justice that would ideally form the foundation for such societies. This ability would consist in effectively questioning traditional authorities and local allegiances, and such an ability is exactly what the study of the humanities provides according to Nussbaum. Yet if the second branch of liberalism is correct, then there is something deeply mistaken about the conception of such moral principles and the supposed ability to apprehend their justness. In that case, citizens would not need to empathize with all other citizens and find solidarity with them so much as they would need to realize that for a peaceful society to be possible it will sometimes be necessary to make compromises and to tolerate those with whom one has irresolvable differences.

A second question that needs to be asked is whether it should be the goal of the university to make students good citizens of a liberal democracy. Of course, one would likely think that such a goal is misguided if one agrees with the critique of the branch of liberalism whose ideas underlie this notion of the university’s goal. Yet there could be other reasons for rejecting the notion that the university’s goal is to create good citizens. While Nussbaum and others like her could reasonably contend that they are not arguing for the humanities by saying that they have instrumental value, they are not appealing to the value of a purely theoretical interest in the subjects either. Consequently, some commentators, such as Stanley Fish, have objected to arguments of this nature, contending that the academic study of the humanities has only intrinsic value and that attempts to make universities places that nurture good citizens end up politicizing academia and lessening the quality of education.

A third question that should be asked is whether an education in the humanities really does help to make people into critical-thinking, well-rounded citizens of a liberal democracy. The claim that it does always reminds me of a passage from The Catcher in the Rye in which Holden Caulfield describes his prep school:

‘They advertise in about a thousand magazines, always showing some hot-shot guy on a horse jumping over a fence. Like as if all you ever did at Pencey was play polo all the time. I never even saw a horse anywhere near the place. And underneath the guy on the horse’s picture, it always says: “Since 1888 we have been molding boys into splendid, clear-thinking young men.” Strictly for the birds. They don’t do any damn more molding at Pencey than they do at any other school. And I didn’t know anybody there that was splendid and clear-thinking at all. Maybe two guys. If that many. And they probably came to Pencey that way.’ (2)

My own experiences at university, from my undergraduate studies to the completion of my PhD, resemble Holden’s insofar as I suspect the few good citizens I encountered during that time probably came to university as good citizens. This is purely anecdotal, of course, but there aren’t any objective measures of “good citizenship”. At any rate, the validity of any purportedly objective measures of good citizenship is questionable. Furthermore, even if we found that people who go to university tend to be good citizens, this doesn’t necessarily mean that universities make people good citizens. People who are already critical of traditional authorities and who feel empathy for others could be more likely to go to university and stay in university than those who aren’t. This would especially be the case with those who also share the kinds of liberal values Nussbaum espouses, for they would consequently value university education highly. While such people might develop a more mature and sophisticated position through an education at university, a great deal of their values would already be in place before they even set foot in a post-secondary institution.

Of course, I’m not saying that a university education doesn’t provide a person with any knowledge or skills. I am suggesting that the knowledge and skills such an education provides are not those that make a person a good citizen of a liberal democracy. This is not to say that a university education necessarily makes a person a worse citizen of a liberal democracy either. I mention this because in a documentary on Montaigne, Alain de Botton questioned whether graduating from Cambridge necessarily makes a person happier or wiser (implying that it doesn’t), and when he discussed this topic with the master of his old college at Cambridge, the master replied by arguing against the idea that being poor and uneducated makes one happier, which is not the same thing.

I may be told that the fact that universities don’t make people good citizens is precisely the problem and that they could make people better citizens if they were given enough support from governments and administrations. Furthermore, I might be told that if universities do not make people better citizens, this could also mean they are using the wrong methods. The example of Holden’s prep school might be taken up again, and I might be informed that he is describing precisely the kind of authoritarian institution that proponents of liberal education oppose, one that conflates education with the rote memorization of “facts”. A liberal education, it will be insisted, would use methods that empower the student to think for himself or herself. The “Socratic method” will be mentioned, but this claim is based not so much on Plato’s view of learning as recollection as on the cluster of educational theories called “constructivism”.

So the next question to ask is whether universities can make students into good citizens of a democracy. This brings us back to the distinction between the two types of liberalism and the two different conceptions of a liberal democracy. If the second type of liberalism I described is right, then no significant consensus on values and no strong sense of solidarity will come from education, debate, and dialogue. It might come from authoritarian indoctrination from a young age, but neither branch of liberalism is sympathetic to such an approach.

I’ve already mentioned that I am sympathetic toward this critique of the first branch of liberalism. The reasons why I hold this view are beyond the scope of this post, but it might be a moot point anyway. Given the pressing nature of the problems facing the profession of philosophy today, it is not advisable to appeal to ideas that have themselves been the subject of intense debate for many years. Even those who are sympathetic to the branch of liberalism on which the defence given by Nussbaum and similar authors rests need not accept the conclusions about the value of the humanities. If the aim of these authors is simply to rally those who agree with their values (and this might be the aim of Nussbaum, who describes her book as a manifesto), that will probably not be enough to save professional philosophy.

The real problem is that professional philosophers are being forced to convince others to subsidize their profession, and their arguments wouldn’t persuade many who aren’t themselves professional philosophers. The argument that philosophy has intrinsic value is the only argument that isn’t spurious, but it is the least likely to persuade others if they aren’t already inclined to agree. Certainly the assertion that their unexamined lives are not worth living is hardly calculated to win them over. As I have explained above, the argument that philosophy is necessary for a healthy democracy is highly questionable and also not likely to persuade anyone who isn’t already inclined to agree. The attempt to hitch philosophy’s wagon to the sciences is also unlikely to succeed. Setting aside the questions of whether philosophy can or should help out the sciences and whether the sciences need such help, it is difficult enough finding funding for highly theoretical work within the sciences.

The question I will address in a later post is whether contemporary philosophy has to be academic philosophy and what form it would take if not that of academic philosophy.

Wednesday, November 10, 2010

The Death of Philosophy: Part I

A few days ago an acquaintance brought it to my attention that Stephen Hawking had declared that philosophy is dead. I found that this acquaintance was referring to the first page of Hawking’s latest book The Grand Design, a popular science work co-written with Leonard Mlodinow. Hawking writes that questions such as “How can we understand the world in which we find ourselves?”, “How does the universe behave?”, “What is the nature of reality?”, “Where did all this come from?”, and “Does the universe need a creator?” were originally questions addressed by philosophy. But now philosophy is dead, he tells us, because it has not kept up with developments in science, particularly physics.

This dismissal of all contemporary philosophy is as cryptic as it is blunt. For a start, it isn’t clear what Hawking has in mind when he uses the word “philosophy” in this passage. His comments would suggest that he thinks it is a specific type of inquiry that investigates precisely the same questions that “science” does, but in a rather inept fashion so that it has now been outstripped by science. Of course, it would be very strange if philosophers today were investigating the same questions that scientists investigate, and it would be even more strange if they were doing so without keeping abreast of what scientists were doing.

It might be said that philosophers in antiquity were asking the same questions that scientists today investigate but pursuing the answers to them in a more primitive fashion, but this is mainly because at that time “philosophy” (from the Greek for “love of wisdom”) referred to a way of life rather than a specific subject of study. It was the life of a person who sought wisdom, and wisdom could encompass any disciplined study, including, but not limited to, the study of topics at least analogous to those addressed by sciences today. So Aristotle would not have said that his studies of animals and plants were something separate from philosophy. The natural sciences only began to split off from philosophy around the 17th century, although at first they were labelled “natural philosophy”. They weren’t called “sciences” until the 19th century.

After the natural sciences and then the social sciences developed out of philosophy, philosophy came to be viewed as a specific field of study, encompassing such topics as metaphysics, ethics, epistemology, and logic. Since then, philosophers have had varying conceptions of how their field relates to other disciplines such as the sciences, and there is no consensus on the matter today. Yet no one in professional philosophy takes the view that philosophy addresses the same questions that the sciences do and that it thus comes into direct competition with them.

Oddly enough, only three of the questions described by Hawking as ones traditionally investigated by philosophy but now investigated by science are clearly the kinds of questions now investigated by scientists. The questions “How does the universe behave?”, “Where did all this come from?”, and “Does the universe need a creator?” might be questions pursued by physicists. It might be thought that the last question is also studied by philosophers, or at least theologians. Admittedly, in some undergraduate philosophy courses there is a cursory review of arguments for the existence of God offered by philosophers such as Thomas Aquinas (who lived in the Middle Ages), one of which (the so-called “cosmological argument”) is basically an answer to this question. Anyone who takes one of these courses will note that the arguments are discussed mainly to show that there are serious problems with them. While arguments that aim to show that the universe requires or does not require God as a cause may have been popular in Aquinas’ time, they have not been characteristic of professional philosophy since Immanuel Kant criticized them in the 18th century. They are also uncharacteristic of the thought of contemporary theologians, who generally excoriate such arguments as “ontotheology”.

The question “How can we understand the world in which we find ourselves?” could mean different things. Yet it doesn’t seem to be the psychological question about how the human mind, or the human brain, forms beliefs or perceptions. It appears to be the epistemological question addressing how we can determine which of those beliefs and perceptions count as knowledge.

The question “What is the nature of reality?” is, of course, a formulation of the question addressed by metaphysics. It could be argued that only the sciences can provide any legitimate answer to this question, but this argument itself will necessarily be made within a traditionally philosophical field of study—metaphysics or epistemology. The notion that the sciences themselves, by dint of their “success”, show that only they can offer any legitimate answers to the question actually presumes a kind of pragmatist epistemology. This isn’t the only kind of philosophical position that defers to the sciences, though. Peter McKnight in the Vancouver Sun cites other writings by Hawking to argue that the hasty and enigmatic dismissal of philosophy in Hawking’s latest book is based on his acceptance of logical positivism. Consequently, “despite his dim view of philosophy, Hawking does subscribe to a philosophy of science.” He also contends that Hawking’s “M-theory” is an example of the kind of metaphysical speculation logical positivists rejected as meaningless.

While Hawking’s dismissal of philosophy is crude, his critics’ optimistic assertion that it is alive and well, and their appropriation of Mark Twain’s witticism about the rumours of his death being exaggerated, are equally curious. McKnight himself points out that Hawking may not be aware that many philosophers over the years have declared the death of philosophy. Indeed, one of the main texts for a course I took back when I was an undergraduate had the title After Philosophy: End or Transformation? The point of this is doubtless to show that while many others have declared philosophy dead, it somehow manages to survive. Yet the real issue that seems to animate these responses is not so much the death of philosophy as the death of philosophy as an academic discipline, and this threat appears to be much more serious than the criticisms of philosophy that have arisen within philosophy. I will discuss this topic in a later post.

Thursday, September 23, 2010

Ray Monk’s “Ludwig Wittgenstein: The Duty of Genius”

A good biography will have a specific purpose, thereby avoiding the approach of simply cataloguing events in its subject’s life. In his introduction to Ludwig Wittgenstein: The Duty of Genius, Ray Monk explains that his purpose is to show “the unity of [Wittgenstein’s] philosophical concerns with his emotional and spiritual life”. The result is compelling, although Monk does not in every instance examine the connections in detail. From my reading of this book, three main facets of Wittgenstein’s emotional and spiritual life emerge most prominently.

The first facet could be described as Wittgenstein’s sense of his vocation for philosophy. In this regard, Monk points to the influence of Otto Weininger. Weininger was a Viennese philosopher who gained notoriety in 1903 when he published the book Sex and Character. In this work, he bemoaned the encroachment of femininity into culture, arguing that masculine qualities were truly creative, logical and divine and that feminine qualities were their opposites. He contended that the purest state of masculinity was found in the “genius”, a person with the highest degree of intellect and artistic insight. Such a person, he believed, had a duty to exercise his gift. This idea appears to be the source of the title of Monk’s book. What Monk doesn’t mention is that Arthur Schopenhauer, another strong influence on Wittgenstein early in his life, also writes at length about "the genius" and also describes the value of a sense of one’s vocation, which according to Schopenhauer comes through knowledge of one’s character and one’s strengths and weaknesses. This conception of the genius as a type of person who has special insight into the nature of the world was, in fact, an important idea in German philosophy and literary criticism from at least the time of the Sturm und Drang movement.

Influenced by the ideas of Weininger and Schopenhauer, Wittgenstein only wanted to pursue philosophy insofar as he was capable of producing something great, and thus only insofar as he was truly inspired by genius. First of all, in his youth he was uncertain as to whether he should leave his studies in engineering, where he experimented in aeronautics, to study philosophy. Only the encouragement of Bertrand Russell convinced him at last to do so, and he felt a great sense of relief when this encouragement came. Secondly, after he had written the Tractatus, he felt that he had resolved all philosophical problems, and he consequently left academic philosophy to teach at elementary schools in Austria. It seems that he thought there was no point in making a living from philosophy when all that it could address had been addressed. Furthermore, when he returned to philosophy, convinced by philosophers such as Frank Ramsey that the Tractatus had left some loose ends, he delayed publishing works until he thought they were perfect. Consequently, his second book, Philosophical Investigations, was published posthumously, and most likely in a form he would not have been satisfied with. He was infuriated by Russell’s advice that he publish imperfect works.

The second facet could be described as Wittgenstein’s mystical Christianity. To a certain extent this was also influenced by Schopenhauer and Weininger, but the influence of Leo Tolstoy and Rabindranath Tagore seems to have been more lasting. Under these influences, Wittgenstein viewed religion – which he appears to have conflated with ethics and aesthetics – as radically distinct from the sciences. For him, it was a way of seeing and standing in relation to the world as a whole, and the truly religious life was one in which a person stood in relation to the world in such a way that he or she “fit” with it, having an attitude of grateful and joyful acceptance. Monk connects this mysticism with the distinction Wittgenstein made in his early philosophy between showing and saying. As the religious was something inexpressible, not about the facts of the world but about the world as a whole, so logic was also not about facts but about the world as a whole, and it was that which could not be said but only shown. Russell couldn’t understand why logic couldn’t also be something said, perhaps expressed in a meta-language. Ramsey insisted that if logic was something that couldn’t be said, then the Tractatus itself was something that couldn’t be said.

The third facet could be described, contentiously, as Wittgenstein’s conservatism. While he felt some sympathy for the Left and for Soviet Russia, this was not rooted in a kind of liberalism that sought to improve the world. For instance, he detested the progressive organizations to which Russell belonged. It had more to do with Tolstoy than Marx, involving a love of austerity and manual labour. In many ways, Wittgenstein felt disdain for modernity. This was first of all manifest in his distaste for contemporary culture in Vienna and his belief that the arts had degenerated. Monk reveals that to a certain extent these views were influenced by the writings of the satirist Karl Kraus and the architect Adolf Loos. Yet Wittgenstein’s taste seems to have been more conservative than that of Kraus and Loos. He notoriously disliked all music after Brahms (the music of the composer Labor being the sole exception, evidently). His dislike of modernity also went deeper than his aesthetic taste. Partly under the influence of Oswald Spengler, he was hostile to what he saw as the encroachment of science and technology into all areas of life and culture. In this respect, his thought was very much at odds with the tenor of professional philosophy in England at the time and in the Vienna Circle, where philosophy was generally seen as contributing to a scientific picture of the world that would dismiss religion as superstition and artistic imagination as insight into nothing but one’s own feelings and impulses. Wittgenstein was particularly repulsed by James George Frazer’s interpretation of religious rituals as primitive science. When he agreed to speak with members of the Vienna Circle, they were shocked to discover that he had chosen to read excerpts from Tagore’s poetry.

Of course, these three facets of Wittgenstein’s moral and spiritual life are interconnected. For example, Wittgenstein’s perfectionist conception of his philosophical vocation was connected to his mystical conception of his religious and moral duty to be a “decent person” and his disdain for worldliness and base sensuality. Furthermore, his mystical interpretation of religion was in turn connected to his rejection of scientism. The three aspects could be described as three ways of characterizing Wittgenstein’s worldview.

The picture of Wittgenstein that emerges from my observations here is a contentious one. As with the thought of several other major philosophers, such as Nietzsche, Wittgenstein’s work has been subject to many competing interpretations. Also as with philosophers such as Nietzsche, this has not been simply due to the complexity of his work. Various interpreters have been keen to claim Wittgenstein as one of their own. Wittgenstein’s brilliance cannot be seriously contested, and so if ideas can be attributed to him then they can always benefit from the halo effect. Yet I think that the picture that we get from Wittgenstein’s biography mainly shows us the difficulty of associating him closely with any movement in contemporary philosophy. Contemporary analytical philosophers – who have usually claimed Wittgenstein as one of their own due to his connection with Frege, Russell, Moore, Ramsey, and the Vienna Circle – cannot honestly assimilate Wittgenstein’s thought into their own project, which accords a kind of dominance to science that would have repelled him. Yet it is just as spurious to attempt to assimilate it into the tradition of more postmodern thinkers, usually labelled “continental philosophers”. Wittgenstein’s approaches to philosophical problems and his conceptions of logic and grammar may have been partly formed by political views that stood in sharp contrast to the humanist projects of modernity, but his philosophical writings were not themselves articulations of those views, let alone arguments for them. His philosophical thought was almost exclusively concerned with puzzles that he saw as arising from the confusing web of language, problems that are generally dismissed by postmodernist thinkers today.

This is not to say that Wittgenstein’s writings have no relevance for anyone today, but rather that they can at most offer material for projects that will be very much at variance with Wittgenstein’s own. Personally, I retain some sympathy for Wittgenstein’s thought, mainly for the same reasons why I was originally attracted to it many years ago when I was an undergraduate. He discerned and articulated something that I had at first only felt and suspected. That is, he saw that what academic philosophers present as the traditional problems of philosophy are not truly deep questions but rather pseudo-problems. My sympathy only goes so far, though, as I do not agree with his ahistorical account of the origins of these pseudo-problems or with the specifics of his quasi-mystical notions of what is truly deep. Consequently, while I was reading through his biography I found that almost all of his work as a professional philosopher was of little interest to me now. Perhaps Wittgenstein himself would not be troubled by this view, for he mused at one point, “I am by no means sure that I should prefer a continuation of my work by others to a change in the way people live which would make all of these questions superfluous” (Culture and Value, 61). Nevertheless, he remains a significant figure in the history of philosophy for, with Heidegger, he is one of the last philosophers who could seriously contend both that philosophy had a task distinct from that of any other discipline and that it could be practiced fruitfully in academia. Also with Heidegger, he seemed to see the writing on the wall and already had serious misgivings about academic philosophy. For these reasons, I regard these two thinkers as the last truly significant academic philosophers.

Saturday, September 11, 2010

Lierre Keith’s Vegetarian Myth: Nostalgie de la Boue

There have been many theorists who have pointed to some specific revolution in history and proclaimed that it was the moment when everything started to go wrong. Some would denounce the recent digital revolution as the start of a downward trend, but others would go further back and attack the Industrial Revolution, the Enlightenment, or the Reformation. A recent movement would push the date even further back to the Neolithic Revolution, the point when humans first developed agriculture. Two of the best known writers in this genre are Derrick Jensen and John Zerzan. Evidently, even Jared Diamond, the author of the bestseller Guns, Germs, and Steel, wrote an article supporting an anti-agricultural view. Fellow travellers promote a “Paleolithic” diet, similar to the Atkins diet insofar as it largely consists of animal protein and shuns carbohydrates. One of the latest additions to this movement is Lierre Keith’s book The Vegetarian Myth, a polemical and confessional work from an ex-vegan. In this book, Keith describes her gradual conversion to the view that agriculture is unsustainable and that it brings about more death and suffering than hunter-gatherer modes of subsistence do. She also describes the process by which she came to accept the notion that her many health problems were a result of her vegan diet.

One general objection I have to Keith’s book is that her thesis is essentially utopian and unrealistic. She calls for the elimination of all agriculture, saying that this is the only way the planet can be saved. Repeating a familiar Marxist view about revolution, she says that this change will not happen through educating people and changing their ideas but through communities of people changing institutions. In this case, one wonders why she wrote a book. Perhaps she realized that communities are unlikely to make institutional changes of the type she envisions because hardly anyone shares her goal to begin with. I don’t think it can be seriously maintained that most people, or even most workers, most poor people, or most mothers will ever come to have the goal of bringing an end to agriculture worldwide. As many would be opposed to a revolution seeking to dismantle agricultural society, and as it would require the destruction of economies around the world, such a revolution would surely involve considerable violence and suffering as well. Keith would probably reply that agricultural societies themselves are at least as violent and that they are unsustainable besides. This could be debated, but as the worldwide reversion of societies to their condition in the Paleolithic period is not going to happen, the debate would be purely theoretical. Keith really does a disservice to people who might otherwise be likely to support more realistic goals, such as advocating more sustainable forms of agriculture, by instilling in them the belief that only a radical change in world culture will suffice. This could easily lead people into despondency when they realize how unlikely it is that this radical change will occur and how much suffering it would involve if it did occur.

To support her case that agricultural modes of subsistence must be eliminated, Keith offers arguments based on what she describes as an “animist ethic”. This evidently involves the view of the whole world as alive and imbued with spirit. To support this view, she provides accounts of nature laced with anthropomorphisms (examples from p. 36 include the claims that trees try to make a forest, that grasses want a prairie, and that water aches for wetlands). These accounts include assertions about vegetation that Keith presents as overwhelming evidence for plant consciousness. It should be noted that almost every citation she gives to support her claims on this topic refers to the same book, a work by Stephen Harrod Buhner. As plants do not have central nervous systems, or even so much as ganglia, they cannot be said to be conscious in the literal sense of that word. Keith seems to recognize this tacitly, as she avoids claiming outright that plants are conscious and instead poses rhetorical questions: “Why don’t we want to include plants in the circle of us?” (p. 90) She also confesses that the view of plants as conscious is something “spiritual” and that it must be based on experiences rather than arguments (p. 30). If she wants to take this “spiritual” view based on experiences, I have no great objection, but she cannot then make claims like her assertion that the difference between her and vegetarians is that she is informed and vegetarians are not (p. 16). People can have all kinds of experiences of things real or illusory; no experience of itself makes one informed. She also poses a false dichotomy, implying that you either regard plants as conscious or you look at them as dead matter to be manipulated (p. 92).

Apart from decrying the damage that agriculture wreaks on insentient objects such as soil and plants, Keith also mentions that the conversion of wild lands to agricultural fields ends up killing animals, either directly by activities such as tilling the soil or indirectly by depriving them of their homes. She apparently doesn’t draw a moral distinction between the killing of animals in such instances and the deliberate killing of animals for food, and it seems that she therefore doesn’t draw a moral distinction between actions whose aim is not violence but which can be expected to result in violence and actions whose deliberate aim is violence. Failing to make such a distinction leads to absurdities, though. For example, if one sets up a legal system, no matter how excellent it may be, the imperfections of the real world will be such that it will still result in acts of injustice such as the imprisonment or death of innocent people. But this cannot be seriously equated with the deliberate imprisonment and execution of innocent people. Likewise, if one goes to war, no matter how careful one is about preventing harm to civilians, civilians will be injured and killed. This cannot be equated with the deliberate murder of civilians, though.

In order to provide an additional argument to establish that agriculture requires death, Keith offers a narrative about her gardening hobby in the days when she was a vegan, leading to a description of her discovery that modern fertilizers can contain blood meal and bone meal. She seems to think that this will come as a disturbing revelation to vegetarians and vegans everywhere and cause them as much cognitive dissonance as it did her. However, I have been a vegetarian for over twenty-two years and I have known about this since the beginning of my vegetarianism. I also don’t see it as a threat to my vegetarian principles. The reason why Keith thinks that the use of blood meal and bone meal in fertilizers threatens vegetarianism is that she attributes the views that she had when she was a vegan to all people who are vegetarians for moral reasons, seeing them as utopians trying to change the world and remove all death from it (pp. 14, 16, 18). This is really setting up a straw man, though, for I have never seen my vegetarianism as part of a project to remove death from the world. My principle has always been that if it isn’t necessary to kill animals for food, clothing, medicine, or other means of survival, then it is wrong to do so, and when it is necessary to kill them, then it should only be done to the extent that it is necessary and in a manner that inflicts as little suffering on the animal as possible. If this principle were enacted worldwide, then I think it is likely that in some parts of the world people would still kill animals for food and clothing. Speculating about such a situation is mainly of theoretical interest, though, for by following this principle I am not committed to the attempt to establish it as a law followed by all people.

Given that vegetarianism need not be a utopian movement to remove all death from the world, it isn’t clear how the use of blood meal and bone meal to grow plants is supposed to threaten vegetarianism. If plants are grown with blood meal, this does not mean that they are themselves blood meal in anything other than a metaphorical sense. If we want to maintain that the plants are blood meal, then we can also assert that the cow’s blood is actually grass, so there is no problem. But let us be serious. Keith’s point seems to be that the production of plant food requires the killing of animals, although she admits that plants can be fertilized with manure, compost, or artificial fertilizers as well. My admittedly cursory research on the subject indicated that fertilization with bone and blood meal didn’t occur on a large scale until the Industrial Revolution, when large amounts of slaughterhouse by-products could be obtained, processed, and transported. It could be that in some parts of the world agricultural methods would require fertilizing the soil with the by-products of animal slaughter, but as a vegetarian I need not aim at a utopia in which all death is eliminated. It might be argued that by eating plants fertilized with blood and bone meal one is indirectly supporting the kind of intensive meat production called “factory farming”. In fact, it is hard to avoid supporting this industry in our culture. For example, books can have glue in the spine made with gelatin, soap can be made with fish oils, and the foam in fire extinguishers can be made with slaughterhouse blood. But buying books and washing with soap is not comparable to supporting the industry by eating factory-farmed meat, for it is unlikely that the industry could survive as such if it only sold by-products.

The problem doesn’t seem to be that Keith has no compassion, but rather that she suffers a great deal from her compassion. Describing her life as a vegan, Keith recounts an episode in which she lifted a rock in her garden to find that she had disturbed an ant colony, and she confesses that she had trouble holding back her tears as she watched the worker ants scurrying to carry the larvae away. She also tells the story of her attempts to protect her garden from slugs without killing them and how she went so far as to put them in buckets and drive them to a nearby forest, all the while trying to avoid the nagging thought that they would die in the woods anyway. She says that she had trouble eating seeds and nuts because she felt she was eating a plant’s “babies”. She mentions that she once contemplated becoming a breatharian, a person who supposedly utilizes esoteric practices to live on breath and sunlight alone. She presents her current view as an “adult” one that comes to accept the necessity of death, but my impression is that she still hasn’t come to terms with death. I am not opposed to the killing of animals for food in every circumstance, but I would say that in the cases in which it is acceptable, killing an animal is an unfortunate but understandable necessity—the animal doesn’t offer itself to be killed. Keith, it seems to me, couldn’t bear to accept this view of the world as a place in which survival may depend on “domination” and “exploitation”. She needed to view the killing of an animal as part of a beautiful compact that the animal had entered into, one in which the animal allows us to kill it if we agree to become prey ourselves at some point (e.g. pp. 23-24, 271). By “becoming prey” Keith apparently means that we will die, be buried, and become part of the soil. She resents not being allowed a “sky burial”, meaning that it is currently illegal to dispose of corpses by exposing them to the elements (p. 24).

Keith implies that her “animist ethic” is the same as the worldview of “indigenous cultures”. At one point she identifies it with the worldview of the ancient Mayans (p. 5), which is strange, as the Mayans are mainly famous for having founded a civilized, agricultural society in the New World. I doubt that her animism has much to do with the religious ideas and practices of Native American cultures, though. The sense of relatedness to plants and animals described in accounts of many indigenous American cultures has more to do with the totemism of tribal systems than with an ethic of leaving a light ecological footprint. Some have theorized that the extinction of New World megafauna at the end of the last ice age was a result of the hunting practices of the newly arrived Clovis humans (although this theory remains controversial). Somewhat less controversial is the evidence of hunting practices at buffalo jumps, which appears to upset the view that Native Americans only killed as many animals as they could use. Whatever the case may be, these archaeological observations would not imply any moral condemnation of Native Americans, who like all other people developed methods for surviving in their environments as best they could. It is absurd to consider such observations racist, as some have done, and one is not doing Native Americans any favours by romanticizing them to serve one’s own ends.

The goal of returning to a pre-agricultural society seems to have a strange contradiction at its core. Keith’s arguments about change coming through activism rather than education notwithstanding, this goal rests on a view of humans as capable of conceiving an ideal type of society and remaking their societies through the use of reason and free will. Such a view is a relatively recent product of modernity and therefore a product of the kinds of society that Keith and her ilk say they oppose. Hunter-gatherer and pastoral societies are based on institutions such as tribal systems established by tradition and preserved as sacred. The notion of a revolutionary remaking of society from the ground up would make no sense in such a context, and so it isn’t clear how it could be said to reflect the ethos of hunter-gatherers.

Another main objection I have to Keith’s book is that many of her arguments commit the genetic fallacy. She cites a theory which suggests that the switch to a diet of meat made it possible for hominids to develop the type of brains characteristic of modern humans, implying that if we don’t eat meat now we will not be fully human. Yet even if humans developed certain traits by eating meat, this does not mean they must now eat meat to maintain those traits. Furthermore, it should be noted that the theory Keith cites is being debated by anthropologists and that there are rival theories that Keith does not mention. A relatively recent overview of research on Paleolithic and Neolithic modes of subsistence is found in this article. She also mentions that the Neolithic Revolution and the beginnings of agriculture first resulted in a decline in health in humans, implying that lifestyles in modern agricultural societies will necessarily also result in worse health than that enjoyed by Paleolithic humans. This conclusion doesn’t follow, for technology and knowledge in agricultural societies today make possible very different diets and lifestyles from those of agriculturalists in the Neolithic.

We should be skeptical of arguments that appeal to human evolution to draw conclusions about what we are “meant” to do. Vegetarians have often used this kind of argument to conclude that we aren’t meant to eat meat, and it is the type of argument Keith relies on in many cases to arrive at the opposite conclusion. The study of the human body as it exists today is the most useful method for determining what kinds of diets are best for humans now. The study of the fragmentary record of human evolution in the Paleolithic and Neolithic periods is primarily suited to explaining why the human body is the way it is.

The medical experts I have spoken to regard diets like the Atkins Diet and the Paleolithic Diet as fad diets. After the low-fat diet craze, when people found that they were not losing weight by eating low-fat treats loaded with sugar, the pendulum of fashion swung towards diets like the Atkins Diet, high in fat and low in carbohydrates. Traditional medical advice is less exciting: losing weight is still fundamentally a matter of burning more calories than one takes in, and the recommended diet is a balanced one low in saturated fats and simple carbohydrates. Meat can be a part of such a diet, but it need not be. The choice to be vegetarian simply requires ensuring that one gets enough of certain minerals, such as iron, and vitamins, such as B12. Agriculture and civilization may have made it possible for people to lead unhealthy lifestyles, but a person living in a civilized agricultural society today does not have to embrace such a lifestyle.

In the fourth chapter of her book, Keith tries to establish that a vegetarian diet is necessarily unhealthy. In this chapter, however, I found questionable assertions and evidence of very sloppy research. At one point she responds to a statistic commonly given by vegetarians, which indicates that Seventh Day Adventists, whose religion recommends vegetarianism, live on average seven years longer than the general population. Keith makes a fair point when she notes that this could be the result of other lifestyle choices, for Seventh Day Adventism also recommends exercise and abstinence from alcohol and tobacco. She then claims that one has to compare Seventh Day Adventists to a group that has a similar lifestyle but consumes meat, asserts that Mormons have the same lifestyle with the exception that they eat meat, and argues that Mormons live longer than Seventh Day Adventists. One would expect her to reference a scientific study that directly compared the two populations. Instead, she references a book, The Culprit and the Cure by Steven Aldana, whose author refers to two different studies. One study, by Gary Fraser and David Shavlik, observed Seventh Day Adventists and reportedly found that the men lived 7.3 years longer than the national average and the women 4.4 years longer. The other, by James E. Enstrom, observed a cohort of Mormons and reportedly found that the men lived 11 years longer than comparable American males and the women 7 years longer than comparable American females.

One of the main problems with Keith’s direct comparison of these two studies is that she assumes they measured the same variables and that the two cohorts had the same lifestyles, except that all the members of the Seventh Day Adventist cohort were vegetarian and all the members of the Mormon cohort were meat-eaters. In fact, in the study by Fraser and Shavlik referenced by Aldana, all of the cohort were white, non-Hispanic Seventh Day Adventists, but only 28% of the men and 31% of the women were vegetarian (defined as never eating meat or eating it less than once a month). Fraser and Shavlik conducted a multivariate analysis, comparing several variables such as exercise, abstinence from smoking, consumption of nuts, and vegetarianism, and the study found that each of the variables measured contributed to a higher life expectancy at age 30. When all covariates were at medium risk levels, vegetarianism accounted for 1.53 extra years in men and 1.51 extra years in women. Thus, contrary to Keith’s claims, the study was able to control for the influence of other healthy lifestyle choices and still show that vegetarianism contributes to a longer life. Aldana evidently understood this, for on page 6 of his book he claims that studies show being a vegetarian adds about 1.5 years of life. Although Keith herself references Aldana’s book, she evidently overlooked this.
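To make the logic of such a multivariate adjustment concrete, here is a minimal sketch in Python. It is not Fraser and Shavlik’s actual method (they applied survival analysis to real cohort data), and every number in it is invented for illustration; the point is only to show how entering several lifestyle factors into one model lets each coefficient estimate one factor’s contribution with the others held fixed.

```python
# A toy illustration of multivariate adjustment, NOT the actual
# Fraser-Shavlik analysis. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical lifestyle indicators (1 = yes, 0 = no).
exercise = rng.integers(0, 2, n)
non_smoker = rng.integers(0, 2, n)
vegetarian = rng.integers(0, 2, n)

# Invented "true" effects on years of life, plus random noise.
lifespan = (75
            + 2.0 * exercise
            + 3.0 * non_smoker
            + 1.5 * vegetarian
            + rng.normal(0, 5, n))

# Regress lifespan on all three factors at once; each coefficient
# then estimates that factor's effect with the others held fixed.
X = np.column_stack([np.ones(n), exercise, non_smoker, vegetarian])
coef, *_ = np.linalg.lstsq(X, lifespan, rcond=None)
print(f"estimated extra years from vegetarianism: {coef[3]:.2f}")
```

Because the factors are estimated jointly, a result like “1.5 extra years from vegetarianism” already has exercise and smoking accounted for, which is precisely the kind of control Keith claims the Adventist data lack.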

It should also be noted that when Aldana gives a figure comparing the life expectancy of Seventh Day Adventists to the national average, he is citing a figure that compares the life expectancy of all the Seventh Day Adventists studied with that of other white Californians. When offering this figure, Aldana misleadingly refers to the Seventh Day Adventist group as “these vegetarians”, when in fact only about 30% of the cohort were vegetarian. According to Fraser and Shavlik’s study, the life expectancy of vegetarian Seventh Day Adventist men was 9.5 years higher than that of other white Californian men, and the life expectancy of vegetarian Seventh Day Adventist women was 6.1 years higher than that of other white Californian women. When the other risk factors measured in the study were at intermediate levels, the vegetarian men and women had life expectancies 11.5 years and 9 years higher, respectively. Keith evidently did not check the original article by Fraser and Shavlik and simply repeated the figure offered by Aldana, assuming that it referred to vegetarians alone.

In the second study referred to by Aldana, James E. Enstrom observed a cohort of Mormons in California from 1980 to 1987 and compared the number of deaths among the Mormons during this period to the number of deaths in the population of white Californians, expressing the difference in terms of standardized mortality ratios. The study mentions that Mormonism recommends a lifestyle outlined in the Word of Wisdom, which says that meat should be eaten sparingly but does not forbid its consumption. However, meat consumption was not one of the variables measured in the study. Aldana describes the results of this study, but he also makes the aforementioned claim about the life expectancy of the cohort, which is not mentioned in the original article by Enstrom. As Aldana did not provide a citation for the numbers on life expectancy, I did some research and found that around 1997 Enstrom was quoted in popular publications as claiming that, in a follow-up study of the healthiest of the Mormons he originally observed, he had discovered that they had a life expectancy 8-11 years longer than that of the general population of white Americans. As far as I could see, Enstrom did not publish the results of this follow-up study in a peer-reviewed journal until 2007 (a couple of years after Aldana published his book), by which time he had observed the group from 1980 to 2004. In the 2007 paper he again claimed that the “optimum subgroup” of Mormons had a greater life expectancy than comparable white Americans, but there he indicated that the men’s life expectancy was 9.8 years higher than average and the women’s 5.6 years higher. Keith takes her figures straight from Aldana’s book, repeating numbers that seem to come from early reports rather than from the final results published in a peer-reviewed journal.
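For readers unfamiliar with the term, a standardized mortality ratio is simply the number of deaths observed in a cohort divided by the number that would be expected if the cohort died at the reference population’s age-specific rates. Here is a minimal sketch with invented figures (not Enstrom’s data):

```python
# Standardized mortality ratio (SMR): observed deaths divided by the
# deaths expected if the cohort had the reference population's
# age-specific death rates. All figures below are invented.

# (age band, person-years in the cohort, reference deaths per person-year)
strata = [
    ("30-49", 40_000, 0.002),
    ("50-69", 30_000, 0.010),
    ("70+",   10_000, 0.050),
]

observed_deaths = 600  # hypothetical count in the cohort

expected_deaths = sum(py * rate for _, py, rate in strata)  # 880.0
smr = observed_deaths / expected_deaths

print(f"expected deaths: {expected_deaths:.0f}")
print(f"SMR: {smr:.2f}")  # about 0.68 here; below 1 means fewer deaths than expected
```

An SMR below 1 means the cohort experienced fewer deaths than the reference population’s rates would predict, which is how a cohort like Enstrom’s Mormons, who died at lower rates than white Californians generally, would show up in such a comparison.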

People sympathetic to Keith’s ideas could object that I have merely picked out one of the statistics she gives and have yet to refute the countless others in her book (although another review that accuses Keith of substandard research can be found here). I will admit that I have had neither the time nor the motivation to check all of her citations. Checking citations and hunting down and reading the original research behind them is an extremely time-consuming process. This is especially the case when dealing with the style of writing employed by Keith and favoured by polemicists of all persuasions: she offers waves of statistics without providing an adequate account of their context, failing to describe the nature of the original research or the state of the debates in the relevant literature. I don’t believe that Keith’s supporters have checked all of her citations either. They like her conclusion, and so they assume that her arguments are sound and her research solid. I suspect we will find her sympathizers repeating her claims about Seventh Day Adventists and Mormons, citing her book for support and thereby appealing to a source even further removed from the original research papers. I have concluded that it isn’t worth my time to check all of her citations, for her thesis is utopian, her arguments are generally unsound, and the citations I did take the time to check revealed sloppy research. I should also mention that the tone of her writing is often vituperative and condescending and that her prose is consequently very irritating to read. I believe I’ve already devoted more time to her book than it deserves.